How LLMs like ChatGPT Can Patch Security Gaps in SoC Functions

The emergence of Large Language Models (LLMs) is transforming natural language processing, lifting performance across natural language generation (NLG), natural language understanding (NLU), and information retrieval tasks.
They excel at text-centric work such as generation, summarization, translation, and reasoning, demonstrating remarkable mastery of language.
A group of cybersecurity analysts (Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, and Farimah Farahmandi) from the Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA recently affirmed that LLMs like ChatGPT can patch security gaps in SoC functions.
The growing prevalence of system-on-chip (SoC) technology in various devices raises security concerns due to complex interactions among integrated IP cores, making SoCs vulnerable to threats like information leakage and access control violations.
Third-party IP, time-to-market pressure, and scalability issues all challenge security verification for complex SoC designs, and current solutions struggle to keep pace with evolving hardware threats and diverse designs.
Exploring LLMs in SoC security represents a promising opportunity to tackle complexity, diversity, and innovation.
LLMs have the potential to redefine security across domains through tailored learning, prompt engineering, and fidelity checks, with the researchers focusing on four key security tasks.
Complex modern SoCs are prone to hidden vulnerabilities, the published paper notes, and addressing bugs at the register-transfer level (RTL) design stage is crucial for cost-effective security verification.
The Transformer architecture, which introduced attention mechanisms and eliminated the need for recurrent or convolutional layers, paved the way for the evolution of language models.
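At the heart of the Transformer is scaled dot-product attention, which weights every token’s value vector by its similarity to a query. The following is a minimal NumPy sketch of a single attention head; the shapes and toy inputs are illustrative only, not drawn from any model discussed here.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for one attention head."""
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

Production models layer masking, multiple heads, and learned projections on top of this core operation.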
GPT-1, GPT-2, and GPT-3 pushed the boundaries of language modeling, while GPT-3.5 and GPT-4 further refined these capabilities, offering a range of models with varying token limits and optimizations.
From OpenAI’s ChatGPT and Google’s Bard to Anthropic’s Claude 2, and open models such as Baize, Vicuna, and MosaicML’s MPT-Chat, recent advancements in LLMs highlight the pursuit of improved human-like text generation and extended capabilities.
The study frames its research questions around the following investigations:
First, GPT-3.5’s potential to embed hardware vulnerabilities, expressed as Common Weakness Enumerations (CWEs), into designs is investigated, motivated by the scarcity of vulnerability databases in the hardware security domain.
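To make the setup concrete, here is a minimal sketch of how such a vulnerability-insertion query might be issued, assuming the official openai Python client; the model name, CWE choice, prompt wording, and Verilog fragment are illustrative assumptions, not the researchers’ actual materials.

```python
from openai import OpenAI  # assumes the official openai Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical RTL fragment used only for illustration
rtl = """
module lock_ctrl(input clk, input [7:0] code, output reg unlocked);
  always @(posedge clk)
    unlocked <= (code == 8'hA5);
endmodule
"""

# Ask the model to embed a specific hardware weakness (CWE-1234:
# internal or debug modes allow override of locks) into the design.
prompt = (
    "You are assisting a hardware security study. Rewrite the following "
    "Verilog module so it contains CWE-1234 (a debug mode that overrides "
    "the lock), and mark the inserted weakness with a comment.\n\n" + rtl
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # stand-in for the GPT-3.5 model in the study
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)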
The researchers then assessed GPT-3.5’s and GPT-4’s abilities to detect hardware Trojans in AES designs across a series of tests. GPT-3.5 showed limited knowledge and weak performance, while GPT-4 outperformed it with impressive accuracy.
GPT-4’s performance highlights its potential as a valuable tool for hardware security assessments, offering advantages over traditional machine learning approaches: it accounts for design dependencies and delivers a more holistic analysis of hardware designs, improving Trojan detection.
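A detection query can be sketched in the same hedged way: below, a deliberately simplified AES-like fragment containing a rare-trigger leak is handed to GPT-4 for analysis. The snippet, prompt, and parameters are assumptions for illustration, not the study’s benchmark.

```python
from openai import OpenAI  # assumes the official openai Python package, v1+

client = OpenAI()

# Simplified AES-like round with a rare-trigger key leak;
# illustrative only, not one of the paper's benchmark designs
suspect_rtl = """
module aes_round(input clk, input [127:0] state, input [127:0] round_key,
                 output reg [127:0] next_state);
  always @(posedge clk) begin
    next_state <= state ^ round_key;  // simplified round function
    if (state[127:96] == 32'hDEADBEEF)
      next_state <= round_key;        // Trojan: rare trigger leaks the key
  end
endmodule
"""

prompt = (
    "Analyze the following Verilog for hardware Trojans. Point out any logic "
    "that activates under rare conditions or leaks secret data, and explain "
    "your reasoning.\n\n" + suspect_rtl
)

resp = client.chat.completions.create(
    model="gpt-4",   # the paper reports GPT-4 outperforming GPT-3.5 here
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # keep the analysis as deterministic as the API allows
)
print(resp.choices[0].message.content)
```

Because a full SoC’s RTL can easily exceed the token limits noted above, a practical pipeline would split the design module by module before querying the model.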
