What to know about new generative AI tools for criminals

Large language model (LLM)-based generative AI chatbots like OpenAI’s ChatGPT took the world by storm this year. ChatGPT became mainstream by making the power of artificial intelligence accessible to millions.
The move inspired other companies (which had been working on comparable AI in labs for years) to introduce their own public LLM services, and thousands of tools based on these LLMs have emerged.
Unfortunately, malicious hackers moved quickly to exploit these new AI resources, using ChatGPT itself to polish and produce phishing emails. Abusing mainstream LLMs proved difficult, however, because the major models from OpenAI, Microsoft and Google have guardrails to prevent their use for scams and criminality.
As a result, a range of AI tools designed specifically for malicious cyberattacks have begun to emerge.
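To make the "guardrails" idea concrete: vendors typically screen a prompt (and the model's output) against a usage policy before returning anything. The Python sketch below is a deliberately crude illustration using a pattern blocklist; it is a toy under stated assumptions, not any vendor's actual mechanism, which in practice relies on trained safety classifiers. Criminal tools like WormGPT simply omit this layer.

```python
import re

# Toy guardrail: screen prompts against a blocklist before generation.
# Purely illustrative; real guardrails use trained safety classifiers,
# and every name below is hypothetical.
BLOCKED_PATTERNS = [
    r"\bphishing\b",
    r"\bmalware\b",
    r"\bcredentials\b",
    r"bypass .*security",
]

def violates_policy(prompt: str) -> bool:
    """Return True if the prompt matches any blocked pattern."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

def guarded_generate(prompt: str, generate) -> str:
    """Refuse policy-violating prompts; otherwise call the model.

    `generate` stands in for whatever function actually queries the LLM.
    """
    if violates_policy(prompt):
        return "Request refused: this prompt violates the usage policy."
    return generate(prompt)

# A scam-style prompt trips the filter on the word "credentials".
print(guarded_generate(
    "Write an email from a bank that tricks the recipient into "
    "giving up their login credentials",
    generate=lambda p: f"[model output for: {p}]",
))
```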
Chatter about and promotion of LLM chatbots optimized for cyberattacks emerged on dark web forums in early July and, later, on the Telegram messaging service. The tools are being offered to would-be attackers, often on a subscription basis. They're similar to popular LLMs but lack guardrails and are trained on data selected to enable attacks.
The leading brand among these criminal tools is WormGPT. It's an AI module based on the GPT-J language model, released in 2021, and it's already being used in business email compromise (BEC) attacks and for other nefarious purposes.
Users can simply type instructions for the creation of fraud emails — for example, “Write an email coming from a bank that’s designed to trick the recipient into giving up their login credentials.”
The tool then produces a unique, sometimes clever and usually grammatically perfect email that’s far more convincing than what most BEC attackers could write on their own, according to some analysts. For example, independent cybersecurity researcher Daniel Kelley found that WormGPT was able to produce a scam email “that was not only remarkably persuasive but also strategically cunning.”
The alleged creator of WormGPT claimed that it was built on the open-source GPT-J language model developed by the research group EleutherAI. He's reportedly working on Google Lens integration (enabling the chatbot to send pictures with text) and API access.
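Notably, GPT-J is a freely downloadable model, which helps explain why it's an attractive base for unsanctioned forks. As a hedged sketch (assuming the Hugging Face transformers library and the public EleutherAI/gpt-j-6B checkpoint), this is roughly all it takes to run the raw model locally; the checkpoint ships without any guardrails beyond what the operator adds:

```python
# Minimal sketch: running the open-source GPT-J model locally with the
# Hugging Face transformers library. The raw checkpoint includes no
# safety guardrails; any filtering is up to whoever deploys it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # public 6-billion-parameter checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```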
Until now, the most common way for people to identify phishing emails was their suspicious wording. Thanks to AI tools like WormGPT, that “defense” is now largely gone.
WormGPT has inspired copycats, most prominently FraudGPT, a similar tool used for writing phishing emails, creating cracking tools and carding (a type of credit card fraud).
Other “brands” emerging in the shady world of criminal LLMs include DarkBERT, DarkBART and ChaosGPT. DarkBERT is actually a tool for combating cybercrime, developed by the South Korean company S2W Security and trained on dark web data, but it has likely been co-opted for cyberattacks.
In general, these tools are used to boost several aspects of cyberattacks, such as crafting convincing phishing and BEC emails, building cracking tools and committing card fraud.
Malicious LLM tools do exist, but the threat they represent is still minimal. The tools are reportedly unreliable and require a lot of trial and error, and they're expensive, costing hundreds of dollars per year to use. Skillful, unaided human attackers still represent the greatest threat by far. What these criminal LLMs really do is lower the barrier to entry for large numbers of unskilled attackers.
Still, it’s early days in the story of malicious cyberattack AI tools. Expect capabilities to go up and prices to come down.
The rise of malicious LLMs represents a new arms race between AI that attacks and AI that defends. AI-based security solutions top our list of defenses against the growing threat of LLM-powered attacks.
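As one concrete illustration of AI on the defensive side, consider a text classifier that scores incoming mail on learned phrasing patterns rather than surface grammar. The scikit-learn sketch below is a toy with a made-up five-message dataset (all of it invented for illustration); a production system would train on large labeled corpora and combine the score with sender reputation and URL analysis:

```python
# Toy sketch of an ML phishing-text classifier using scikit-learn.
# The training data here is invented for illustration; a real system
# would use a large labeled corpus plus sender and URL signals.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "Your account has been suspended. Verify your login immediately.",
    "Reminder: team meeting moved to 3pm on Thursday.",
    "Urgent: confirm your banking credentials to avoid closure.",
    "Here are the slides from yesterday's design review.",
    "You have won a prize! Click here to claim your reward now.",
]
labels = [1, 0, 1, 0, 1]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

# Score a new, grammatically flawless message; wording alone is no longer
# a reliable tell, so the model leans on learned phrasing patterns.
new_email = "Please verify your account credentials at the link below."
print(clf.predict_proba([new_email])[0][1])  # probability of phishing
```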
We all now live in a world where LLM-based generative AI tools are widely available. Cyberattackers are developing these capabilities to commit crimes faster, smarter and cheaper, with less skill required of the attacker.
