Cybercriminals Develop Malicious Versions of OpenAI's ChatGPT

Cybercriminals have quickly created their own versions of OpenAI's ChatGPT chatbot, potentially enhancing their ability to write malware and phishing emails that trick people into revealing personal information. Criminals on dark-web forums and marketplaces have been promoting two large language models (LLMs) they claim to have developed themselves. These chatbots, known as WormGPT and FraudGPT, generate text responses to user prompts, but unlike legitimate LLMs they are marketed for illegal activities.
Security researchers have questioned whether these chatbots are what they claim to be, given how little cybercriminals can be trusted; it is possible the sellers are simply trying to scam one another. Nevertheless, the emergence of these malicious chatbots shows criminals capitalizing on the current hype around generative AI.
WormGPT is a chatbot whose seller claims it offers an unlimited character count and code formatting, making it particularly useful for phishing scams. In one test, the system generated a persuasive and cunning email for a business email compromise scam. The developer claims that WormGPT is based on GPT-J, an open-source language model released by the AI research group EleutherAI in 2021.
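If the GPT-J claim is accurate, it underscores how low the barrier to entry is: GPT-J's weights are openly published on Hugging Face, so anyone can download and fine-tune them. Below is a minimal sketch of loading that public checkpoint with the Hugging Face transformers library; the prompt and generation settings are illustrative only, and nothing here reflects WormGPT's actual setup.

```python
# Minimal sketch: loading the openly released GPT-J checkpoint.
# Requires `pip install transformers torch`; the 6B-parameter model needs
# roughly 24 GB of RAM in float32. This only demonstrates that the claimed
# base model is publicly available; it does not reproduce WormGPT.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # public checkpoint on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative, benign prompt.
inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```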
FraudGPT, on the other hand, is advertised as able to create undetectable malware, identify leaks and vulnerabilities, and craft text for online scams. Its developer has promoted the product on dark-web forums and Telegram channels, including a video demonstration of the chatbot generating a scam email.
While it is difficult to verify the authenticity of these chatbots, user feedback suggests that WormGPT, at least, may be a genuine tool. Even so, there is no evidence that either system surpasses the capabilities of existing commercial LLMs.
The emergence of malicious chatbots is not surprising, as cybercriminals are constantly seeking to incorporate new technologies into their illegal activities. Law enforcement agencies have already warned about the potential use of generative AI by cybercriminals, as it can facilitate fraud, impersonation, and social engineering attacks.
Scammers have already taken advantage of the popularity of generative AI systems by tricking individuals into downloading password-stealing malware through fake advertisements. Additionally, some scammers have accidentally included generated text in their messages, exposing their fraudulent activities.
As the chatbot landscape evolves, security measures will need to keep pace with the malicious use of these technologies.
