AI-aided malvertising: How chatbots can help spread scams – WeLiveSecurity


Award-winning news, views, and insight from the ESET security community
Award-winning news, views, and insight from the ESET security community
Digital Security
Cybercriminals have tricked X’s AI chatbot into promoting phishing scams in a technique that has been nicknamed “Grokking”. Here’s what to know about it.
Phil Muncaster
13 Oct 2025  •  , 5 min. read
We’ve all heard about the dangers posed by social engineering. It’s one of the oldest tricks in the hackers’ book: psychologically manipulate a victim into handing over their information or installing malware. Up until now, this has been done mainly via a phishing email, text or phone call. But there’s a new tool in town: generative AI (GenAI).
In some circumstances, GenAI and large language models (LLMs) embedded into popular online services could be turned into unwitting accomplices for social engineering. Recently, security researchers warned of exactly this happening on X (formerly Twitter). If you hadn’t considered this a threat up to now, it’s time to treat any output from public-facing AI bots as untrusted.
AI is a social engineering threat in two ways. On the one hand, LLMs can be corralled into designing highly convincing phishing campaigns at scale, and creating deepfake audio and video to trick even the most skeptical user. But as X found out recently, there’s another, arguably more insidious threat: a technique that has been nicknamed “Grokking” (it’s not to be confused with the grokking phenomenon observed in machine learning, of course.)
In this attack campaign, threat actors circumvent X’s ban on links in promoted posts (designed to fight malvertising) by running video card posts featuring clickbait videos. They are able to embed their malicious link in the small “from” field below the video. But here’s where the interesting bit comes in: The malicious actors then ask X’s built-in GenAI bot Grok where the video is from. Grok reads the post, spots the tiny link and amplifies it in its answer.
 
Why is this technique dangerous?
This isn’t just an X/Grok problem. The same techniques could theoretically be applied to any GenAI tools/LLMs embedded into a trusted platform. It highlights the ingenuity of threat actors in finding a way to bypass security mechanisms. But also the risks users take when trusting the output of AI.
Prompt injection is a type of attack in which threat actors give GenAI bots malicious instructions disguised as legitimate user prompts. They can do this directly, by typing those instructions into a chat interface. Or indirectly, as per the Grok case.
In the latter, the malicious instruction is usually hidden in data that the model is then encouraged to process as part of a legitimate task. In this case, a malicious link was embedded in video metadata under the post, then Grok was asked “where is this video from?”.
Such attacks are on the rise. Analyst firm Gartner claimed recently that a third (32%) of organizations had experienced prompt injection over the past year. Unfortunately, there are many other potential scenarios in which something similar to the Grok/X use case could occur.
Consider the following:
There really is an unlimited number of variations on this threat. Your number one takeaway should be never to blindly trust the output of any GenAI tool. You simply can’t assume that the LLM has not been tricked by a resourceful threat actor.
They are banking on you to do so. But as we’ve seen, malicious prompts can be hidden from view – in white text, metadata or even Unicode characters. Any GenAI that searches publicly available data to provide you with answers is also vulnerable to processing data that is “poisoned” to generate malicious content.
Also consider the following:
Embedded AI tools have opened up a new front in the long-running war against phishing. Make sure you don’t fall for it. Always question, and never assume it has the right answers.
eset-ai-native-prevention
Sign up for our newsletters
Digital Security
The hidden risks of browser extensions – and how to stay safe
Digital Security
The hidden risks of browser extensions – and how to stay safe

Digital Security
Rogue CAPTCHAs: Look out for phony verification pages spreading malware
Digital Security
Rogue CAPTCHAs: Look out for phony verification pages spreading malware

Digital Security
Don’t let dormant accounts become a doorway for cybercriminals
Digital Security
Don’t let dormant accounts become a doorway for cybercriminals

Malware
Will ChatGPT start writing killer malware?
Digital Security
Meet “AI”, your new colleague: Could it expose your company’s secrets?
Award-winning news, views, and insight from the ESET security community

source

Jesse
https://playwithchatgtp.com