Atlas browser exploit lets attackers hijack ChatGPT memory – csoonline.com

Days after cybersecurity analysts warned enterprises against installing OpenAI’s new Atlas browser, researchers have discovered a vulnerability that allows attackers to infect systems with malicious code, granting themselves access privileges, or deploy malware. The development raises immediate questions about the enterprise readiness of AI-native browsers.
The Atlas browser has come under scrutiny after researchers at LayerX Security revealed that attackers could exploit the flaw to inject malicious instructions directly into a user’s ChatGPT memory and potentially execute remote code.
To prevent attackers from exploiting the vulnerability, LayerX has reported the exploit to OpenAI without sharing any additional technical information.
The exploit works in five steps. In the first step, a user logs in to ChatGPT, and an authentication cookie or token is held in their browser. In the next step, the user clicks on a malicious link that leads them to a compromised web page, Or Eshed, co-founder and CEO of LayerX, explained in a blog post.
In the next step, the malicious page invokes a Cross-Site Request Forgery (CSRF) request to take advantage of the user’s pre-existing authentication into ChatGPT. In the fourth step, the CSRF exploit then injects hidden instructions into ChatGPT’s memory, without the user’s knowledge, thereby tainting the core LLM memory.
In the fifth step, the tainted memories are invoked when the user queries ChatGPT, allowing deployment of malicious code that can give attackers control over systems or code.
ChatGPT’s memory is a useful tool designed for the AI chatbot to remember details such as users’ queries, chat, and activities, preferences, style notes, and respond with personalized and relevant information.
“Memory is account‑level and persists across sessions, browsers, and devices, so a single successful lure follows the user from home to office and from personal to corporate contexts,” said Amit Jaju, global partner/senior managing director – India at Ankura Consulting. “In BYOD or mixed‑use environments, that persistence re‑triggers risky behaviors even after a reboot or browser change, expanding blast radius beyond a single endpoint. This is especially concerning where personal ChatGPT accounts are used for work tasks.”
Jaju added that adoption inside enterprises is currently very low. Atlas just launched, is macOS‑only, and enterprise access is off by default. So, exposure is limited to pilots and unsanctioned installations. But business workspaces have it available by default, so spillover into work use is plausible.
Detecting a memory-based compromise in ChatGPT Atlas is not like hunting for traditional malware. There are no files, registry keys, or executables to isolate. Instead, security teams need to look for behavioral anomalies such as subtle shifts in how the assistant responds, what it suggests, and when it does so.
“There are clues, but they sit outside the usual stack. For example, an assistant that suddenly starts offering scripts with outbound URLs, or one that begins anticipating user intent too accurately, may be relying on injected memory entries. When memory is compromised, the AI can act with unearned context. That should be a red flag,” said Sanchit Vir Gogia, CEO and chief analyst at Greyhound Research.
He added, from a forensic perspective, analysts need to pivot toward correlating browser logs, memory change timestamps, and prompt-response sequences. Exporting and parsing chat history is essential. SOC teams should pay close attention to sequences where users clicked on unknown links followed by unusual memory updates or AI-driven agent actions.
As this is not a plug-and-play detection problem, redemption and mitigation start with keeping Atlas disabled for the enterprise by default. In Business, it should be confined to tightly scoped pilots with non‑sensitive data. 
Jaju added that for monitoring, enterprises should add detections for AI‑suggested code, fetching remote payloads, unusual egress after ChatGPT usage, and session‑riding behaviors in SaaS. He also suggested enabling web filtering on newly registered or uncategorized domains.
The moment an Atlas user’s memory is compromised, the threat resides in the cloud-bound identity, not in any one machine. That is why the response must start with the account. Memory must be cleared. Credentials should be rotated. All recent chat history should be reviewed for signs of tampering, hidden logic, or manipulated task flow, noted Gogia.
Along with identifying the vulnerability, LayerX claimed that ChatGPT Atlas is also not equipped to stop phishing attacks. In the tests conducted by the company, ChatGPT Atlas had a failure rate of over 94%. Of the total 103 in-the-wild attacks, 97 attacks went through successfully.
The results were not very promising for other AI browsers, which the company tested last month. Perplexity’s Comet and Genspark were only able to stopped only 7% phishing attacks, while only Arc browser’s Dia was able to stop around 46% attacks. The traditional browsers, such as Edge and Chrome, on the other hand, are relatively well equipped and were able to stop about 50% of phishing attacks using their out-of-the-box protections.
Nidhi Singal is an independent journalist reporting on how emerging technologies reshape economies, companies, and countries. She has over 18 years’ experience covering everything from mobile telecommunications to enterprise technologies. She has also written for India Today, Business Today and Fortune India.
Sponsored Links

source

Jesse
https://playwithchatgtp.com