ChatGPT Vulnerability Exposed Underlying Cloud Infrastructure – SecurityWeek


A researcher found a way to exploit an SSRF vulnerability related to custom GPTs to obtain an Azure access token. 
A researcher has disclosed the details of a recently patched ChatGPT vulnerability that may have exposed some of the AI chatbot’s underlying cloud infrastructure.
Jacob Krut, a bug bounty hunter and security engineer at Open Security, discovered the vulnerability while creating a custom GPT, a personalized version of ChatGPT tailored to a specific purpose or area of expertise.
The researcher found the weakness in the ‘Actions’ section, where users define how the custom GPT can interact with external services via APIs. The feature relied on user-provided URLs that were not properly validated, allowing an attacker to conduct a server-side request forgery (SSRF) attack.
SSRF vulnerabilities can be exploited using specially crafted URLs to make unauthorized requests to internal network resources that the attacker would normally not be able to access. 
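To illustrate the kind of validation gap described above (this is not OpenAI's actual code), a minimal defensive check on a user-supplied Action URL might resolve the hostname and reject private, loopback, or link-local destinations, the addresses SSRF attackers typically target:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_action_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, or link-local
    addresses -- the internal targets an SSRF attacker aims for."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

# The Azure IMDS endpoint sits on a link-local address, so it is rejected:
print(is_safe_action_url("http://169.254.169.254/metadata/instance"))  # False
print(is_safe_action_url("http://localhost/admin"))                    # False
```

Note that a robust implementation would also need to guard against DNS rebinding and HTTP redirects, which can defeat a resolve-then-fetch check like this one.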
In the case of ChatGPT, Krut was able to exploit the vulnerability to query a local endpoint associated with the Azure Instance Metadata Service (IMDS), an Azure cloud platform component used for application configuration and management. 
The IMDS exposes a managed identity that authenticates the service to other Azure resources. By obtaining an access token for ChatGPT's IMDS identity, the researcher could have gained access to the underlying Azure cloud infrastructure used by OpenAI.
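Inside Azure, the metadata service listens on the link-local address 169.254.169.254 and issues OAuth tokens for a VM's managed identity; the endpoint and its mandatory `Metadata: true` header are documented by Microsoft. As a sketch of what an SSRF probe would need to reach, the token request can be built (but deliberately not sent) like this:

```python
from urllib import request

# Well-known, non-routable IMDS address inside Azure VMs,
# as documented by Microsoft.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

def build_imds_token_request() -> request.Request:
    """Build (but do not send) an IMDS token request. The service
    requires the 'Metadata: true' header on every call."""
    return request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})

req = build_imds_token_request()
print(req.full_url)
print(req.get_header("Metadata"))  # true
```

Because the address is link-local, such a request only succeeds from inside an Azure VM, which is why an SSRF flaw in a server-side URL fetcher is the attacker's way in.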
The vulnerability was reported to OpenAI through its bug bounty program on the Bugcrowd platform. The researcher said the vendor assigned it a ‘high severity’ rating and quickly patched it.
It’s unclear if a bug bounty has been paid out for the security hole. In May, OpenAI started offering up to $100,000 for critical vulnerabilities, but the average payout in the past three months has been less than $800, and the highest publicly listed reward since May was $5,000.


“This SSRF in ChatGPT’s Custom GPT Actions is a textbook example of how small validation gaps at the framework layer can cascade into cloud-level exposure and highlights the severity of this often-overlooked attack vector,” said Christopher Jess, senior R&D manager at application security firm Black Duck.
“SSRF has been in the OWASP Top 10 since 2021 because of precisely this potential blast radius: a single server-side request can pivot into internal services, metadata endpoints, and privileged cloud identities,” Jess added.
Related: ChatGPT Targeted in Server-Side Data Theft Attack
Related: Researchers Hack ChatGPT Memories and Web Search Features
Related: AI Sidebar Spoofing Puts ChatGPT Atlas, Perplexity Comet and Other Browsers at Risk
Related: ChatGPT Tricked Into Solving CAPTCHAs
Eduard Kovacs (@EduardKovacs) is the managing editor at SecurityWeek. He worked as a high school IT teacher before starting a career in journalism in 2011. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
Copyright © 2025 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.