Search Engines are Indexing ChatGPT Conversations! – Here is our OSINT Research
ChatGPT shared conversations are being indexed by major search engines, effectively turning private exchanges into publicly discoverable content accessible to millions of users worldwide.
The issue first came to light through investigative reporting by Fast Company, which revealed that nearly 4,500 ChatGPT conversations were appearing in Google search results.
The discovery was made using a simple but powerful Google dorking technique: searching for site:chatgpt.com/share followed by specific keywords.
This basic OSINT methodology exposed a treasure trove of supposedly private conversations, ranging from mundane queries about home renovations to deeply personal discussions about mental health, addiction struggles, and traumatic experiences.
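To make the technique concrete, such dorks combine the site: operator with a keyword or phrase. The queries below are illustrative examples of the shape described, not queries taken from the report:

```
site:chatgpt.com/share "cover letter"
site:chatgpt.com/share "my therapist"
site:chatgpt.com/share "api key"
```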
What makes this discovery particularly alarming is that users who clicked ChatGPT’s “Share” button likely expected their conversations to remain within a limited circle of friends, colleagues, or family members. Instead, these exchanges became searchable content indexed by the world’s most powerful search engines.
ChatGPT’s sharing feature, introduced in May 2023, allowed users to generate unique URLs for their conversations. When users clicked the “Share” button, they could create a public link and, crucially, had the option to check a box labeled “Make this chat discoverable,” which would allow it to appear in web searches.
While this required deliberate user action, many users appeared unaware of the broader implications of enabling this feature.
The mechanism was straightforward: once a conversation was marked as discoverable, search engine crawlers could index the content just like any other publicly accessible webpage.
The shared links followed a predictable URL structure (chatgpt.com/share/[unique-identifier]), making them easily discoverable through targeted search queries.
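For defenders auditing their own logs, paste sites, or internal documents for leaked share links, a minimal sketch like the following can flag URLs matching this structure. The UUID-style identifier format is an assumption inferred from the pattern above:

```python
import re

# ChatGPT share-link pattern. The UUID-style identifier is assumed from the
# URL structure described above; adjust if the real format differs.
SHARE_URL = re.compile(
    r"https?://(?:chatgpt\.com|chat\.openai\.com)/share/"
    r"[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}",
    re.IGNORECASE,
)

def find_share_links(text: str) -> list[str]:
    """Return every ChatGPT share URL found in a block of text."""
    return SHARE_URL.findall(text)

sample = "Notes: https://chatgpt.com/share/3f2a1b4c-0d5e-4f6a-8b7c-9d0e1f2a3b4c"
print(find_share_links(sample))
# ['https://chatgpt.com/share/3f2a1b4c-0d5e-4f6a-8b7c-9d0e1f2a3b4c']
```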
The Cybersecurity News team investigated how each major search engine was handling the indexed conversations. The included screenshots reveal striking differences in how the engines treat ChatGPT content (a small probe script follows the list below).
Google: By August 2025, Google had largely stopped returning results for ChatGPT shared conversations, showing “Your search did not match any documents” for most queries.
Bing: Microsoft’s search engine showed minimal results, surfacing only a small number of indexed ChatGPT conversations.
DuckDuckGo: Perhaps most surprisingly, DuckDuckGo continued to surface comprehensive results from ChatGPT conversations, ironically making the privacy-focused search engine the most effective gateway to supposedly private AI conversations.
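As a rough way to reproduce this comparison, the sketch below sends the same dork to DuckDuckGo’s HTML front end and counts result links. The endpoint and the result__a anchor class reflect the HTML version of the site at the time of writing and may change; treat both as assumptions, and respect each engine’s terms of service:

```python
import requests
from bs4 import BeautifulSoup

def ddg_result_count(query: str) -> int:
    """Count result links DuckDuckGo's HTML front end returns for a query."""
    resp = requests.get(
        "https://html.duckduckgo.com/html/",
        params={"q": query},
        headers={"User-Agent": "Mozilla/5.0 (OSINT research)"},
        timeout=10,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # "result__a" is the anchor class used by the HTML version of the site;
    # this is an assumption that may break if the markup changes.
    return len(soup.select("a.result__a"))

print(ddg_result_count('site:chatgpt.com/share "cover letter"'))
```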
For Open Source Intelligence (OSINT) researchers, this discovery represented an unprecedented opportunity. Security professionals quickly recognized that indexed ChatGPT conversations could provide “exactly what your audience struggles with” and “questions they’re too embarrassed to ask publicly”.
The conversations revealed authentic, unfiltered insights into human behavior, business strategies, and sensitive information that traditional OSINT methods might never uncover.
Cybersecurity experts noted that the exposed conversations included source code, proprietary business information, personally identifiable information (PII), and even passwords embedded in code snippets.
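This is the same class of exposure that data-loss-prevention tooling looks for. A minimal sketch of such a scan, using a few illustrative patterns (production rule sets are far more extensive), might look like this:

```python
import re

# Illustrative patterns only; real DLP rule sets are far larger.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded_secret": re.compile(
        r"(?i)\b(?:api[_-]?key|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_for_secrets(text: str) -> dict[str, list[str]]:
    """Map each pattern name to any matches found in the text."""
    hits = {name: rx.findall(text) for name, rx in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

snippet = 'password = "hunter2hunter2"  # sent to jane.doe@example.com'
print(scan_for_secrets(snippet))
```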
Research from Cyberhaven Labs found that 5.6% of knowledge workers had used ChatGPT in the workplace, with 4.9% providing company data to the platform.
Recognizing the severity of the privacy implications, OpenAI acted quickly to address the issue. On August 1, 2025, the company’s Chief Information Security Officer, Dane Stuckey, announced the removal of the discoverable feature: “We just removed a feature from ChatGPT that allowed users to make their conversations discoverable by search engines, such as Google”.
OpenAI characterized the feature as “a short-lived experiment to help people discover useful conversations,” but acknowledged that it “introduced too many opportunities for folks to accidentally share things they didn’t intend to”. The company also committed to working with search engines to remove already-indexed content from search results.
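Removing the discoverability toggle does not necessarily remove links that users already created, so users who shared conversations under the old feature can still audit their own exposure. A minimal sketch, assuming a previously shared URL (the identifier below is hypothetical), simply checks whether the link still resolves:

```python
import requests

def share_link_is_live(url: str) -> bool:
    """True if the shared conversation still resolves (HTTP 200)."""
    resp = requests.get(url, allow_redirects=True, timeout=10)
    return resp.status_code == 200

# Hypothetical identifier for illustration only.
url = "https://chatgpt.com/share/3f2a1b4c-0d5e-4f6a-8b7c-9d0e1f2a3b4c"
print("still live" if share_link_is_live(url) else "revoked or removed")
```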
This incident highlights a fundamental challenge in the AI era: the gap between user expectations and technical reality. Many users assume their interactions with AI chatbots are private, but features like sharing, logging, and model training can create unexpected pathways for data exposure.
While OpenAI has addressed this specific vulnerability, the incident reveals broader systemic issues about data handling, user consent, and the unintended consequences of AI integration.
For cybersecurity professionals, the finding underscores the need to monitor AI platforms closely and scrutinize how they handle user data.
For users, it serves as a reminder of a key rule for online safety: never enter information into AI systems that you wouldn’t want everyone to see.
As AI continues to permeate every aspect of our digital lives, incidents like this will likely become more common, making robust privacy frameworks and user education more critical than ever.