Thousands of ChatGPT conversations are appearing in Google search results – Computing UK
The issue stems from users clicking the “Share” button on ChatGPT to send conversations to friends, family, or colleagues via URL, unaware that some of these shared chats have become publicly searchable.
As uncovered by Fast Company, a simple Google site search using part of the link structure for shared ChatGPT chats yields nearly 4,500 results, many of which disclose personal stories involving addiction, trauma, or mental health crises.
Although Google's indexing limits mean this figure likely understates the true total, the scope is already alarming.
In some conversations, users describe being survivors of abuse, experiencing suicidal ideation, or struggling with mental illnesses.
Others include discussions about childhood behavioural disorders or even fears about AI surveillance.
While ChatGPT doesn’t show users’ names, many identify themselves unintentionally by including detailed personal context, such as job titles, locations, or specific life events.
The privacy leak is particularly disturbing given that nearly half of Americans surveyed in the past year said they turned to large language models like ChatGPT for psychological support.
Of those respondents, three-quarters sought help managing anxiety, two-thirds looked for advice on deeply personal issues, and nearly 60% used the chatbot to cope with depression.
When users opt to share a ChatGPT conversation, OpenAI generates a unique URL. What many didn’t realise is that this sharing tool also includes an optional setting to allow the link to be indexed by search engines.
OpenAI says the search visibility setting is off by default and must be turned on manually.
“ChatGPT conversations are private unless you choose to share them,” an OpenAI spokesperson told Fast Company.
“Creating a link to share your chat also includes an option to make it visible in web searches. Shared chats are only visible in Google search if users explicitly select this option.”
But critics say that’s not good enough. The interface design, they argue, doesn’t clearly communicate the consequences of sharing or the scope of exposure involved.
In response to the backlash, OpenAI has removed the opt-in feature that allowed chats to become discoverable via Google.
However, conversations that were already indexed remain visible in search results unless users delete them individually.
OpenAI isn’t alone in facing criticism. Meta came under fire earlier this year when user queries to its AI were broadcast in a public feed inside its apps.
Those, too, led to unexpected disclosures of private information.
The latest revelations only add to broader anxieties about tech firms’ attitudes toward data stewardship.
Further complicating the picture, OpenAI CEO Sam Altman warned last month that users should refrain from entering their most personal data into ChatGPT, since the company could be legally required to hand over chat logs.
Due to ongoing litigation with The New York Times over copyright concerns, OpenAI has paused its regular deletion cycle for chat histories.
US courts have compelled the company to retain all user data until the matter is resolved. This means everything users have typed, including trade secrets, legal issues, and intimate personal disclosures, is now available to OpenAI's internal legal teams and potentially subject to future discovery orders.