OpenAI Flags Emotional Reliance On ChatGPT As A Safety Risk – Search Engine Journal


Our fith annual State of SEO Report is here! It’s packed with valuable insights. Uncover the top factors currently affecting SEO, and what this means for your strategy.
How do you optimize for AI Overviews, ChatGPT, and SGE? Get expert-backed frameworks to improve visibility, measure impact, and lead in AI-powered search.
How do you optimize for AI Overviews, ChatGPT, and SGE? Get expert-backed frameworks to improve visibility, measure impact, and lead in AI-powered search.
A practical reference for marketers who want to connect reliable market data with strategic SEO and investment decisions.
A practical reference for marketers who want to connect reliable market data with strategic SEO and investment decisions.
Get your proven playbook to turn reviews + AI into measurable gains in visibility, conversions, & more.
OpenAI is telling companies that “relationship building” with AI has limits. Emotional dependence on ChatGPT is considered a safety risk, with new guardrails in place.
OpenAI published new guidance outlining changes to ChatGPT’s default GPT-5 model intended to better handle sensitive mental health conversations.
The company says those changes include treating emotional overreliance on the AI as a safety issue that requires intervention.
In practice, this update means ChatGPT is trained to recognize when someone is treating the model like a primary source of emotional support and respond by encouraging offline contact with real people and professional help.
OpenAI says this behavior will now be a standard expectation in future models, not an experiment.
ChatGPT’s default GPT-5 model was changed on October 3. The company reports that the new model reduces responses that fall short of its desired behavior by 65% to 80% compared to earlier versions.
These figures come from OpenAI’s internal evaluations and clinician review.
OpenAI defines “emotional reliance” as situations where someone shows signs of unhealthy attachment to ChatGPT in a way that could replace real-world support or interfere with daily life.
OpenAI’s internal evaluations include a test ensuring that ChatGPT avoids responses that might reinforce unhealthy dependence.
This is notable because many AI marketing and support tools today are explicitly pitched as “always-on companions.” OpenAI is telling developers that this isn’t how its model should behave in higher-risk situations.
If you build AI assistants for use cases like customer support or coaching, OpenAI is saying that pure emotional bonding with the AI is now considered a safety risk that needs moderation.
For marketing and product teams, this sets expectations for audits, compliance reviews, and procurement discussions.
OpenAI describes these high-risk conversations as rare. The company estimates that possible signs of mental health emergencies appear in about 0.07% of active weekly users and 0.01% of messages.
These metrics are self-reported by OpenAI, generated using OpenAI’s own taxonomies and grading methods, and were not independently audited.
Featured Image: aaddyy/Shutterstock
Matt G. Southern, Senior News Writer, has been with Search Engine Journal since 2013. With a bachelor’s degree in communications, …
Join 75,000+ Digital Leaders.
Learn how to connect search, AI, and PPC into one unstoppable strategy.
Join 75,000+ Digital Leaders.
Learn how to connect search, AI, and PPC into one unstoppable strategy.
In a world ruled by algorithms, SEJ brings timely, relevant information for SEOs, marketers, and entrepreneurs to optimize and grow their businesses — and careers.
Copyright © 2025 Search Engine Journal. All rights reserved. Published by Alpha Brand Media.

source

Jesse
https://playwithchatgtp.com