People use AI for companionship much less than we’re led to believe
The abundance of attention paid to people turning to AI chatbots for emotional support, sometimes even striking up relationships with them, can make such behavior seem commonplace.
A new report from Anthropic, the maker of the popular AI chatbot Claude, reveals a different reality: people rarely seek out companionship from Claude, turning to the bot for emotional support and personal advice only 2.9% of the time.
“Companionship and roleplay combined comprise less than 0.5% of conversations,” the company highlighted in its report.
Anthropic says its study sought to unearth insights into the use of AI for “affective conversations,” which it defines as personal exchanges in which people talked to Claude for coaching, counseling, companionship, roleplay, or advice on relationships. Analyzing 4.5 million conversations that users had on the Claude Free and Pro tiers, the company said the vast majority of Claude usage is related to work or productivity, with people mostly using the chatbot for content creation.
That said, Anthropic found that people do use Claude more often for interpersonal advice, coaching, and counseling, with users most often asking for advice on improving mental health, personal and professional development, and studying communication and interpersonal skills.
However, the company notes that help-seeking conversations can sometimes turn into companionship-seeking in cases where the user is facing emotional or personal distress, such as existential dread or loneliness, or when they find it hard to make meaningful connections in their real life.
“We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship — despite that not being the original reason someone reached out,” Anthropic wrote, noting that extensive conversations (with more than 50 human messages) were not the norm.
Anthropic also highlighted other insights, such as how Claude itself rarely resists users’ requests, except when its programming prevents it from crossing safety boundaries, like providing dangerous advice or supporting self-harm. Conversations also tend to become more positive over time when people seek coaching or advice from the bot, the company said.
The report is certainly interesting — it is a useful reminder of just how much and how often AI tools are being used for purposes beyond work. Still, it’s important to remember that AI chatbots, across the board, are still very much a work in progress: they hallucinate, can readily provide wrong information or dangerous advice, and, as Anthropic itself has acknowledged, may even resort to blackmail.