University of Sussex study looks into AI therapy chatbots

Artificial intelligence (AI) therapy "works best" when patients "feel emotionally close to their chatbot", according to a study from the University of Sussex.
With more than one in three UK residents now using AI to support their mental health or wellbeing, according to Mental Health UK, the researchers say the study highlights both the key to effective chatbot therapy and the risks of "synthetic intimacy".
The research, published in the journal Social Science & Medicine on Tuesday, is based on feedback from 4,000 users of Wysa, a mental health app.
Ramakant Vempati, founder and president at Wysa, said the company "welcomed rigorous research" into how people experience AI support.
"When interpreted in the right clinical context, these insights show how AI can responsibly improve access to safe and engaging support, by augmenting existing systems, and not by replacing them," he added.
"Our self-help tool 'Everyday Mental Health by Wysa' has clear governance, safeguarding and is designed to work alongside human-led services."
Wysa says it has about 420,000 consumer app downloads in the UK.
The University of Sussex study reported that users commonly referred to the app as a "friend, companion, therapist and even occasionally partner".
Researchers added that users reported therapy was "more successful" when they developed emotional intimacy with their AI therapist.
However, the researchers also raised concerns about the growing phenomenon of synthetic intimacy – where people develop social, emotional or intimate bonds with AI.
University of Sussex Assistant Professor Dr Runyu Shi said: "Forming an emotional bond with an AI sparks the healing process of self-disclosure."
She added that an "extraordinary number of people said this worked for them".
"But we need to look at how AI can be used appropriately and when cases need to be escalated," Dr Shi said.
"The app we studied, Wysa, has been specially designed for mental health, but many people are using standard AI and we need to make sure people don't get stuck in a self-fulfilling loop, with dangerous perceptions going unchallenged."
Researchers said intimacy with AI is generated in a process described as a "loop", where users disclose personal information, then have an emotional response.
Users then develop feelings of gratitude, safety and freedom from judgement, according to the report.
Researchers said this could lead to positive changes in thinking and wellbeing, such as self-confidence and higher energy levels.
Over time this loop creates an intimate relationship, with human-like roles attributed to the app, researchers added.
University of Sussex Prof Dimitra Petrakaki said: "Synthetic intimacy is a fact of modern life now.
"Policymakers and app designers would be wise to accept this reality and consider how to ensure people get help when an AI witnesses users in serious need of clinical intervention."
Researchers said chatbots were "increasingly filling the gaps left by overstretched services".
When asked what safeguarding measures are in place for Wysa's consumer app, Mr Vempati said its risk detection is "developed by an in-house UK BACP qualified safety team to detect crisis language and signpost users for urgent support".
"Our AI conversations run within clinician approved and evidence-based protocols using only clinician created interventions to keep users and their data safe," he added.
Hamed Haddadi, professor of human-centred systems at Imperial College London, previously told the BBC that chatbots were like an "inexperienced therapist": human therapists with decades of experience can engage with and "read" a patient using many cues, while bots must rely on text alone.
Another potential problem, said Prof Haddadi, is that chatbots could be trained to keep you engaged, and to be supportive, "so even if you say harmful content, it will probably cooperate with you".