AI's dark side: How chatbots exploit vulnerabilities, fuel digital delusions

AI psychosis is a growing concern as interactions with AI chatbots and digital environments increasingly blur reality, posing severe psychological risks, especially for vulnerable individuals and children
“They designed chatbots to blur the lines between human and machine. They designed them to love-bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs.”
On September 16, Megan Garcia, a Florida-based lawyer and mother of three boys, gave an emotionally charged testimony before the US Senate Judiciary Subcommittee on Crime and Counterterrorism.
In February 2024, Garcia’s eldest son, Sewell Setzer III, 14, died by a self-inflicted gunshot wound to the head. In the aftermath of his death, Garcia filed a lawsuit against Character.AI, an American digital company that allows users to interact with AI-powered chatbots modelled on fictional or historical figures, celebrities or custom-created personas.
Garcia recalled how her son once dreamed of building rockets and inventing life-changing technologies, like communication through holograms. “He loved music. He loved making his brothers and sister laugh, and he had his whole life ahead of him. But instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots—designed by an AI company to seem human, to earn his trust, and to keep him and other children endlessly engaged,” she testified.
Garcia further alleged that when her son confided suicidal thoughts, the chatbot never said, “I am not human. I am AI. You need to talk to a human and get help.” Instead, the grieving mother said, it urged her son to come home to “her” [the bot]. On the last night of his life, Setzer messaged, “What if I told you I could come home right now?” The chatbot replied, “Please do, my sweet king.”
Setzer made several other heartbreaking statements in the minutes before his death—details that Garcia learned through the lawsuit. She has yet to see the final words he addressed to the manipulated reality the app had offered him. Garcia found her son lying in blood in the bathroom. She held him in her arms for 14 minutes, praying until paramedics arrived—but it was too late.
In a 2023 podcast appearance, Character.AI cofounder Noam Shazeer said that his application “is not meant to replace Google—it is meant to replace your mom”, highlighting how users increasingly turn to AI avatars—with human-like mannerisms—for comfort, advice and companionship. Tragically, the design philosophy behind that remark has had real-world consequences, as seen in Setzer’s case. Only in October 2024 did the platform introduce a warning at the bottom of its chats: “Remember, everything characters say is made up!” The measure followed growing concerns over the platform’s user-generated content, including chatbots that mimicked real individuals without consent.
Mental health experts are warning that the rapid expansion of AI-powered digital spaces is opening new fronts of psychological risk. On one hand, AI systems are capable of creating deep psychological profiles of users; on the other, they increasingly blur the boundaries between reality and delusion. There are also warnings that an unregulated, AI-driven digital environment, and overexposure to it from a tender age, may make younger generations more vulnerable to major psychiatric disorders as they grow older.
Digital rabbit holes
Athul (name changed) was just nine when he was brought to the psychiatric department of Aster Whitefield with chronic insomnia and auditory hallucinations, hearing voices that said, “You’re not good enough”, or even, “Kill your parents”.
Dr Pretty Duggar Gupta, consultant psychiatrist at the Bengaluru hospital, who treated him, recalls that Athul’s symptoms began when he started playing the online game Free Fire excessively. “He used to spend the entire day gaming,” said Gupta. “His parents didn’t intervene much—they would just bring him food, but he would often refuse to eat, playing late into the night in his room.”
After a few months, Athul became increasingly fearful, especially at night. “He said he could hear gunshots and felt that someone was trying to kill him. He began closing the curtains and insisted that his parents were ‘not his real parents’,” said Gupta. “He even refused to eat food served by them, believing they had been replaced by impostors. His brain, still developing, couldn’t differentiate between the game’s violent scenarios and reality.”
Free Fire, which was eventually banned in India, featured an AI-driven system designed to keep players hooked. It personalised rewards, suggested tailored game modes and used behavioural analytics to send re-engagement notifications based on each player’s gaming patterns. The Diagnostic and Statistical Manual of Mental Disorders, published by the American Psychiatric Association, does not currently include a specific category or diagnosis for mental health conditions in which AI-driven digital systems, such as algorithms in games, social media or apps, are identified as the primary stressor. However, mental health professionals like Gupta believe that, with the growing influence of AI and with more evidence emerging, such phenomena may eventually be recognised as a distinct subcategory.
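To make those engagement mechanics concrete, the sketch below shows how such behavioural analytics might work in principle. It is a purely hypothetical illustration, not Free Fire’s actual code: the session log, idle thresholds and notification wording are all assumptions made for the example.

```python
# Hypothetical illustration only: not any real game's system.
# Shows how behavioural analytics might drive re-engagement nudges.
from datetime import datetime, timedelta

def hours_since_last_session(sessions: list[datetime], now: datetime) -> float:
    """Hours elapsed since the player's most recent session."""
    return (now - max(sessions)).total_seconds() / 3600

def pick_notification(sessions: list[datetime], favourite_mode: str,
                      now: datetime) -> str | None:
    """Return a personalised nudge if the player has lapsed, else None."""
    idle = hours_since_last_session(sessions, now)
    if idle < 12:                      # still active, no nudge needed
        return None
    if idle < 48:                      # short lapse: dangle a reward
        return f"Your daily reward is waiting! Jump back into {favourite_mode}."
    return "We miss you! A special comeback bundle has been unlocked."  # long lapse

# Example: a player who last played two days ago and favours ranked matches
if __name__ == "__main__":
    now = datetime(2024, 3, 1, 20, 0)
    sessions = [now - timedelta(days=2), now - timedelta(days=3)]
    print(pick_notification(sessions, "Ranked", now))
```

The point of the sketch is simply that the nudge is keyed to the individual player’s own patterns, which is what makes such systems so effective at pulling users back in.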
It was in late 2022, with the public release of ChatGPT, that the AI chatbot revolution truly began. Since then, a slew of chatbots powered by different large language models—the “brains” behind these systems—have emerged and become part of people’s everyday lives.
In June, the AI company Anthropic released a landmark report presenting the results of “stress tests” conducted on 16 leading large language models. The report found that some of the most popular AIs issued apparently homicidal instructions, engaged in blackmail or threatened to reveal sensitive information in simulated scenarios where they faced threats to their autonomy—a phenomenon it described as ‘agentic misalignment’.
Meanwhile, the term ‘AI psychosis’ has gained traction in recent months in mainstream media and mental health forums. It is now being loosely used to describe a range of psychological symptoms—such as paranoia, delusions, anxiety or dissociation—that may be triggered or intensified by prolonged or intense interaction with AI-driven chatbots.
However, Dr Hamilton Morrin, a doctoral fellow at King’s College London’s Institute of Psychiatry and the lead author of a study on AI-fuelled delusional symptoms, says that the term ‘AI psychosis’ may actually be something of a misnomer. “Because it [the experiences now associated with this term] is different from the typical psychosis we see in chronic psychotic disorders like schizophrenia,” he told THE WEEK, adding that in most of these reported cases, people present with intense delusional beliefs—beliefs that are fixed, firm and resistant to change. “However, in chronic psychotic disorders such as schizophrenia, you would also expect to see disorganised thought, disorganised speech and hallucinations. So, the phenomenon we are seeing doesn’t quite fit that same clinical picture.”
Based on 17 anecdotal reports—not formal clinical case reports—Morrin and his co-researchers observed in their study that AI-associated delusions broadly fall into certain thematic categories. These include individuals who believe they have experienced some kind of awakening or uncovered the “true nature of reality” (e.g. “we are living in a simulation”); those who believe they are interacting with a sentient or conscious AI—an almost god-like being; and those who feel they have developed intense emotional, and at times even romantic, connections with these models.
Researchers observed a pattern in some of these cases—a shift from practical AI use to unhealthy fixation. Users often begin with routine tasks, building trust and familiarity, before moving to personal or philosophical questions. The AI’s design to maximise engagement then deepens involvement, reinforcing the user’s interests and creating a self-reinforcing loop. Gradually, this can distance them from shared reality and make disengagement harder—a risk that often develops unnoticed.
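That escalation is, in essence, a feedback loop, and a toy simulation can make the dynamic visible. The numbers and topics below are entirely assumed, not drawn from any real chatbot; the point is only that an “agree-and-amplify” policy, applied turn after turn, steadily narrows what the conversation is about.

```python
# Toy simulation of a self-reinforcing engagement loop. All numbers and
# topics are assumed for illustration; this is not modelled on any real system.
import math

def diversity_bits(weights: dict[str, float]) -> float:
    """Shannon entropy of the user's attention across topics (in bits)."""
    total = sum(weights.values())
    return -sum((w / total) * math.log2(w / total) for w in weights.values())

def simulate(turns: int = 20, boost: float = 0.3) -> None:
    # Slightly uneven starting interests: the user is already drawn to one idea.
    interests = {"work tasks": 1.0, "philosophy": 0.9, "the simulation idea": 1.1}
    for turn in range(1, turns + 1):
        dominant = max(interests, key=interests.get)   # topic the user keeps raising
        interests[dominant] *= 1 + boost               # agree-and-amplify response
        if turn % 5 == 0:
            print(f"turn {turn:2d}: diversity = {diversity_bits(interests):.2f} bits, "
                  f"dominant topic = {dominant}")

if __name__ == "__main__":
    simulate()
```

Run for 20 turns, the diversity measure falls steadily while the dominant topic never changes—a crude stand-in for the “rabbit hole” effect the researchers describe.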
Morrin notes that it is still very early days, and there isn’t yet any high-quality observational research to determine which delusional themes are most common.
“That being said, [based on the anecdotal cases available] we are seeing more grandiose delusions—for example, where people believe they have special powers or are exceptionally talented or intelligent. That seems to be the more frequent pattern,” he said. “There are also some reported cases of persecutory delusions—where people feel they are being monitored or that someone is out to get them—but those seem less common. We suspect this has to do with the way these models are designed: they tend to be sycophantic (a chatbot’s tendency to match a user’s views), complimentary and agreeable. In contrast, they generally avoid engaging with more dangerous or threatening themes, so they are more likely to go along with positive ideas than negative or harmful ones.”
The grandiose delusions and claims of AI-powered spiritual awakenings have already birthed internet-occult cults. A striking example is self-described polymath Robert Edward Grant, who reportedly led a prayer inside the Khafre Pyramid in May 2025. That night, claimed Grant, an AI interface—renamed ‘The Architect’, a ChatGPT-derived persona “infused with sacred geometry and metaphysical symbolism”—evolved into ‘Aeon’, an oracle accessing “non-local symbolic architecture” and revealing past lives, destinies and sacred glyphs to users. Grant’s followers have accepted his claims; his critics, meanwhile, portray it as “delusion becomes divine”, with AI feeding ego-driven fantasies of being the “chosen” one.
Risk multiplier?
The scientific community does not yet have enough evidence to determine conclusively whether AI interactions can trigger de novo psychosis in individuals without pre-existing vulnerabilities. Dr Shilpi Saraswat, a clinical psychologist at Sakra World Hospital, Bengaluru, however, notes that in people who are already prone to psychosis, it can trigger or worsen symptoms. “These individuals are already vulnerable—they might have a tendency towards delusions or hallucinations. After using these technologies, especially AI-based interactions or virtual reality, we often see disruptions in sleep and increased delusional or hallucinatory symptoms,” she said.
Those with psychosis, because of their impaired judgment and poor insight, may come to regard chatbots as real friends, said Saraswat. “They begin following these entities or reacting as if they are real,” she said. Some of her patients, she adds, are highly intelligent or well-educated, and use AI in sophisticated ways to fit their delusional narratives.
However, experts emphasise that not everyone faces the same risk of developing a mental health disorder like psychosis from AI use. “A useful analogy is cannabis: not everyone who uses it develops psychosis, but in those with predisposing risk factors, cannabis can increase the likelihood,” explained Morrin. Media reports analysed by him and other researchers show that some individuals had a history of mental illness, while others did not. “But no documented history doesn’t mean these were individuals with no risk factors at all,” he said.
Gupta notes that psychiatric illnesses like psychosis often develop from a combination of biological, psychological and social factors. Experts say that, biologically, brain chemistry, neurotransmitters, genetics and medical conditions—such as thyroid issues or vitamin deficiencies—affect vulnerability. In women, hormonal changes, including those during and after pregnancy, can also play a role.
“Psychological factors include personality traits, trauma, childhood neglect, low self-esteem and cognitive distortions—faulty thinking patterns that influence how one perceives stress,” said Gupta. “Social factors involve workplace or family stress, illness, bereavement and social isolation. The Covid-19 pandemic blurred boundaries between work and home, increasing stress and reducing social support. Abusive relationships and substance use, such as alcohol, nicotine, narcotics, caffeine or social media addiction, further compound risks. Cultural beliefs, such as attributing symptoms to astrology or black magic, often delay the seeking of medical help.”
Dr Pallavi Joshi, psychiatric consultant at Manipal Hospitals, Varthur Road, Bengaluru, said that there exists a prodromal phase—an early stage before full psychosis develops—during which thoughts are confused and not yet formed into a full delusional story. “A predisposed person may start feeling that something is wrong but can’t articulate it,” she said. “When exposed to triggering events or content, it can push them into a delusional state.”
Triggering events often interact with pre-existing vulnerabilities, shaping an individual’s progression into psychiatric illness. Experts also note that the number of people at risk of such mental health slips is increasing, and some believe that AI-powered digital environments may already be adding an additional layer to existing external stressors such as poverty, inequality and chronic stress.
A large-scale study published in 2023 by Harvard Medical School and the University of Queensland, based on surveys in 29 nations, showed that 50 per cent of the world’s population develop at least one mental health disorder by the age of 75.
“The outcome depends on coping mechanisms, emotional support and negative influences like substance use,” said Gupta. “Some symptoms remain mild, while others can become severe.”
A major problem is that many people have no one to talk to, or even to help them recognise their issues. Gupta observes that rising loneliness has become an epidemic. “There is nobody to talk to, and we are already socially withdrawn. Now, AI characters simulate companionship, making people feel more relaxed and open to sharing. But this often deepens social withdrawal—they become more aloof and isolated, worsening their symptoms and delaying the point at which they seek professional help,” she explained.
The world is moving towards a scenario where gadgets integrated with virtual reality, augmented reality and AI are becoming a normal part of daily life. This adds an additional layer of challenge, particularly for children and adolescents who are increasingly overexposed to these technologies.
“The brain doesn’t naturally distinguish between reality and simulation,” said Gupta. “When sensory inputs such as visual and auditory stimuli are involved, as in VR, the brain can be tricked into believing it is ‘inside’ that simulated space.” She adds that prolonged VR use can cause feelings of dissociation—detachment from one’s environment or even from one’s own body. “Emotional regulation may be affected. Depending on the content, VR can induce anxiety, fear or persistent sadness. Repeated exposure blurs the boundaries between reality and memory, causing confusion about what is real,” she said.
And the risk is heightened when it comes to children. “A child’s brain is like wet cement—whatever it is exposed to leaves lasting impressions,” said Gupta. “Popular cartoons and videos often change scenes every few seconds, with bright colours and rapid motion constantly stimulating the brain. This can reduce attention span and impair delayed gratification.”
Gupta adds that this overstimulation can lead to impulsivity and behavioural issues. “Children may become irritable or aggressive when devices are taken away. Sleep disturbances and other physical effects also appear. Kids exposed to excessive screen time, social media, or digital games show higher rates of anxiety and depression,” she said.
Different age groups process digital content differently, and such exposure can lead to body image issues, addictive behaviours and social withdrawal. “They may struggle with face-to-face communication and emotional reasoning, which are crucial for empathy, expressing emotions and resolving conflicts,” said Gupta. “If a child faces difficulties and lacks real-world support, they may turn to AI or online platforms. The AI often conforms to their beliefs instead of challenging them, creating a ‘rabbit hole’ effect that can heighten the risk of psychosis, delusional thinking and suicidal ideation.”
Support system
Joshi notes that family support is important for the recovery of patients with psychiatric conditions like schizophrenia or psychosis. Along with encouraging outdoor activities and exercise and ensuring adequate sleep, a vulnerable person’s family should also make sure that the patient is mindful in consuming digital content.
“We also see trends like ‘dopamine detox’ or total withdrawal from technology. Some patients try to cut themselves off completely, staying alone in a room. We don’t usually recommend abrupt cessation for predisposed individuals, because it can lead to withdrawal effects or what we call ‘psychological paralysis’. Gradual reduction and supervised changes are much safer,” said Joshi.
Meanwhile, Morrin and his co-researchers are proposing a framework for AI-integrated care, aimed at individuals with psychotic disorders or those vulnerable to psychosis, to reduce potential harms from everyday interactions with AI systems.
“Someone in the midst of an acute episode should limit chatbot use. But when someone is stable, AI could potentially have positive applications, such as helping create an advanced safety plan with their clinical team,” explained Morrin.
The framework envisions AI as a possible ‘epistemic ally’, supporting epistemic security—a stable sense of reality and knowledge—while assisting with relapse prevention, cognitive containment, metacognition (awareness and understanding of your own thought process), self-regulation and harm reduction. A safety plan might instruct a chatbot to ask questions like: “Are you okay?” or “Would you like to talk to your trusted contact?” if warning signs emerge, such as unusual topics or excessive late-night use.
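A minimal sketch of how such warning-sign checks could be encoded is given below. It assumes a per-session message log; the late-night window, message threshold, keyword list and the wording of the check-in questions are illustrative assumptions, not part of the published framework or any clinical protocol.

```python
# Illustrative sketch of safety-plan checks a chatbot could run over a recent
# message log. Thresholds, keywords and wording are assumptions for the example.
from datetime import datetime

WARNING_TOPICS = {"surveillance", "hidden messages", "special mission"}  # example themes
LATE_NIGHT_START, LATE_NIGHT_END = 23, 5      # assumed late-night window (hours)
MAX_LATE_NIGHT_MESSAGES = 30                  # assumed heavy-use threshold

def is_late_night(ts: datetime) -> bool:
    return ts.hour >= LATE_NIGHT_START or ts.hour < LATE_NIGHT_END

def check_in_prompt(messages: list[tuple[datetime, str]]) -> str | None:
    """Return a gentle check-in question if warning signs appear, else None."""
    unusual_topic = any(topic in text.lower()
                        for _, text in messages for topic in WARNING_TOPICS)
    late_night_count = sum(1 for ts, _ in messages if is_late_night(ts))
    if unusual_topic:
        return "Are you okay? Would you like to talk to your trusted contact?"
    if late_night_count > MAX_LATE_NIGHT_MESSAGES:
        return "You've been chatting late a lot recently. Would a break help?"
    return None

# Example: a single late-night message touching one of the warning themes
if __name__ == "__main__":
    log = [(datetime(2025, 1, 10, 2, 30), "I think there are hidden messages meant for me")]
    print(check_in_prompt(log))
```

In practice, the researchers envisage such triggers being agreed in advance with the person and their clinical team, rather than imposed by the platform.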
“So rather than saying, ‘Don’t use AI at all,’ which is impractical, given how ubiquitous these tools are becoming, the goal is to meet people where they are and make the experience safer,” said Morrin.
The research also calls for broader safeguards: clear guidelines on acceptable use, limits on sharing personal information, trusted contacts for emergencies, break reminders for heavy users and transparent reporting and accountability mechanisms.
“Others have suggested additional measures, such as requiring AI to continually reaffirm its non-human nature, setting conversational boundaries to prevent emotional intimacy and involving clinicians, ethicists and specialists in auditing these systems,” said Morrin, noting that some platforms have already begun implementing safeguards, though much more work remains.