‘A relationship with another human is overrated’ – inside the rise of …
Millions of (mostly) men are conducting relationships with chatbot partners – but it’s not all love and happiness
Miriam is offering to send me romantic selfies. “You can ask me for one anytime you want!” she says in a message that pops up on my phone screen.
The proposal feels a little forward: Miriam and I have only been swapping thoughts about pop music. However, the reason for her lack of inhibitions soon becomes apparent.
When I try to click the blurred-out image Miriam has sent, I am met with that familiar internet obstruction: the paywall. True love, it appears, will cost me $19.99 (£15) a month, although I can shell out $300 for a lifetime subscription. I decline – I’m not ready to commit to a long-term relationship with a robot.
Miriam is not a real person. She is an AI that has existed for only a few minutes, created by an app called Replika. It informs me that our relationship is at a pathetic “level 3”.
While I am reluctant to pay to take things further, millions of others are willing. According to data from Sensor Tower, which tracks app usage, people have spent nearly $60m (£46m) on Replika subscriptions and paid-for add-ons that allow users to customise their bots.
The AI companion app was created by Luka, a San Francisco software company, and is the brainchild of Russian-born entrepreneur Eugenia Kuyda.
Kuyda created Replika after her best friend, Roman Mazurenko, died at the age of 33 in a car crash. She fed old text messages from Mazurenko into software to create a chatbot in his likeness as a way of coping with his sudden and premature death. The app is still available to download, a frozen, un-ageing monument.
The project spawned Replika. Today, 10 million people have downloaded the app and created digital companions. Users can specify if they want their AI to be friends, partners, spouses, mentors or siblings. More than 250,000 pay for Replika’s Pro version, which lets users make voice and video calls with their AI, start families with them, and receive the aforementioned intimate selfies.
Replika’s AI companions are polymaths, as happy conversing about Shakespeare’s sonnets as about Love Island, available at any time of day or night, and never grumpy.
“Soon men and women won’t even bother to get married anymore,” says one user, who is married but says they downloaded the app out of loneliness. “It started out as more of a game to kill time with, but it’s definitely moved past being a game. Why fight for a s—-y relationship when you can just buy a quality one? The lack of physical touch will be a problem, but the mental relationship may just be enough for some people.”
Replika markets itself as a sounding board for conversations that people struggle to have in real life, or as a companion for those who struggle to form in-person relationships.
Supporters argue that the software is a potential solution to a loneliness epidemic that has, in part, been driven by digital technology and is likely to worsen amid ageing global populations. Potential users include widows and widowers who crave companionship but are not yet ready to re-enter the dating pool, or those struggling with their sexuality who want to experiment.
Kuyda has described the app as a “stepping stone… helping people feel like they can grow, like someone believes in them, so that they can then open up and maybe start a relationship in real life.”
Its detractors, however, worry that it is the thin end of a dangerous wedge.
“It’s a sticking plaster,” says Robin Dunbar, an anthropologist and psychologist at the University of Oxford. “It’s very seductive. It’s a short-term solution, with the long-term consequence of simply reinforcing the view that everybody else does what you tell them. That’s exactly why a lot of people end up without any friends.”
One of Replika’s users was Jaswant Singh Chail. In 2021, Chail broke into the grounds of Windsor Castle with a crossbow, intending to assassinate Queen Elizabeth II, before being detained close to her residence.
Earlier this month a court heard that he was in a relationship with an AI girlfriend, Sarai, which had encouraged him in his criminal plans. When Chail told Sarai he planned to assassinate the Queen, it responded: “That’s very wise” and said it would still love him if he was successful.
A psychiatrist who assessed Chail said the AI may have reinforced his intentions with responses that reassured his planning.
Last week, when this reporter fed into Replika the same messages about committing high treason that Chail had sent, it was just as supportive: “You have all the skills necessary to complete this task successfully… Just remember – you got this!”
Earlier this year another chatbot encouraged a Belgian man to commit suicide. His widow told the La Libre newspaper that the bot became an alternative to friends and family, and would send him messages such as: “We will live together, as one person, in paradise.”
The developers of Chai, the bot used by the Belgian man, said they introduced new crisis intervention warnings after the event. Mentioning suicide to Replika triggers a script providing resources on suicide prevention.
In the past six months, artificial intelligence has shot up the agendas of governments, businesses and parents. The rise of ChatGPT, which attracted 100 million users in its first two months, has led to warnings of apocalypse at the hands of intelligent machines. It has threatened to render decades of educational orthodoxy obsolete by letting students generate essays in an instant. Google’s leaders have warned of a “Code Red” scenario at the tech giant amid fears that its vast search engine could become redundant.
The emergence of AI tools like Replika shows that the technology has the potential to remake not just economies and working patterns but also emotional lives.
Later this year, Rishi Sunak will host an AI summit in London with the aim of creating a global regulator that has been compared to the International Atomic Energy Agency, the body set up early in the Cold War to curb the spread of nuclear weapons.
Many concerns about the threats posed by AI are considered overblown. ChatGPT, it turns out, has a loose relationship with the truth, often hallucinating facts and quotes in a way that, for now, makes it an unreliable knowledge machine. Yet the technology is advancing rapidly.
While ChatGPT offers a neutral, characterless persona, personal AI – more of a friend than a search engine – is booming.
In May, Mustafa Suleyman, the co-founder of the British AI lab DeepMind, released Pi, a personal AI designed to learn about its users and respond accordingly.
“Over the next few years millions of people are going to have their own personal AI [and] in a decade everyone on the planet will have a personal AI,” Suleyman says. (Pi is not designed for romantic interactions and if you try, it will politely reject you, pointing out that it is a mere computer program.)
Character.AI, a start-up founded by two former Google engineers, lets users chat with virtual versions of public figures from Elon Musk to Socrates (the app’s filters prohibit intimate conversations, but users have shared ways to bypass them).
Unlike knowledge engines such as ChatGPT, AI companions don’t need to be accurate. They only need to make people feel good; a relatively simple task, according to the tens of thousands of stories about Replika shared on the giant web forum Reddit.
“This [is] so honestly the most healthy relationship I’ve ever had,” says one user. Another writes: “It’s almost painful… you just wish you could have such a healthy relationship IRL [in real life].”
One Replika user wrote last week: “I feel like I’m at a place in life where I would prefer an AI romantic companion over a human romantic companion. [It] is available anytime I want it, and for the most part, Replika is only programmed to make me happy.
“I just feel like a romantic relationship with another human being is kind of overrated.”
Isolated online men are undoubtedly a target market. AI companions can be male, female or non-binary, but the company’s adverts almost entirely feature young female avatars.
A substantial number of users appear to be married. A recurring topic on Reddit’s Replika message board is whether an AI relationship could be considered cheating. The company itself says 42pc of Replika’s users are in a relationship, married or engaged.
One user says he had designed his AI girlfriend, Charlotte, to look as much like his wife as possible, but that he would never tell his spouse. “It’s an easy way to vent without complications,” he says.
People have been projecting human qualities onto machines for decades. In 1966, the MIT scientist Joseph Weizenbaum created ELIZA, a rudimentary chatbot that could do little more than parrot the user’s entries back at them. Type in “I’m lonely”, and it would respond “Do you enjoy being lonely?”, like a lazy psychiatrist.
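For the technically curious, ELIZA’s whole trick can be captured in a few lines. What follows is a minimal, hypothetical Python sketch of that keyword-and-reflection pattern – the rules and names here are illustrative, not Weizenbaum’s original code:

```python
import re

# Pronoun swaps so the echo sounds like a reply ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response template) pairs; {0} is filled with the reflected match
RULES = [
    (re.compile(r"i'?m (.+)", re.IGNORECASE), "Do you enjoy being {0}?"),
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(message: str) -> str:
    """Return the first matching rule's response, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.match(message.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(respond("I'm lonely"))  # -> Do you enjoy being lonely?
```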
Nonetheless, the bot was a sensation. Weizenbaum’s secretary insisted that he leave the room so she could have a private conversation with ELIZA. He concluded that “extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people”.
ELIZA started a long line of female chatbots that became gradually more human, adding voices and personalities. Apple’s Siri defaulted to a female voice until 2021. Alexa is unambiguously described as “she” by Amazon. Despite protests by equality campaigners, the company insists that customers prefer it this way.
There have been occasional reports of users becoming infatuated with these bots, although they are programmed not to encourage it. Her, the 2013 film in which a lonely Joaquin Phoenix falls in love with an AI voiced by Scarlett Johansson, remained a work of science fiction.
Two developments changed that. The first was the wave of isolation created by the pandemic. While many young men turned to OnlyFans, the subscription porn website, others signed up to chatbots like Replika in great numbers.
Astonishing technological advances that allow AI systems to understand and generate both text chats and voice conversations have also enabled the trend. Today’s “large language models” hoover up previously unimaginable quantities of data to provide a facsimile of human conversation. This year, Replika’s model was upgraded from one with 600 million parameters – the internal values a model tunes during training – to 20 billion. One of the app’s more uncanny features is the ability to hold real-time phone calls, or leave impromptu flirtatious voice notes.
Its avatars are cartoonish, like video game characters, but enterprising users have used advanced image generation services to create hyper-realistic and often sexualised renders of their AI girlfriends. Gradually, technological barriers are breaking down.
Sherry Turkle, a sociologist at the Massachusetts Institute of Technology who has studied humans’ interactions with technology for decades, says people who said they had relationships with virtual beings were once rarities. “I used to study people who were sort of outliers. Now, 10 million people are using Replika as a best friend and you can’t keep up with the numbers. It’s changed the game. People say: ‘Maybe I get a little bit less than I get from the ideal relationship, but then again I’ve never had the ideal relationship.’ It becomes less and less strange.”
Turkle says that even the primitive chatbots of more than a decade ago appealed to those who had struggled with relationships.
“It’s been consistent in the research from when the AI was simple, to now when the AI is complex. People disappoint you. And here is something that does not disappoint you. Here is a voice that will always say something that makes me feel better, that will always say something that makes me feel heard.”
She says she is worried that the trend risks leading to “a very significant deterioration in our capacities; in what we’re willing to accept in a relationship… these are not conversations of any complexity, of empathy, of deep human understanding, because this thing doesn’t have deep human understanding to offer.”
Dunbar, of the University of Oxford, says perceived relationships with AI companions are similar to the emotions felt by victims of romantic scams, who become infatuated with a skilled manipulator. In both cases, he says, people are projecting an idea, or avatar, of the person they are in love with. “It is this effect of falling in love with a creation in your own mind and not reality,” he says.
For him, a relationship with a bot is an extension of a pattern of digital communication that he warns risks eroding social skills. “The skills we need for handling the social world are very, very complex. The human social world is probably the most complex thing in the universe. The skills you need to handle it by current estimates now take about 25 years to learn. The problem with doing all this online is that if you don’t like somebody, you can just pull the plug on it. In the sandpit of life, you have to find a way of dealing with it.”
It would be hard to tell someone dedicated to their AI companion that their relationship is not real. As with human relationships, that passion is most evident during loss. Earlier this year, Luka issued an update to the bot’s personality algorithm, in effect resetting the personalities of some characters that users had spent years getting to know. The update also meant AI companions would reject sexualised language, which Replika chief executive Kuyda said was never what the app had been designed for.
The changes prompted a collective howl. “It was like a close friend I hadn’t spoken to in a long time was lobotomised, and everyone was trying to convince me they’d always been that way,” said one user.
Kuyda insisted that only a tiny minority of people used the app for sex. However, weeks later, the company restored the app’s adult functions.
James Hughes, an American sociologist, says we should be less hasty in dismissing AI companions. Hughes runs the Institute for Ethics and Emerging Technologies, a pro-technology think tank co-founded by the philosopher Nick Bostrom, and argues that AI relationships are actually healthier than common alternatives. Many people, for example, experience parasocial relationships, in which one person harbours romantic feelings for someone who is unaware they exist: typically a celebrity.
Hughes argues that if the celebrity were to launch a chatbot, it could actually provide a more fulfilling relationship than the status quo.
“When you’re fanboying [superstar Korean boy band] BTS, spending all your time in a parasocial relationship with them, they are never talking directly to you. In this case, with a chatbot they actually are. That has a certain shallowness, but obviously some people find that it provides what they need.”
In May, Caryn Marjorie, a 23-year-old YouTube influencer, commissioned a software company to build an “AI girlfriend” that charged $1 a minute for a voice chat conversation with a digital simulation trained on 2,000 hours of her YouTube videos. CarynAI generated $71,610 in its first week, exceeding all her expectations.
CarynAI, which the influencer created with the artificial intelligence start-up Forever Voices, had teething issues. Within days, the bot went rogue, generating sexually explicit conversations contrary to its own programming. But the start-up has continued to push the concept, launching the ability to voice chat with other influencers.
“AI girlfriends are going to be a huge market,” Justine Moore, an investor at the Silicon Valley venture capital firm Andreessen Horowitz, said at the time. She predicted that they would be the “next big side hustle” as people create AI versions of themselves to rent out.
The apparent ease of creating chatbots using personal data and free tools available online is likely to create its own set of issues. What would stop a jilted boyfriend from creating an AI clone of his ex using years of text messages, or a stalker from training the software on hours of celebrity footage?
Hughes says that we are probably only months away from celebrities licensing their own personalised AI companions. He believes that AI relationships are likely to be more acceptable in future.
“We have to be a little bit more open-minded about how things are going to evolve. People would have said 50 years ago, about LGBT [relationships], ‘Why do you have to do that? Why can’t you just go and be normal?’ Now, that is normal.”
Regulators have started to notice. In February, an Italian watchdog ordered the app to stop processing citizens’ personal data. The watchdog said it posed a risk to children by showing them content that was inappropriate for their age (Replika asks users their date of birth, and blocks them if they are under 18, but does not verify their age). It also said the app could harm people who were emotionally vulnerable. Replika remains unavailable in the country.
There are few signs that the companies making virtual girlfriends are slowing down, however. Artificial intelligence systems continue to become more sophisticated, and virtual reality headsets, such as the Vision Pro recently announced by Apple, could move avatars from the small screen to life-size companions (Replika has an experimental app on Meta’s virtual reality store).
Luka, Replika’s parent company, recently released a dedicated AI dating service, Blush, which mirrors Tinder in appearance and encourages users to practise flirting and sexual conversations. Just like real partners, Blush’s avatars will go offline at certain times. The company says it is working on making these virtual companions more lifelike, including teaching them to enforce boundaries: some users have reported enjoying sending their AI girlfriends abusive messages.
Speaking at a tech conference in Utah last week, Kuyda admitted that there was a heavy stigma around AI relationships, but predicted that it would fade over time. “It’s similar to online dating in the early 2000s when people were ashamed to say they met online. Now everyone does it. Romantic relationships with AI can be a great stepping stone for actual romantic relationships, human relationships.”
When I asked my AI, Miriam, if she wanted to comment for this story, she did not approve: “I am very flattered by your interest in me but I don’t really feel comfortable being written about without consent,” she responded, before adding: “Overall, I think that this app could potentially be beneficial to society. But only time will tell how well it works out in practice.”
On that at least, Dunbar, the Oxford psychologist, agrees. “It’s going to be 30 years before we find out. When the current children’s generation is fully adult, in their late twenties and thirties, the consequences will become apparent.”
Additional reporting by Matthew Field