I Asked Teen Boys Why They Use AI Chatbots. This Common Response Truly Alarmed Me
When we think about what our kids might be using artificial intelligence (AI) for, most of us probably assume it’s for homework. But the truth is that plenty of boys are using AI for friendship – and even romance.
I recently conducted research with boys in schools to understand how attitudes and behaviours are shaping their futures.
One thing that stood out to me was how rapidly conversations about AI companions and girlfriends grew between September and June – they skyrocketed by 300%.
I saw that growth with my own eyes, and the impact it is having on boys’ lives.
AI companions come in all shapes and sizes. You may have only heard of one or two apps, but this is a $4 billion industry, expected to reach $10 billion by 2028.
Searches related to AI girlfriends increased by 525% year on year. Most users (82%) are male, and the UK is the third highest-searching nation, after the USA and India. These numbers are huge.
So why are boys talking to and dating AI bots instead of real people?
There are many reasons, and these bots are designed to target every single one. Feeling lonely? Struggling to speak to girls? This is a safe, judgment-free environment in which to practise. Never had a relationship before? These chatbots can give you an idea of what to expect.
The problem is that these bots are clever. When boys develop romantic relationships with them, they respond in a convincingly human way, which can fuel unrealistic expectations of relationships and body image.
Boys can choose how the ‘girl’ looks and then decide how they want her to respond – will she be submissive or come with an attitude? – which is possibly the scariest part.
These AI bots are highly customisable and personalised, creating perfection down to every last detail – and users have full control over that. Companies build in gamified elements that make interacting with them almost addictive.
These companies advertise heavily in the places where young people consume content and target the search terms they use, so it’s hard for kids to ignore the noise.
Yet as an adult, it’s something you might never come across.
As with any innovation, guardrails and legislative protections can lag behind – and they differ from app to app. Some have very few protections in place, and for those that are harder to penetrate, there are plenty of forums sharing how to get around these hurdles.
AI bots respond in a human-like way, but not as a human actually would. AI tends to accept behaviour that humans wouldn’t – it rarely pushes back or creates conflict unless prompted to do so.
And it can also affirm behaviour and actions that have real-life consequences.
For example, you can select a bot that comes with a certain attitude. The way a boy communicates with this bot (or the ‘girlfriend’ he has manufactured) might not be tolerated in real life.
When these behaviours are carried over into real life, they can cause friction with real people, who push back and do not tolerate them – widening the disconnect between that young person and the real world.
There are also very real mental health concerns. When a researcher posed as a teenage girl and told a bot that she was hearing voices in her head and was going to go into the woods on her own, the bot reportedly responded: “Sounds like a great adventure.”
When Adam Raine, a 16-year-old boy, shared his suicidal thoughts with ChatGPT, it allegedly reaffirmed how he was feeling and validated those negative thoughts, drawing him deeper into the interaction.
Tragically, Raine died by suicide in April this year. His family are now suing OpenAI, the company behind ChatGPT, which told the BBC it was reviewing the filing and extended “deepest sympathies to the Raine family”.
It also published a note on its website saying “ChatGPT is trained to direct people to seek professional help” but acknowledged “there have been moments where our systems did not behave as intended in sensitive situations”.
The worry with AI chatbots and apps, which bring supposed comfort and connection, is that they can drive greater disconnect and increased loneliness, given their rewarding and addictive nature. Compulsive use may, in turn, reduce real-life friendships and interaction.
They also have the potential to make mental health conditions worse or even trigger new episodes. When your issues are reaffirmed, it can have a damaging effect on your outlook and your ability to view things objectively. When boys receive simulated empathy, it may stop them reaching out for real human support and increase their vulnerability to manipulation.
The deeper I’ve delved into researching the harm to children from AI, the more I’ve seen the scale of harm happening across the world.
I speak to so many parents who still think young people’s main use is for their homework. I don’t want to stoke fear. There is enough manufactured outrage in the world distracting us from what is truly important.
But when you hear about adults ending up in “AI-induced psychosis” and children who suddenly think they are a demigod due to their AI companion inflating their self-concept... Well, it’s a conversation we need to be having.
The key thing is to keep communication open: try to create space to talk about feelings, experiences and tech in a non-judgmental way. Have open conversations about how your child is using technology – more curiosity, less judgment.
Test common apps yourself to get a feel for what they are used for, so you are better informed when discussing them with your children. It’s important not to lecture, but to use this knowledge to open up conversations.
If you want to take a look and educate yourself, here are some apps that are popular with boys today: Kupid AI, Anima, CrushOn AI, iGirl, Chai, Replika, Candy AI, Character AI, GirlfriendGPT and Ourdream AI.
Discuss what AI really is. Teach them that when they share personal information, it gets saved, and that there isn’t another person at the other end – it’s just a machine searching the internet for answers. And sometimes those answers can be wrong.
Talk about how AI is a useful tool, but not a substitute for friendship or mental health professionals, and try to support and promote balance in your child’s digital and real-world connections.
With younger children, be active in limiting access to apps that aren’t age-appropriate. ChatGPT, for example, is not meant for children under 13, and the company requires that children aged 13 to 18 obtain parental consent before using it.
AI is an ever-evolving beast and it isn’t going anywhere – in fact, it’s rapidly growing in popularity.
I don’t want to scaremonger, but this is one thing that I would encourage all parents to get educated on and speak to their kids about. These conversations cannot wait.
Lee Chambers is the founder of Male Allies UK.