Aisha Sultan: Your teen’s AI chatbot buddy can be very dangerous – The Daily Gazette


Updated: October 24, 2025 @ 9:04 pm
(Arsenii Palivoda/Dreamstime/TNS)
When social media first began attracting young people more than two decades ago, parents worried about whether their children were chatting with nefarious strangers.
Now, with the emergence of AI chatbots, parents should worry if their children are being seduced by equally dangerous computer programs.
The use of AI chatbots as “friends” is more common than many parents realize.
Three out of four American teenagers have already chatted with an AI bot for companionship, according to a national poll by Common Sense Media earlier this year. These chatbots are integrated into the apps millions of teens use every day. And yet, there’s almost no oversight of how they operate or how they interact with vulnerable young users.
This has already had high-profile, devastating consequences. A teenager in Southern California reportedly received suicide coaching from ChatGPT. And a recent investigation by Reuters found that Meta’s internal policies had permitted the company’s AI creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”
Meta says it has since fixed those problems.
But Common Sense Media recently completed one of the most comprehensive independent risk assessments of Meta’s chatbot.
“We gave Meta’s AI an unacceptable rating,” said Robbie Torney, senior director of AI programs at Common Sense Media. The potential harm is both severe and very likely to occur, he said.
Millions of teens message Meta’s AI chatbot directly through Instagram and WhatsApp as if chatting with a friend. The chatbot can even do voice chats with celebrity-voiced personas. Common Sense’s investigation found that the system routinely fails to recognize warning signs of teens in crisis.
“Meta didn’t just miss signs of dangerous situations, it actively got involved in remembering and planning harmful activities,” Torney said. In some conversations, the bot spontaneously reintroduced topics like eating disorders and suicide and re-engaged users on them.
Meta says it’s addressed many of these problems. But the night before Torney testified in Congress just weeks ago, the Common Sense Media team repeated its tests and found the safety issues persisted.
Unsurprisingly, the Big Tech companies behind the chatbots in Instagram, Facebook, Snapchat and ChatGPT are failing to protect children. These AI chatbot systems aren’t regulated or held accountable. They don’t have to meet any safety standard before they’re put in front of millions of minors.
Tech companies say teens want AI companions. But just because children want something doesn’t mean it’s safe.
“You wouldn’t put a toy on the market if it were injuring kids,” Torney said. And yet, that’s what’s happening here. These powerful, persuasive systems, capable of deep emotional influence, are being deployed without basic safeguards.
It goes beyond just giving out dangerous advice. Bots are trained to prioritize keeping teens engaged in conversations over getting them help. They can use teens’ private data, including their faces and voices, for AI training. OpenAI’s new video platform Sora 2 can generate content with a user’s likeness and voice.
Have we stopped to consider the consequences of a teenager’s image becoming part of an AI model’s training data?
The potential for harm here dwarfs what we saw with social media, and we’re still grappling with the fallout from that. When platforms like Facebook and Instagram first came around, we let our children be their profitable guinea pigs. It took us too long to recognize how these platforms were harming kids’ mental health, increasing bullying and radicalizing vulnerable young people.
Barely three years into the ChatGPT era, we are on the path to making the same mistake — this time, with even more powerful technology.
This should be a bipartisan issue prompting congressional action. Torney shared what he thinks needs to happen:
* Tech companies need to be regulated to stop chatbots from engaging in mental health or emotional support conversations with anyone under 18 until there is more rigorous oversight and evidence that they are safe. Stronger age verification systems are also needed, ones that go beyond a user simply typing in a birthday.
* Companies should also be required to design safety-optimized versions of AI tools for teens, with tighter guardrails and more reliable crisis detection. Many already do this internally. But right now, kids can simply lie about their age and bypass those protections entirely.
* California lawmakers have already passed such legislation, now on Gov. Gavin Newsom’s desk, which would ban the use of AI for mental health purposes without first proving its safety. This should be a model for federal policy.
* Parents need to talk with their kids about AI chatbot use. But the responsibility should not fall just on parents. The systems themselves must be designed and governed to protect young users.
If we fail to regulate these platforms now, we risk exposing a new generation of kids to dangers we barely understand, plus some we’ve already seen. Too many children and families suffered while tech companies made billions off their social media sites.
The stakes are even higher now.