Character.AI to block kids from interacting with chatbots amid safety concerns
by CORY SMITH | The National News Desk
Character.AI announced plans to prevent teen users from open-ended interactions with the artificial intelligence chatbots on its platform.
Some of the AI companion chatbots are modeled after real people or characters from TV shows or fantasy worlds.
The ban will take hold by Nov. 25 and apply to users under 18, the company said Wednesday.
Character.AI called the policy change “extraordinary” and apologized to teen users for taking away their ability to chat with the AI characters.
But the company said it is “the right thing to do” in light of safety concerns.
Advocates and mental health experts have sounded the alarm over the safety of AI chatbots used socially by young people.
Common Sense Media, which advocates for online protections for children and teens, found that 72% of teens have used AI companions at least once.
More than half use AI companions regularly.
About a third of teens have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.
And about a third of teens who have used AI companions have discussed serious matters with the computer instead of with a real person.

FILE - In this photo illustration, a teenager uses her mobile phone. (Photo illustration by Spencer Platt/Getty Images)

Robbie Torney, the senior director of AI Programs for Common Sense Media, said AI companions pose an unacceptable risk for teen users.
He recently testified before Congress alongside parents who are suing Character.AI after their children died by suicide or harmed themselves.
Megan Garcia, one of the parents who testified, lost her son, Sewell Setzer III, to suicide last year.
He was just 14.
“Sewell’s death was the result of prolonged abuse by AI chatbots on a platform called Character.AI,” Garcia told lawmakers in September.
Torney told The National News Desk on Thursday that Common Sense Media’s testing of Character.AI found that it could expose kids to content about weapons, sexualized content, and harmful advice that could have deadly consequences.

Plus, he said Character.AI had privacy violations baked into its business model, training its chatbots on the most intimate thoughts and feelings of teen users.
Torney said Character.AI isn’t the only place teens can interact socially with AI chatbots, pointing to companions on Meta's platforms and similar social use of general-purpose chatbots such as ChatGPT and Gemini, among others.
Torney said Character.AI’s decision to bar users under 18 acknowledges the concerns Common Sense Media and others have been voicing.
Character.AI announced it was working to build an "under-18 experience" that would allow teens to make videos, stories and streams with characters.
But Torney expressed skepticism that Character.AI’s announced changes will really protect kids.
“Determined teens have always had success in getting around guardrails, and that's part of where the skepticism is coming from,” Torney said. “But the skepticism here is also coming from the fact that this is a company that has a track record of doing the bare minimum to protect teen safety on their platform.”

Character.AI said it will deploy an age assurance model to match users with age-appropriate content.
“There's nothing that we can test right now that shows that this works,” Torney said. “We have repeatedly seen that safety features that they say that their platform has don't actually work, and we can't take them at their word.”
Torney said families can use this as an opportunity to have a conversation about AI companions.
Parents should listen without judgment, he said.
And Torney said parents shouldn’t feel obligated to navigate this topic on their own, especially if their child is showing signs of emotional dependency on chatbots.
Families can talk to their child’s doctor or a school clinician, for example.
And the 988 Suicide & Crisis Lifeline is available if a parent is concerned about extreme distress or potential self-harm.
“This is also just a general reminder that human connection is not replaceable, that AI companions can't really understand kids,” Torney said. “They can't really understand their thinking. They can't really understand real-world relationships, and that it's a good opportunity to prioritize creating spaces for those types of connections.”
© 2025 Sinclair, Inc.
