Letting your kids use AI chatbots? These are the risks, according to experts – BNN Bloomberg


As artificial intelligence (AI) tools like chatbots grow in popularity, experts say it’s crucial for children and youth who use them to be properly educated about what they are – and what they aren’t.
“These chatbots have no idea about the human condition or what it means to be a child,” Kerstin Dautenhahn, professor in the Department of Electrical and Computer Engineering at the University of Waterloo, told CTVNews.ca in an interview on Monday.
While most adults can tell the difference between a real person and a chatbot merely simulating one, children may not fully understand the distinction, and the younger they are, the less likely they are to grasp it, she said.
“When I, for example, interact with a chatbot, I’m not very concerned about myself, because I have a lot of life experiences in the real world, talking to real people with real relationships. Children are just developing that,” Dautenhahn said.
“Not only that, they are still developing a sense of who they are, what their place is in the world and in the social world; they still need to learn what friendships are.”
Popular AI chatbots like OpenAI’s ChatGPT are designed to generate human-like written responses to prompts and questions, and can perform tasks such as summarizing information, writing short- and long-form content, and simulating real conversation.
While they can be a helpful tool for simple tasks, some users have reported developing an unhealthy attachment to chatbots after using them for extended periods of time. Experts have also raised concerns about how chatbots respond to people who raise mental health concerns.
OpenAI is currently being sued by the parents of U.S. teenager Adam Raine, who died by suicide in April at the age of 16 after months of conversations with ChatGPT. The lawsuit alleges the chatbot gave Raine “self-harm and suicide encouragement,” The Associated Press reported.
The lawsuit prompted the company to pledge new safety measures to protect young users; however, the guardrails AI companies implement for their chatbots are “always incomplete,” says the director of education for MediaSmarts, a Canadian digital media literacy organization.
“There is some evidence that a chatbot that is designed and supervised by mental health professionals can have positive effects, but most kids are not using those professionally designed and supervised chatbots,” Matthew Johnson told CTVNews.ca in an interview on Thursday.
“They’re using general purpose chatbots that really were not designed for this in any way. Again, they may have some guardrails, but the guardrails are only for the most extreme content.”
Like many of the social media applications popular with young people, chatbots are often designed to keep users engaged for as long as possible, Johnson noted, which makes it harder for young people to step away.
“There was recent research that found that chatbots often will use emotionally manipulative language to try to convince users not to end a session,” he said.
“It’s the impact on what kids think human relationships should be, and it’s the habit-forming elements, particularly the possibility of those coming at the expense of real relationships, that are not as extreme as some of the other risks, but on the broader scale are significantly more serious.”
Despite the potential developmental risks chatbots pose to young people, AI technology isn’t going anywhere, Dautenhahn said, so preventing children from using them altogether may not be an effective way of avoiding those risks.
“In my experience, forbidding children to do something or to eat something or to watch something doesn’t work. It makes it even more attractive, and this technology’s not going away,” she said.
“In my personal opinion, the best parents can do is to sit with their child and explain it to them. Clearly, you have to explain differently to a three-year-old than to a nine-year-old, but you can if you use the right words. It’s something, to begin with, it’s not someone.”
Dautenhahn said it’s important for children to understand as early as possible that although a chatbot can generate speech in a manner often indistinguishable from a human’s, it doesn’t have the common set of values real people share.
“We typically know what is right, what is wrong, what we should say in a certain situation, and what we should not say in a certain situation, but these systems have absolutely no idea about that. They have no grounding; they are not a person,” she said.
Johnson said one of the best ways for parents to demonstrate that fact to their children is to show them “how easy it is to get a chatbot to agree to absolutely anything or to say completely absurd things.”
He said that after a character-building chatbot told him and his kids that the ancient Greek philosopher Plato wrestled the Roman mythical character Hercules in his youth, it helped them understand that chatbots aren’t actually intelligent.
“It immediately became clear to them, just from that one interaction, that these tools don’t really know anything at all,” he said.
With files from The Associated Press