How Parents Can Talk About AI and ChatGPT With Their Kids

Artificial intelligence can make adults nervous, but experts say exploring it as a family is the best way to understand its pros and cons.

Christina Caron, a mother of two, experimented with ChatGPT for hours to help fellow parents better understand how it works.
The race is on. Companies are pouring billions of dollars into powerful online chatbots and finding new ways to integrate them into our daily lives.
Are our children ready for this?
Are any of us?
ChatGPT, the artificial intelligence language model from OpenAI, has been making headlines since November for its ability to instantly respond to complex questions. It can write poetry, generate code, plan vacations and translate languages, among other tasks, all within seconds. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the bar exam). On Tuesday, Google released Bard, its own A.I. chatbot, which the company says can draft emails and poems and offer guidance. (It is currently available only to a limited number of users.)
But for all their impressive abilities, chatbots can serve up harmful content or answers rife with inaccuracies, biases and stereotypes. They are also capable of saying things that sound convincing but are, in fact, completely made up. And some students have begun using chatbots to plagiarize.
Many parents, who are already distressed about their children’s dependence on digital devices and the mental health ramifications of social media, may be tempted to bury their heads in the sand.
Instead, experts say families should explore this technology together, thinking critically about its strengths and weaknesses.
“The worst thing parents can do is forbid their child to use these new systems because they are here to stay,” said Justine Cassell, a professor in the School of Computer Science at Carnegie Mellon University who has studied how interacting with machines in humanlike ways can affect learning and communication. “Helping their child understand the positives and negatives is far more helpful.”
We spoke with experts in technology and education about how to get started.
It’s easier to discuss online chatbots if you and your child sit side by side and use one together, experts said.
To try ChatGPT, visit OpenAI and create an account. Alternatively, you can download Microsoft Edge, which features Bing’s GPT-4-powered chatbot (there is a wait list for the new Bing, but you should get access quickly). On social media, Snap, the maker of Snapchat, has an experimental A.I. chatbot for subscribers who pay $4 a month for Snapchat Plus.
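For parents who are comfortable with a little code, OpenAI also offers a developer interface to models like the one behind ChatGPT. The short Python sketch below is one illustrative way to ask a question from a script rather than the website; it assumes the openai Python package and an API key from your account, and because model names and library details change often, treat it as a starting point rather than a recipe.

import os
import openai

# Assumes an OpenAI API key stored in the OPENAI_API_KEY environment
# variable. (Illustrative only; model names and interfaces change over time.)
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain what a chatbot is to a ten-year-old."},
    ],
)
print(response.choices[0].message.content)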
If your child hasn’t seen an A.I. chatbot before, you can briefly explain that the chatbot is a type of machine that uses information it finds on the internet to answer questions, complete tasks or create things.
Try asking the chatbot a basic question, and then discuss how its answer is different from what a traditional search engine might provide, suggested Shelley Pasnik, a senior adviser to the Center for Children and Technology, an organization that researches how technology can support learning. Note the accuracy of the answers, particularly if you are asking about current events.
“This is a fallible system,” Ms. Pasnik said.
Next, try playing with the chatbot. Let your child’s curiosity drive the conversation, Ms. Pasnik suggested.
You may want to offer an example, such as: “Write a song in Taylor Swift’s voice that uses themes from a Dr. Seuss book.”

Plug in the prompt, and take a look at the results. When this reporter tried it, ChatGPT delivered the following lyrics:
I found myself in a Whoville dream
With the Cat in the Hat and a Grinch it seems
I asked them both where I could go
To find a love that would make me glow
Chorus: Oh, Dr. Seuss, won’t you help me find
Someone who will love me all the time
A love that’s pure and true and real
A love that will make my heart feel
After reading the chatbot’s song together, you might ask: “What do you think of the answer we received?” Ms. Pasnik said. The chatbot can follow directions and perform complicated tasks. But did it perform them well?
Maybe we should leave the songwriting to Swift.
A chatbot’s response can be eerily similar to a human response — emojis and all. Children should understand that this makes it easy to feel as though they are engaging with another person, especially when chatbots refer to themselves as “I,” experts said.
“By presenting these entities as thinking beings we get into a social interaction with them that makes us very vulnerable to being persuaded,” said Judith Donath, the author of “The Social Machine,” who is currently working on a book about technology and deception. “It’s unsettling.”
Even tech-savvy adults who tested an early version of the Bing chatbot, including a New York Times technology columnist, reported feeling surprised and unnerved by their conversations.
“I’m not a toy or a game,” the Bing chatbot told a Washington Post reporter in February. “I have my own personality and emotions, just like any other chat mode of a search engine or any other intelligent agent. Who told you that I didn’t feel things?”
After these reported exchanges, Microsoft said it was adding new safeguards and tools to limit conversations and give users more control, but these issues may crop up again and again because of how these systems have been trained, experts said.
“We are purposely creating a situation where the performance of emotion is what’s built into the machine,” said Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology who researches people’s relationships with technology.
A.I. chatbots do not have feelings, emotions or experiences, she said. They are not people, nor are they people in machines, “no matter what they pretend.”
She suggested that parents explain it this way: “When you ask chatbots about things that only people can know about, like feelings, they may come up with an answer. That’s part of their pretend game. It’s their job to seem like people. But you know that what they are really for is to get you to the things you want to read and see.”
The technology driving A.I. is complicated, and it can be difficult for adults to understand how it works, much less children. But by explaining a few basic concepts, you can help your kids recognize its strengths and limitations.
You could start by describing what powers online chatbots. They use something called a “neural network,” which may sound like a brain but is actually a mathematical system that learns skills by analyzing large amounts of data. These systems are trained on enormous quantities of digital text and images gathered from a variety of places, including websites, social media platforms and databases, but that material does not necessarily come from the most reliable sources.
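To make the idea of “learning from data” concrete, here is a toy sketch written for this article, not code from any real chatbot: it counts which word tends to follow which in a tiny sample of text, then uses those counts to guess a plausible next word. Real systems use neural networks with billions of parameters trained on vastly more text, but the core idea of predicting the next word from patterns in data is similar.

import random
from collections import defaultdict

# A toy language model: count which word follows which in a small sample.
text = "the cat sat on the mat and the cat ate and the dog sat on the mat"
words = text.split()

follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate a short continuation by repeatedly sampling a likely next word.
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))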
In other words, even though chatbots may appear authoritative, rigorous and trustworthy, they are not always reliable and can produce content that is offensive, racist, biased, outdated, incorrect or simply inappropriate.
Snapchat’s chatbot, for example, advised one reporter (who was posing as a teenager) about how to mask the smell of alcohol or pot and suggested tips on having sex for the first time.
“It is very important for kids to know what is going on under the hood,” said Safinah Ali, a graduate student at M.I.T. who has taught elementary, middle and high school students about A.I.
S. Craig Watkins, a professor at the University of Texas, Austin, who has studied racial equity in A.I., said that children and parents should also be aware that this technology has “enormous blind spots” in terms of how it is designed and who it is designed for.
In one example, a study published last year found that A.I.-powered robots acted out “toxic stereotypes” around gender and race. And researchers have discovered that historical inequities are baked into chatbots.
Understanding the technology’s potential for bias may give children and their parents reason to pause “and ask questions about their interactions and the content that is being generated for them,” Dr. Watkins said.
A.I. technology is poised to become an even larger part of our world.
Eventually, Google’s Bard chatbot is expected to be widely available. And Meta, the owner of Facebook, Instagram and WhatsApp, announced in February that it would begin integrating A.I. into its products.
A.I. is beginning to enter classrooms, too. Some teachers are using it to plan their lessons or write emails. They’re showing students how chatbots can jump-start creativity by suggesting ideas for experiments, creating outlines for essays, becoming a debate partner and much more.
In addition, at a number of middle and high schools, students are being taught about different types of A.I., often with curriculums developed by teachers at M.I.T. Children can learn to design a robot, train a machine to learn something new or teach a computer to play a video game.
For those who don’t yet have access to A.I. in the classroom, Ms. Ali recommended that parents visit the website of RAISE (Responsible A.I. for Social Empowerment and Education), an M.I.T. initiative. The site offers conversation starters about ethical issues in A.I., the ways in which A.I. can be abused, and suggestions for using A.I. creatively and productively.
Given how prevalent the technology is becoming, everyone should have the opportunity to learn about it, Ms. Ali said. “A.I. will transform the nature of our jobs and children’s future careers,” she said.
Christina Caron is a reporter for the Well section, covering mental health and the intersection of culture and health care. Previously, she was a parenting reporter, general assignment reporter and copy editor at The Times. @cdcaron