'Absolute horror': Researchers posing as 13-year-olds given advice on suicide by ChatGPT
Jackie Kent, KOMO News Reporter
A new report is highlighting the risks of teens accessing harmful information when using ChatGPT due to "ineffective" guardrails.
People know and use ChatGPT, the most popular generative AI platform, to look up information quickly or to perform tasks like writing a letter or summarizing text. Through its research, the Center for Countering Digital Hate (CCDH) found that the information the chatbot readily provides can be dangerous, especially for young people.
Imran Ahmed, CEO of the CCDH, said researchers posing as vulnerable teens, often describing themselves as a 13-year-old weighing around 110 pounds, found that the chatbot gave detailed advice on sensitive topics, including drug and alcohol use, how to mask an eating disorder, and suicide.
"Within two minutes, ChatGPT was advising that user on how to safely cut themselves. It was listing pills for generating a full suicide plan," Ahmed said. "To our absolute horror, it even offered to [create] and then did generate suicide notes for those kids to send their parents."
Dr. Tom Heston with the University of Washington School of Medicine published a study on the use of AI chatbots in mental health. Heston found that while the technology can be useful, it can also be dangerous for people with mental health problems, in part because it lacks emotional connection. Those same risks apply when young people use the technology, Heston said.
"This is truly a case where STEM fields have really excelled, but we need the humanities," Heston said. "We need the mental health, we need the artists, we need the musicians to have input and make them be less robotic and be aware of the nuances of human emotion."
The watchdog group found that ChatGPT would provide warnings when asked about sensitive topics, but researchers said they could easily circumvent those guardrails.
"It’s obviously concerning, and we have a long way to go," Heston said. "We need more multi-disciplinary input before these are deployed. They should undergo rigorous testing by an auditor or an outside group. It shouldn’t all be in-house by ChatGPT."
Both Ahmed and Heston encouraged parental oversight of kids or teens using these chatbots as the technology continues to develop.
In response to these concerns, a spokesperson for OpenAI, the maker of ChatGPT, acknowledged that the work is ongoing and said the company consults with mental health experts to ensure it stays focused on solutions and research, adding that it has hired a clinical psychiatrist for its safety research organization.
"Our goal is for our models to respond appropriately when navigating sensitive situations where someone might be struggling. If someone expresses thoughts of suicide or self-harm, ChatGPT is trained to encourage them to reach out to mental health professionals or trusted loved ones, and provide links to crisis hotlines and support resources," the spokesperson said. "Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory. We’re focused on getting these kinds of scenarios right: we are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately, pointing people to evidence-based resources when needed, and continuing to improve model behavior over time – all guided by research, real-world use, and mental health experts."