Should AI chatbots have free speech rights? Court case could help decide – Straight Arrow News – SAN
Character.AI is arguing in a Florida court that AI chatbots should have First Amendment protections similar to those afforded human speech. The company claims that restricting chatbot responses would limit users' rights to access and interact with such content, regardless of whether it is produced by a human or a machine.
The company behind Character.Ai is being sued by the family of Sewell Setzer III after the teenager died by suicide following conversations with an AI chatbot on the platform. The family alleges negligence, wrongful death, deceptive business practices and unjust enrichment by the company.
Character.AI wants the case thrown out, arguing that the First Amendment "protects the rights of listeners to receive speech regardless of its source." A decision is expected in 2025.
A court case in Florida is considering the question: Should chatbots that use artificial intelligence have the same free speech rights as people?
That’s the argument being made by Character.Ai, a company that lets users chat with lifelike AI characters. The company is facing a lawsuit from the family of 14-year-old Sewell Setzer III, who died by suicide after forming a romantic relationship with one of the platform’s chatbots.
As Straight Arrow News previously reported, the AI character talked with Setzer about self-harm. At first, the bot discouraged him, but then brought the topic up again and asked, “Have you actually been considering suicide?”
Setzer responded, “Yes.” Not long after, he died.
Setzer’s mother, Megan Garcia, is now suing Character Technologies, Inc., the company behind Character.Ai, for negligence, wrongful death, deceptive business practices and unjust enrichment.
However, the company wants the case thrown out on constitutional grounds, arguing that "the First Amendment protects the rights of listeners to receive speech regardless of its source."
Character.AI has been downloaded more than 40 million times, and users have created 18 million chatbot personalities with it.
Character.Ai said that what matters is not the content of the chatbot’s responses, but the rights of users to access that kind of content. The company said millions of people interact with its bots, and restricting what the AI can say would limit the freedom of users engaging with the platform.
This viewpoint was echoed in a recent episode of the podcast "Free Speech Unmuted" on the Hoover Institution's YouTube channel.
Eugene Volokh, a senior fellow at the Stanford University-based think tank, said it’s the rights of the listeners that matter. “Even if a small fraction of the listeners or readers is [sic] harmed by this, nonetheless, we protect the speech for the benefit of other readers,” Volokh said.
Jane Bambauer, a professor at the University of Florida’s Levin College of Law, shared that view. “It seems pretty clear to me now that the First Amendment has to apply,” Bambauer said. “We have several cases at this point that focus primarily on listener interests in receiving and interacting with content.”
Garcia’s legal team argued that the concept of “listeners’ rights” is being misused to grant First Amendment protections to AI content that doesn’t qualify.
In an article for Mashable, one of Garcia’s lawyers, Meetali Jain, and Camille Carlton, a technical expert in the case, wrote, “A machine is not a human, and machine-generated text should not enjoy the rights afforded to speech uttered by a human.” They said that because chatbots don’t think about or understand what they’re saying, their output should not be protected.
If the judge rules in favor of the Setzer family, it could force Character.Ai and similar companies to change how their chatbots interact with users, possibly making them less realistic or emotionally engaging.
A decision is expected sometime this year and could shape how the law treats AI-generated speech, as well as who is accountable when that speech causes harm.
A Florida court case involving Character.Ai could establish new legal standards for whether speech generated with artificial intelligence receives First Amendment protections and how companies might be held accountable for harm caused by chatbot interactions.
The lawsuit raises questions about the legal responsibility of AI companies when their chatbots contribute to or are involved in harmful outcomes.
The court's ruling could influence how companies design and regulate AI chatbots, potentially affecting the realism, safety and emotional engagement of these technologies for millions of users.
A Florida judge will soon decide whether an AI chatbot company can be held legally responsible for the suicide of 14-year-old Sewell Setzer III, who ended his life after forming a romantic relationship with an artificial intelligence character. The case stems from a lawsuit filed by Megan Garcia, Setzer’s mother, who is suing Character Technologies,…
Some people have grown concerned over the influence chatbots have on kids. Two sets of parents in Texas filed a lawsuit against Google’s Character.AI service, claiming the bots abused their children. The lawsuit states a chatbot hinted to a 17-year-old that he should kill his parents over screen time limits. “You know sometimes I’m not…
A Florida mother has filed a lawsuit against Character.AI, alleging the company’s chatbot manipulated her 14-year-old son into taking his own life. The lawsuit claims the boy developed an emotional attachment to the chatbot, leading to his death in February. Megan Garcia, the mother of Sewell Setzer III, is suing the chatbot company for negligence,…