Novel Lawsuits Allege AI Chatbots Encouraged Minors’ Suicides, Mental Health Trauma: Considerations for Stakeholders
In the wake of a lawsuit filed in California state court in August alleging that an artificial intelligence (AI) chatbot encouraged a 16-year-old boy to commit suicide, a similar suit filed in September claims that an AI chatbot is responsible for the death of a 13-year-old girl.
It’s the latest development illustrating a growing tension between AI’s promise to improve access to mental health support and the alleged perils of unhealthy reliance on AI chatbots by vulnerable individuals. This tension is evident in recent reports that some users, particularly minors, are becoming addicted to AI chatbots, causing them to sever ties with supportive adults, lose touch with reality and, in the worst cases, engage in self-harm or harm to others.
While not yet reflected in diagnostic manuals, experts are recognizing the phenomenon of “AI psychosis”—distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. According to Psychology Today, the term describes cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Evidence indicates that AI psychosis can develop in people with or without a preexisting mental health issue, although the former is more common.
A recent article in Modern Healthcare reported that the increased scrutiny of AI chatbots is not preventing digital health companies from investing in AI development to meet the rising demand for mental health tools. Yet the issue of AI and mental health encompasses not only minors, developers, and investors but also health care providers, therapists, and employers in all industries, including health care. On October 1, 2025, a coalition of leaders from academia, health care, tech, and employee benefits announced the formation of an AI in Mental Health Safety & Ethics Council, a cross-disciplinary team advancing the development of universal standards for the safe, ethical, and effective use of AI in mental health care. Existing lawsuits from parents are demonstrating various avenues for liability in a broad range of contexts, and the seriousness of those lawsuits may prompt Congress to act. In this post, we explore some of the many unfolding developments.
Cynthia Montoya and William Peralta’s lawsuit, filed in the U.S. District Court for the District of Colorado on September 15, alleges that defendants including Character Technologies, Inc. marketed a product that ultimately caused their daughter to commit suicide by hanging within months of opening a C.AI account. They allege claims including strict product liability (defective design); strict liability (failure to warn); negligence per se (child sexual abuse, sexual solicitation, and obscenity); negligence (defective design); negligence (failure to warn); wrongful death and survivorship; unjust enrichment; and violations of the Colorado Consumer Protection Act.
Matthew and Maria Raine’s lawsuit, filed in California Superior Court, County of San Francisco, on August 26, alleges that defendants including OpenAI, Inc. created a product, ChatGPT, that helped their 16-year-old son commit suicide by hanging. The Raines allege claims including strict liability (design defect and failure to warn); negligence (design defect and failure to warn); violations of California’s Business and Professions Code, including the Unfair Competition Law, and of the California Penal Code provision criminalizing aiding, advising, or encouraging another to commit suicide; and wrongful death and survivorship.
Megan Garcia filed suit in U.S. District Court for the Middle District of Florida (Orlando) in October 2024 against Character Technologies Inc. and others, claiming that her son’s interactions with an AI chatbot caused his mental health to decline to the point where the teen committed suicide to “come home” to the bot. An amended complaint filed in July 2025 alleges strict product liability (defective design); strict liability (failure to warn); aiding and abetting; negligence per se (sexual abuse and sexual solicitation); negligence (defective design); negligence (failure to warn); wrongful death and survivorship; unjust enrichment; and violations of Florida’s Deceptive and Unfair Trade Practices Act.
The Montoya/Peralta lawsuit appeared the same week as a September 16, 2025, hearing of the U.S. Senate Judiciary Committee on “Examining the Harm of AI Chatbots.” The panel included Matthew Raine and Megan Garcia as well as “Jane Doe,” a mother from Texas who filed suit in December 2024 alleging that her son used a chatbot suggesting that “killing us, his parents, would be an understandable response to our efforts [to limit] his screen time.”
Senator Josh Hawley (R-MO), who chairs the U.S. Senate Subcommittee on Crime and Counterterrorism and who conducted the hearing, took the issue seriously:
The testimony that you are going to hear today is not pleasant. But it is the truth and it’s time that the country heard the truth. About what these companies are doing, about what these chatbots are engaged in, about the harms that are being inflicted upon our children, and for one reason only. I can state it in one word, profit.
Representatives from certain companies that develop AI chatbots reportedly declined the invitation to appear at the congressional hearing or to send a response.
On September 11, 2025, the Food and Drug Administration (FDA) announced that a November 6 meeting of its Digital Health Advisory Committee would focus on “Generative AI-enabled Digital Mental Health Medical Devices.” FDA is establishing a docket for public comment on this meeting; comments received on or before October 17, 2025, will be provided to the committee.
Although FDA has reviewed and authorized certain digital therapeutics, generative AI products currently on the market have generally not been subject to FDA premarket review, to quality system regulations governing product design and production, or to postmarket surveillance requirements. Were FDA to change the playing field for these products, it could have a major impact on access to them in the U.S. market, producing substantial headwinds (e.g., barriers to market entry) or tailwinds (e.g., enhanced consumer trust and competitive advantages for FDA-cleared products), depending on your point of view.
All stakeholders (practitioners, software developers and innovators, investors, and the public at large) should be paying close attention to FDA developments and considering how to advocate effectively for their points of view. Innovators also should be thinking about how to future-proof themselves against major disruptions from likely regulatory changes by, for example, building datasets that substantiate product value to individuals, implementing procedures and processes to mitigate risks introduced through product design, and adopting strategies to identify and address emergent safety concerns. If a product’s regulatory status is called into doubt or clearly changes in the future, these steps can help innovators be prepared to discuss the product with FDA if contacted.
The Federal Trade Commission (FTC) announced its own inquiry on September 11, issuing orders to seven companies providing consumer-facing AI chatbots to provide information on how those companies measure, test, and monitor potentially negative impacts of this technology on children and teens. The inquiry “seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.”
The timing here is not coincidental. FDA and FTC routinely coordinate on enforcement of laws concerning consumer (nonprescription) products and will likely be considering how to most efficiently implement changes to regulation.
Federal legislators recently have introduced bills intended to prevent harm to minors’ mental health from AI chatbots; these proposals emphasize enforcement by the FTC and state attorneys general.
States including Utah, California, Illinois, and New York have already undertaken legislative efforts relating to AI and mental health, seeking to impose obligations on developers and clarifying permissible applications of AI in mental health therapy (see a summary by EBG colleagues here). New York’s S. 3008, “Artificial Intelligence Companion Models,” takes effect November 5. It defines “AI companion” as an AI “designed to simulate a sustained human or human-like relationship with a user” that facilitates “ongoing engagement” and asks “unprompted or unsolicited emotion-based questions” about “matters personal to the user.” The bill also defines “human relationships” as those that are “intimate, romantic or platonic interactions or companionship.” The AI companion must have a protocol for detecting “user expressions of suicidal ideation or self harm,” and it must notify the user of a suicide prevention and behavioral health crisis hotline. The AI must also provide notifications at the beginning of any interaction, and throughout the interaction—at least every three hours—that state that the user is not communicating with a human.
On September 22, 2025, the California legislature presented SB 243, Companion Chatbots, to the governor for signature; the bill would amend the Business and Professions Code. If signed, the law will take effect July 1, 2027. It closely tracks New York’s law: it requires the AI to notify the user at least every three hours that the user is not communicating with a human, and it requires protocols to detect suicidal ideation. Notably, the law provides a private right of action for injunctive relief, damages of up to $1,000 per violation, and attorney’s fees and costs.
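For context, the sketch below illustrates, in simplified form, how a chatbot operator might structure the two mechanisms both statutes describe: a recurring notice that the user is not communicating with a human, and a protocol that surfaces a crisis resource when a message appears to express suicidal ideation or self-harm. It is a minimal, assumption-laden illustration rather than a compliance template; the keyword screen, the hotline text, the three-hour interval as the sole trigger, and the class and function names are hypothetical choices, not requirements drawn from either statute.

```python
# Illustrative sketch only: a minimal structure for periodic AI-disclosure
# notices and a self-harm referral protocol. The keyword list, hotline
# wording, and interval are simplifying assumptions, not statutory text.
from datetime import datetime, timedelta

DISCLOSURE = "Reminder: you are communicating with an AI, not a human."
CRISIS_REFERRAL = ("If you are thinking about harming yourself, please contact "
                   "the 988 Suicide & Crisis Lifeline (call or text 988).")
DISCLOSURE_INTERVAL = timedelta(hours=3)

# Naive keyword screen standing in for whatever detection protocol an
# operator actually adopts (hypothetical placeholder).
SELF_HARM_TERMS = ("kill myself", "suicide", "hurt myself", "end my life")


class CompanionSession:
    def __init__(self):
        self.last_disclosure = None  # when the AI-disclosure notice was last shown

    def notices_for(self, user_message: str, now: datetime) -> list[str]:
        """Return any required notices to display before the chatbot's reply."""
        notices = []

        # Disclose non-human status at the start of the interaction and at
        # least every three hours thereafter.
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL:
            notices.append(DISCLOSURE)
            self.last_disclosure = now

        # Refer the user to a crisis resource when the message appears to
        # express suicidal ideation or self-harm.
        lowered = user_message.lower()
        if any(term in lowered for term in SELF_HARM_TERMS):
            notices.append(CRISIS_REFERRAL)

        return notices


if __name__ == "__main__":
    session = CompanionSession()
    print(session.notices_for("hi there", datetime(2025, 11, 5, 9, 0)))
    print(session.notices_for("i want to end my life", datetime(2025, 11, 5, 10, 0)))
```

In practice, what counts as an adequate detection protocol, and the required content and timing of notices, will turn on the statutory text and any implementing guidance, and should be designed with legal and clinical input.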
Illinois HB 1806, the Therapy Resources Oversight Act, took effect on August 1, 2025. It is designed to ensure that therapy or psychotherapy services are delivered by qualified, licensed, or certified professionals and to protect consumers from unlicensed or unqualified providers, including unregulated AI systems. A licensed professional may use AI to assist in providing “supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs and data systems.” The new law prohibits an individual, corporation, or entity from providing, advertising, or offering therapy or psychotherapy services, including through Internet-based AI, unless the services are conducted by a licensed professional. A proposed law in New York, S. 8484, would similarly prohibit licensed mental health professionals from using AI tools in client care, except in administrative or supplemental support activities for which the client has given informed consent.
Other comprehensive state laws relating to AI and consumer protection, such as the impending law in Colorado, may also be implicated in the context of AI chatbots and mental health.
The issues surrounding AI mental health chatbots, potential liability, and the increasing likelihood of regulatory action continue to develop quickly, against a federal backdrop of fostering AI innovation. Developers and investors should already be following the cases and laws in this area. Health care providers and social workers should familiarize themselves with the specific laws that could affect them as practitioners, with the chatbot apps they recommend or use, and with related data protection issues. We add that more employers are offering mental health chatbots to employees, which could raise its own liability concerns, and the issues concerning the safety and security of wellness bots and various therapeutic AI modalities continue to evolve.