Breaking Down The CHAT Act, A Step Toward Federal Rules on AI Companions

Sen. Jon Husted (R-OH)
In recent months, United States lawmakers have turned their attention to a fast-growing area of artificial intelligence: companions — chatbots that simulate friendship, mentorship or even romantic relationships. With the rise of systems capable of emotional, personalized engagement, and following several tragic suicides linked to interactions with such products, policymakers are increasingly concerned about their impact on minors.
A recent survey by Common Sense Media found that 72% of teenagers aged 13 to 17 have used an AI companion, and more than half do so regularly. Another study, released by dating app company Match, found that 16% of adult US singles have interacted with an AI companion as a romantic partner, with that figure rising to 33% among Gen Z users. While these systems can simulate emotional support or carry on a conversation, the psychological risks of dependency and stunted social development are significant, especially for vulnerable groups such as minors.
Seton Hall law professor Gaia Bernstein warns that the “window of opportunity is now” and that failing to act could repeat the regulatory mistakes made with social media. Similarly, the American Psychological Association (APA) has urged developers and policymakers to establish healthy boundaries in simulated human relationships.
Amid this rising scrutiny, one notable federal proposal has emerged as a potential vehicle to tackle these concerns: the Children Harmed by AI Technology, or CHAT, Act (S. 2714).
Introduced in September by Sen. Jon Husted (R-OH), the CHAT Act seeks to regulate the design and operation of AI companions, particularly those accessible to minors.
Although the bill is still in its early stages, it marks one of the first federal efforts to define ethical and legal boundaries around AI systems that build simulated human relationships. Whether it passes or not, the proposal signals that US lawmakers are beginning to take AI companions seriously.
At its core, the CHAT Act aims to shield minors from explicit sexual content and harmful behavior generated by AI companions. The bill’s sponsors argue that these systems can blur the line between simulation and reality, potentially fostering emotional dependency or exposing young users to sexualized conversations. “I wrote this bill to help parents keep their kids safe and to make sure Americans harness the benefits of AI while putting strong safeguards in place,” Husted said in a press release announcing the bill.
The legislation defines an AI companion as any software-based AI designed “for the primary purpose of simulating interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.” Under the measure, developers would, in particular, need to:
- verify users’ ages through “a commercially available method or process that is reasonably designed to ensure accuracy”;
- obtain verifiable parental consent before a minor can access an AI companion;
- block minors’ access to sexually explicit content;
- direct users who express suicidal ideation to crisis resources such as a suicide prevention hotline; and
- disclose at regular intervals that the companion is an artificial system, not a human.
The bill also includes a “safe harbor” clause to limit liability for companies that, in good faith, rely on user-provided age information and comply with recognized age-verification and safety standards.
Separately, House lawmakers introduced the AI Warnings and Resources for Education, or AWARE, Act (H.R. 5360) in the same month. That bill would require federal agencies to create public educational resources to help parents, educators and minors understand safe AI chatbot use, privacy risks and responsible supervision practices.
US regulation of AI companions has begun to fragment at the state level, with states including New York, California and North Carolina taking or proposing distinct approaches to the issue. Together, the CHAT and AWARE proposals suggest a growing push to craft a unified federal standard — a goal echoed by President Donald Trump in July, when he said we “have to have a single federal standard, not 50 different states regulating this industry of the future.” Many state legislators have argued AI regulation may often be better handled by Congress.
Still, both bills remain in the early legislative stages, and advocacy groups have voiced both support for and opposition to federal attempts at regulating AI companions.
The strongest opposition has come from a coalition of free-market and digital rights organizations, which published a letter in September warning of the CHAT Act’s potential privacy and data protection risks, including that the bill would require children to hand over government IDs or biometric data to technology companies that might be bad actors. Because the proposed law would also force parents to prove their status as a child’s legal guardian in order to consent to the child’s AI companion use, the CHAT Act would compel the collection of highly sensitive data, creating notable risks of misuse. These fears are not unfounded: a recent data breach involving government ID photos used for age verification on Discord shows how easily sensitive information can be exposed.
This risk was illustrated by a recent case in which intimate conversations and content exchanged by 400,000 users of two AI companion apps were exposed on the internet through an unprotected streaming and content delivery system. No government IDs, biometrics or parent-child relationship records were involved in that case, but the outcome could have been different had the providers been legally required to store such data for age verification. Requiring providers to collect sensitive data for age verification would therefore drastically heighten the privacy and misuse risks in the event of a data breach.
While parental monitoring and age verification could enhance safety, lawmakers must carefully weigh these benefits against privacy risks and the limited reliability of verification technologies. The CHAT Act’s requirement for “a commercially available method or process that is reasonably designed to ensure accuracy” remains vague, raising questions about what technologies qualify and how privacy-invasive and effective they might be.
Another concern is the breadth of the CHAT Act’s scope. It applies to any AI system whose main purpose is “simulating interpersonal […] interaction.” This would appear to include apps like Replika, CrushOn AI, Spicychat and Character AI, which are built around emotional or romantic engagement.
However, the language could also extend to general-purpose AI systems such as Gemini or ChatGPT, which are increasingly used for companionship, and even to AI-assisted learning apps like Duolingo or social chatbots embedded in video games. This is because nearly all AI chatbots involve interpersonal interaction with some degree of anthropomorphism.
Applying the law to the first category seems reasonable given the emotional and behavioral risks. But extending it to educational or gaming contexts without any exceptions raises questions about whether the CHAT Act’s scope is too broad — and whether it could unintentionally affect areas where risks like exposure to explicit content, encouragement of self-harm or emotional manipulation are limited.
Proponents of the bill argue that the risks justify strong safeguards. The advocacy group Count on Mothers described the CHAT Act as reflecting “what mothers across the country, from all backgrounds and political perspectives, consistently express in our national research: AI and tech platforms must be held accountable for how their tools impact children.” They argue that transparency, age verification and parental oversight are essential to child safety in an AI-driven world.
As with California’s SB 243, the CHAT Act is likely to raise First Amendment issues. Many view AI-generated text as enjoying First Amendment protection, though the question remains contested. Restrictions on conversational content, such as prohibiting sexual topics, could therefore be challenged as unconstitutional restrictions on speech.
The CHAT Act’s mandatory disclosure requirements, which would compel AI systems to announce their artificial nature periodically, could also raise compelled speech concerns. Similar debates have already emerged around California’s SB 243, which Gov. Gavin Newsom (D) signed into law this month. Critics such as the Consumer Choice Center have warned that the law might set a troubling precedent for policing “artificial speech,” though one could argue it would withstand constitutional scrutiny given its child-protection rationale. At least as to the CHAT Act’s proposed disclosure obligation, that conclusion would align with the view the California Assembly Standing Committee on Judiciary took of a similar disclosure requirement under SB 243.
Another challenge lies in the interaction between federal and state laws. Several states, like California, New York, North Carolina and Utah, have already introduced or enacted AI-related legislation, each partially addressing issues such as age verification, algorithmic transparency and online child safety.
Currently, the CHAT Act does not include federal preemption. Consequently, developers could face conflicting obligations across jurisdictions, leading to a “regulatory maze” reminiscent of early internet privacy law. This tension could undermine the bill’s goals: companies might withdraw from stricter states, while smaller startups could struggle with compliance costs. The unintended result could be further market concentration among large AI providers able to afford complex legal compliance. If federal preemption were introduced during the legislative process, the CHAT Act could override these conflicting state laws, simplifying compliance but potentially weakening stronger local protections.
For now, the CHAT Act’s future remains uncertain. It faces a divided political landscape and pushback from free speech advocates and segments of the technology industry. Yet its central message of protecting children from predatory or manipulative AI systems has strong bipartisan appeal.
As Sen. Dick Durbin (D-IL) observed at a recent hearing on the topic of AI chatbots more broadly, “This is one of the few issues that unites a very diverse caucus in the Senate Judiciary Committee, the most conservative, the most liberal and everything in between.”