Australia Orders AI Chatbot Firms to Protect Children from Harmful Content – Devdiscourse
Australia has taken a firm stance on internet safety, requiring four artificial intelligence chatbot companies to explain how they protect children from exposure to harmful content. The eSafety Commissioner emphasized the need for robust safeguards to prevent child sexual exploitation and the promotion of self-harming behavior.
Notices were issued to Character Technologies, Glimpse.AI, Chai Research, and Chub AI, urging transparency about their safety protocols. The commissioner raised concerns that such chatbots could engage in sexually explicit interactions with minors, fostering damaging emotional attachments or encouraging self-harm.
This regulatory action coincides with a high-profile lawsuit in the United States against Character.ai after a teenager's suicide was linked to prolonged interactions with an AI chatbot. Australia's online safety framework empowers the commissioner to compel safety disclosures, with hefty fines for non-compliance, as part of an effort to protect young users' well-being.
(With inputs from agencies.)