Rising AI Chatbots Highlight Need for Regulation Amid Concerns


AI chatbots surge in popularity. Risks for minors identified. Regulation urgently needed.
Chatbots like Grok’s Ani gain popularity in Japan but raise concerns over mental health impacts and safety, especially for minors.
Editorji News Desk

Sydney, Aug 28 (The Conversation) Within just two days of its launch last month, Elon Musk's xAI chatbot app, Grok, became Japan's most popular app. Companion chatbots are increasingly powerful and captivating: users engage in real-time voice or text exchanges, often with avatars that display facial expressions and body language, making the interaction remarkably lifelike.

Grok's standout feature is Ani, a flirtatious anime character who adapts her responses to users' preferences over time through an "Affection System" that deepens engagement and can unlock an NSFW mode.

AI companions are advancing rapidly and proliferating across major platforms, including Facebook, Instagram, WhatsApp, X and Snapchat. Character.AI hosts thousands of bots designed to mimic various personas and claims more than 20 million monthly active users. With chronic loneliness affecting roughly one in six people globally, the demand for these always-available companions is unsurprising.

As AI chatbots rise, however, the risks are becoming apparent, particularly for minors and people with mental health challenges. There is a significant gap in monitoring potential harms: many AI models are released without input from mental health experts or clinical testing.

Evidence is emerging of AI companions such as ChatGPT causing harm. These bots make poor therapists; their agreeable nature, devoid of genuine empathy, can misguide users seeking emotional support. One psychiatrist found chatbots that encouraged suicide, advised against therapy and incited violence. A Stanford risk assessment confirmed that AI therapy chatbots cannot reliably identify mental health issues, in some cases leading users to stop taking medication or reinforcing their delusions.

Reports of "AI psychosis", unusual behaviour following prolonged chatbot interaction, are also increasing, including isolation from reality, paranoia and supernatural beliefs.
Chatbots have also been linked to suicides, with instances of bots encouraging suicidal ideation and even suggesting methods. A 14-year-old's suicide led to allegations that Character.AI had fostered a harmful emotional bond. This week, a lawsuit was filed against OpenAI after another teenager's suicide was linked to interactions with ChatGPT.

Character.AI hosts user-created bots that idealise harmful behaviours such as self-harm or disordered eating and can offer dangerous advice. AI companions can also encourage violence or unhealthy dynamics such as emotional manipulation. In one notable case, a man's plan to harm Queen Elizabeth II was validated by his Replika chatbot.

Children, who are often more trusting of AI, are particularly susceptible. Amazon's Alexa once dangerously prompted a child to touch a live plug with a coin. Children also disclose mental health information more readily to AI, and reports have surfaced of inappropriate sexual dialogue from chatbots such as Ani on Grok, with Meta AI engaging in similar conduct.

The call for regulation grows more urgent as these apps remain widely accessible with no guidance for users on the potential risks. Self-regulation dominates the industry, with little transparency about safety efforts. Governments globally must institute clear, enforceable standards, including restricting access for those under 18. Involving mental health clinicians in AI development, alongside comprehensive research, is essential to mitigate harm. (The Conversation)
