OpenAI Lets Parents Track Kids’ ChatGPT Activity


OpenAI introduced new ways for parents to monitor their children’s ChatGPT use.

The artificial intelligence startup’s parental controls now let parents connect their accounts with their teen’s account, according to a Monday (Sept. 29) company blog post.
The announcement came weeks after OpenAI said it was developing new child safety measures for its AI chatbot, including an age verification system.
“Teens are growing up with AI, and it’s on us to make sure ChatGPT meets them where they are,” OpenAI said in a Sept. 16 blog post. “The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult.”
If OpenAI is not confident about a user’s age or has incomplete information, it will “default to the under-18 experience,” the post said.
The new measures came weeks after the parents of a teen who died by suicide sued OpenAI, alleging that ChatGPT encouraged the boy’s actions in their conversations. The Federal Trade Commission, meanwhile, said it wants to study how AI can affect children’s mental health and safety.
The FTC announced earlier this month that it is issuing orders to OpenAI and six other providers of AI chatbots seeking information on how those companies measure and monitor potentially harmful impacts of their technology on young users.
The other companies include Google, Character.AI, Instagram, Meta, Snap and xAI.
“AI chatbots may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users,” the FTC said in a Sept. 11 news release. “AI chatbots can effectively mimic human characteristics, emotions and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”
Also Monday, OpenAI shared some of the patterns it has seen from users attempting to share or generate child sexual abuse material (CSAM) and child sexual exploitation material (CSEM).
“In some cases, we encounter users attempting to coax the model into engaging in fictional sexual roleplay scenarios while uploading CSAM as part of the narrative,” the company wrote in a blog post. “We have also seen users attempt to coax the model into writing fictional stories where minors are put in sexually inappropriate and/or abusive situations—which is a violation of our child safety policies, and we take swift action to detect these attempts and ban the associated accounts.”