ChatGPT Leaks Sensitive Data – Spiceworks News and Insights
The leaks exposed conversations, personal data, and login credentials.
Concerns regarding the security and privacy of generative AI tools have risen following reports of leaked conversations through OpenAI’s ChatGPT. The incident has raised questions about vulnerabilities in AI systems despite companies’ efforts to set up safeguards. Sensitive data, including usernames and passwords, was reportedly exposed to unrelated users during chat sessions.
The leaks reportedly included details of another user’s proposals and presentations, a serious violation of OpenAI’s privacy policies. According to the complaint, the incident occurred despite the affected user employing a strong password and other security measures.
See More: ChatGPT Accused of Privacy Violations by Italian Watchdog
According to OpenAI, the leak resulted from a hacker taking over a compromised account, with the conversations apparently originating in Sri Lanka rather than the user’s actual location in Brooklyn, New York. This is not the first time OpenAI has had to address user security concerns: in March 2023, a ChatGPT bug was found to have leaked users’ payment data.
In another incident, Samsung employees inadvertently exposed company secrets through ChatGPT, leading to an internal ban on the tool. The current incident highlights the challenges of improving security and privacy in artificial intelligence technologies and large language models. Leading AI companies such as OpenAI, Google, and Anthropic will need to maintain vigilant security postures and adopt concrete safeguards to prevent such incidents.
What do you think about data security issues with AI tech? Let us know your thoughts on LinkedIn, X, or Facebook. We’d love to hear from you!
Image source: Shutterstock