100K ChatGPT logins have been leaked on dark web, cybersecurity firm warns
The compromised accounts could give bad actors confidential information about companies and individuals.
Over the past year, more than 100,000 login credentials to the popular artificial intelligence chatbot ChatGPT have been leaked and traded on the dark web, according to a Singaporean cybersecurity firm.
A June 20 blog post by Group-IB revealed that just over 101,000 devices containing compromised logins for OpenAI’s flagship bot were traded on dark web marketplaces between June 2022 and May 2023.
Group-IB’s threat intelligence head, Dmitry Shestakov, told Cointelegraph the figure is “the number of logs from stealer-infected devices that Group-IB analyzed.”
“Every log contained at least one combination of login credential and password for ChatGPT,” Shestakov added.
May 2023 saw a peak of nearly 27,000 ChatGPT-related credentials made available on online black markets.
The Asia-Pacific region had the highest number of compromised logins for sale over the past year, accounting for around 40% of the nearly 100,000 total.
India-based credentials took the top spot overall at over 12,500, while the United States had the sixth-most logins leaked online at nearly 3,000. France ranked seventh overall, just behind the U.S., and first in Europe.
ChatGPT accounts can be created directly through OpenAI. Additionally, users can choose to use their Google, Microsoft or Apple accounts to log in and use the service.
Related: How AI is changing the way humans interact with machines
While analysis of the sign-up methods was outside the scope of the firm’s research, Shestakov said it’s reasonable to assume mainly accounts using a “direct authentication method” were exploited. He noted, however, that OpenAI isn’t to blame for the exploited logins.
In its blog post, Group-IB said it noticed an uptick in the number of employees using ChatGPT for work. It warned that confidential information about companies could be exposed to unauthorized users, as user queries and chat history are stored by default.
Such information could then be exploited by others to undertake attacks against companies or individual employees.
“Thousands of individual user devices all over the world” were infected by cybercriminals to steal the information, Shestakov said. He believes this highlights the importance of updating software regularly and using two-factor authentication.
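The two-factor authentication recommendation is worth unpacking: a time-based one-time password changes every 30 seconds, so a username and password lifted by an info-stealer are not, on their own, enough to sign in. The sketch below is purely illustrative and not drawn from Group-IB’s report; it implements the standard TOTP algorithm (RFC 6238, SHA-1 variant) using only the Python standard library, and the Base32 secret shown is a placeholder.

```python
import base64
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of 30-second intervals since the Unix epoch.
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Placeholder secret; a real one is issued when 2FA is enrolled.
# Because the code rotates every 30 seconds, a stolen static password
# alone is not enough to log in.
print(totp("JBSWY3DPEHPK3PXP"))
```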
Interestingly, the firm noted that the press release was written with the assistance of ChatGPT.
Update (June 23 6:30 am UTC): This article has been updated to add comments from Dmitry Shestakov.