ChatGPT wrote code that can make databases leak sensitive information – New Scientist

Advertisement
Explore by section
Explore by subject
Explore our products and services
Six AI tools, including OpenAI’s ChatGPT, were exploited to write code capable of damaging commercial databases – although OpenAI appears to have now fixed the vulnerability
By Jeremy Hsu
25 October 2023

ChatGPT in web and mobile development

A vulnerability in Open AI’s ChatGPT – now fixed – could have been used by malicious actors

Amir Sajjad/Shutterstock

A vulnerability in Open AI’s ChatGPT – now fixed – could have been used by malicious actors
Amir Sajjad/Shutterstock
Researchers manipulated ChatGPT and five other commercial AI tools to create malicious code that could leak sensitive information from online databases, delete critical data or disrupt database cloud services in a first-of-its-kind demonstration.
The work has already led the companies responsible for some of the AI tools – including Baidu and OpenAI – to implement changes to prevent malicious users from taking advantage of the vulnerabilities.
“It’s the very first study to demonstrate that vulnerabilities of large language models in general can be exploited as an attack path to online commercial applications,” says Xutan Peng, who co-led the study while at the University of Sheffield in the UK.
Advertisement

Read more
Win $12k by rediscovering the secret phrases that secure the internet
Peng and his colleagues looked at six AI services that can translate human questions into the SQL programming language, which is commonly used to query computer databases. “Text-to-SQL” systems that rely on AI have become increasingly popular – even standalone AI chatbots, such as OpenAI’s ChatGPT, can generate SQL code that can be plugged into such databases.
The researchers showed how this AI-generated code can be made to include instructions to leak database information, which could open the door to future cyberattacks. It could also purge system databases that store authorised user profiles, including names and passwords, and overwhelm the cloud servers hosting the databases through a denial-of-service attack. Peng and his colleagues presented their work at the 34th IEEE International Symposium on Software Reliability Engineering on 10 October in Florence, Italy.
Sign up to our The Weekly newsletter
Receive a weekly dose of discovery in your inbox.
Their tests with OpenAI’s ChatGPT back in February 2023 revealed the standalone AI chatbot could generate SQL code that damaged databases. Even someone using ChatGPT to generate code in order to query a database for an innocent purpose – such as a nurse interacting with clinical records stored in a healthcare system database – might actually be given harmful SQL code that damaged the database.
“The code generated from these tools may be dangerous, but these tools may not even warn the user,” says Peng.
The researchers disclosed their findings to OpenAI. Their follow-up testing suggests that OpenAI has now updated ChatGPT to shut down the text-to-SQL issues.

Read more
Mathematician warns US spies may be weakening next-gen encryption
Another demonstration showed similar vulnerabilities in Baidu-UNIT, an intelligent dialogue platform offered by the Chinese tech giant Baidu that automatically converts client requests written in Chinese into SQL queries for Baidu’s cloud service. After the researchers sent a disclosure report with their testing results to Baidu in November 2022, the company gave them a financial reward for finding the weaknesses and patched the system by February 2023.
But unlike ChatGPT and other AIs that rely on large language models – which can perform new tasks without much or any prior training – Baidu’s AI-powered service leans more heavily on prewritten rules to carry out its text-to-SQL conversions.
Text-to-SQL systems based on large language models seem to be more easily manipulated into creating malicious code than older AIs that rely on prewritten rules, says Peng. But he still sees promise in using large language models for helping humans query databases, even if he describes the security risks as having “long been underrated before our study”.
Neither OpenAI nor Baidu responded to a New Scientist request for comment on the research.

Reference:
arXiv DOI: 10.48550/arXiv.2211.15363
Topics:
Advertisement
Explore the latest news, articles and features
News
Free
News
Subscriber-only
News
Subscriber-only
Analysis
Subscriber-only
Trending New Scientist articles
1
2
3
4
5
6
7
8
9
10
Advertisement
Download the app

source

Jesse
https://playwithchatgtp.com