With privacy concerns rising, can we teach AI chatbots to forget? – New Scientist

The way AI systems work means that we can’t easily delete what they have learned. Now, researchers are seeking ways to remove sensitive information without having to retrain them from scratch
By Shubham Agarwal
31 October 2023
Illustration: Pete Reynolds
I have been writing on the internet for more than two decades. As a teenager, I left a trail of blogs and social media posts in my wake, ranging from the mundane to the embarrassing. More recently, as a journalist, I have published many stories about social media, privacy and artificial intelligence, among other things. So when ChatGPT told me that my output may have influenced its responses to other people’s prompts, I rushed to wipe my data from its memory.
As I quickly discovered, however, there is no delete button. AI-powered chatbots, which are trained on datasets including vast numbers of websites and online articles, never forget what they have learned.
That means the likes of ChatGPT are liable to divulge sensitive personal information, if it has appeared online, and that the companies behind these AIs will struggle to comply with “right-to-be-forgotten” regulations, which compel organisations to remove personal data on request. It also means we are powerless to stop hackers manipulating AI outputs by planting misinformation or malicious instructions in training data.
All of which explains why many computer scientists are scrambling to teach AIs to forget. While they are finding that it is extremely difficult, “machine unlearning” solutions are beginning to emerge. And the work could prove vital beyond addressing concerns over privacy and misinformation. If we are serious about building AIs that learn and think like humans, we might need to engineer them to forget.
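The difficulty is easiest to see by contrast with a system where deletion is trivial. The sketch below is a hypothetical toy, not how any real chatbot works: a model that simply stores word counts can "unlearn" a document exactly, by subtracting the counts that document contributed. An LLM, by contrast, smears what it learns from each document across billions of shared weights, so there is nothing so clean to subtract.

```python
from collections import Counter

class UnigramModel:
    """A toy count-based model where exact 'unlearning' is possible.

    Purely illustrative: real LLMs entangle every training document in
    billions of shared weights, which is why unlearning them is hard.
    """

    def __init__(self):
        self.counts = Counter()

    def train(self, document):
        # Learning = adding the document's word counts.
        self.counts.update(document.lower().split())

    def forget(self, document):
        # Unlearning = subtracting exactly the counts the document added.
        self.counts.subtract(document.lower().split())
        self.counts += Counter()  # drop entries whose count fell to zero

    def probability(self, word):
        total = sum(self.counts.values())
        return self.counts[word] / total if total else 0.0


model = UnigramModel()
model.train("alice posted about privacy")
model.train("bob posted about football")

model.forget("alice posted about privacy")

# After forgetting, the state is identical to a model that never saw
# alice's post at all -- the gold standard unlearning aims for.
retrained = UnigramModel()
retrained.train("bob posted about football")
assert model.counts == retrained.counts
```

For the count model, forgetting provably leaves the same state as retraining from scratch without the deleted data; machine-unlearning research tries to approximate that guarantee for neural networks without paying the full cost of retraining.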
The new generation of AI-powered chatbots like ChatGPT and Google’s Bard, which produce text in response to our prompts, are underpinned by large language models (LLMs). These are trained …