ChatGPT: European regulators take chatbots to task
The proliferation of ChatGPT has recently drawn the attention of European data protection authorities, who are concerned with how user data (linked to registration details) and data entered into and processed by the platform itself are being used. These concerns form a logical part of the wider discussion about how AI platforms, including natural language processors and large language models, should be regulated.
The initial commotion around ChatGPT focused on how it could be utilised, whether for good or ill. As we discussed in an article which we published in February 2023, one potential exploitation was by threat actors seeking to refine phishing materials. This anxiety has only been exacerbated by reports that users are inputting commercially sensitive information into the platform when testing its capabilities. There are also fears that users are entering confidential information and personal data into the chatbot, potentially leading to breaches of the EU GDPR/UK GDPR. Businesses need to be aware of the risks associated with use of AI platforms by their employees, whether sanctioned or unauthorised, and will need to have policies and measures in place to avoid deliberate or inadvertent breaches of the GDPR regarding such use.
The question increasingly being asked is: How is OpenAI, the developer of ChatGPT, using the data it has obtained via the tool?
OpenAI’s privacy policy states that it collects the IP addresses of users, device information, browser type and settings, and information about how they interact with the site, including specific features, actions or types of content. The privacy policy also states that in “certain circumstances we may provide [users’] Personal Information to third parties without further notice to [users].”
Given the broad nature of this policy, and the aforementioned concerns, it is not surprising that urgent measures are being proposed, and even undertaken, by regulators to ensure that ChatGPT itself is compliant with data protection legislation.
Aside from the EU/UK GDPR, the draft AI Act, proposed European Union legislation first published in 2021, has recently been amended to accommodate the surge in ‘general purpose’ AI models such as ChatGPT. Once implemented, the AI Act will require generative AI models to disclose the use of any copyrighted materials used to develop the systems. In addition, these tools will have to be developed and designed in line with EU laws and fundamental rights. There is an interesting overlap between some of the core obligations under the draft AI Act, relating to transparency, fairness and so on, and the principles of the EU/UK GDPR.
Actions of European data protection authorities
In April, the European Data Protection Board (“EDPB”) decided to launch a dedicated task force to foster cooperation and to exchange information on possible enforcement action conducted by data protection authorities against ChatGPT. The discussion at EDPB level was instigated by the Spanish data protection agency (AEPD), and was preceded by the decision of the Italian data regulator, known as the Garante, to temporarily block ChatGPT from operating in Italy. The activity was prompted by a report of a data breach affecting user conversations and payment details.
OpenAI was ordered to undertake a series of measures before the Garante would be willing to lift the temporary suspension.
OpenAI responded to these requests and the ban was subsequently lifted, with ChatGPT being permitted to recommence operating in Italy.
These measures give an indication as to the steps that OpenAI and other AI platforms will increasingly face in relation to their future operations in the EU and UK.
The Italian data protection regulator also expressed concerns about the lack of an age verification mechanism which meant that ChatGPT allowed children to receive “responses that are absolutely inappropriate to their age and awareness.” In response, OpenAI was required to immediately implement an age request system for the purpose of registration and submit an action plan to introduce a robust age verification system by 30 September 2023.
These steps are unsurprising: the Italian regulator has been at the vanguard of taking proactive steps in respect of children’s exposure to AI tools, as evidenced by its recent measures against the developer of the AI chatbot Replika. We have also reported recently on updates in this area.
Other European data protection authorities have also announced their own investigations into ChatGPT.
What next?
The urgency of these interventions and the attention of regulators reflect the widespread use of ChatGPT. Given the thought-provoking capabilities of this and other generative AI tools, it is not surprising that people are using them without considering the associated data protection risks. Unfortunately, some of the adverse consequences of such use will only be encountered through trial and error, and through the publicity that this generates.
In the meantime, we will continue to report on developments in this area, in particular steps taken by data protection authorities to regulate this space.
Regulation (EU) 2016/679 – General Data Protection Regulation (GDPR)
© Copyright 2006 – 2023 Law Business Research