Journal of Free Speech Law: "Bots Behaving Badly: A Products Liability Approach to Chatbot-Generated Defamation"
The article is here [UPDATE: link fixed]; the Introduction:
Within two months of its launch, ChatGPT became the fastest-growing consumer application in history, with more than 100 million monthly active users. Created by OpenAI, a private company backed by Microsoft, ChatGPT is just one of several sophisticated chatbots made available to the public in late 2022. These large language models generate human-like responses to user prompts based on information they have "learned" during a training process. Ask ChatGPT to explain the concept of quantum physics and it synthesizes the subject into six readable paragraphs. Prompt it with an inquiry about the biggest scandal in baseball history and it describes the Black Sox Scandal of 1919. It is a tool that can respond to an incredible variety of content-creation requests, from academic papers and language translations to explanations of complicated math problems and jokes. But it is not without risk. It is also capable of generating speech that causes harm, such as defamation.
Although some safeguards are in place, there are already documented examples of ChatGPT creating defamatory speech. And this should not come as a surprise—if something is capable of speech, it is capable of false speech that sometimes causes reputational harm. Of course, artificial intelligence (AI) tools have caused speech harms before. Amazon's Alexa device—touted as a virtual assistant that can make your life easier—has on occasion gone rogue: It has made violent statements to users and even suggested they engage in harmful acts. Google search's autocomplete function has fueled defamation lawsuits arising from suggested words such as "rapist," "fraud," and "scam." An app called SimSimi has notoriously perpetuated cyberbullying and defamation. Tay, a chatbot launched by Microsoft, caused controversy when, just hours after its launch, it began to post inflammatory and offensive messages. So the question isn't whether these tools can cause harm. It's who—if anyone—is legally responsible when they do.
The answer is not straightforward, in part because in each example of harm listed above, humans were not responsible—at least not directly—for the problematic speech. Instead, the speech was produced by automated AI programs that were designed to generate output based on various inputs. Although the AI was written by humans, the chatbots were designed to collect information and data in order to generate their own content. In other words, a human was not pulling levers behind a curtain; the human had taught the chatbot how to pull the levers on its own.
As the use of AI for content generation becomes more prevalent, questions arise about how to assign fault and responsibility for defamatory statements made by these machines. Given the projected continued growth of content-generating AI applications, it is critical to develop a clear framework for how potential liability would be assigned. Such a framework will spur continued growth and innovation in this area and ensure that proper consideration is given to preventing speech harms in the first instance.
The default assumption may be that someone defamed by an AI chatbot would have a case for defamation. But there are hurdles in applying defamation law to speech generated by a chatbot, particularly because defamation law requires an assessment of mens rea that will be difficult to make for a chatbot (or its developers). This article evaluates the challenges of applying defamation law to chatbots. Section I discusses the technology behind chatbots, how it operates, and why it is qualitatively different from earlier forms of AI. Section II examines the challenges that arise in assigning liability under traditional defamation law when a chatbot publishes defamatory speech. Sections III and IV suggest that products liability law might offer a solution—either as an alternative theory of liability or as a framework for assessing fault in a defamation action. After all, products liability law is well suited to addressing who is at fault when a product causes injury, includes mechanisms for assessing the fault of product designers and manufacturers, and adapts easily to emerging technologies because of its broad theories of liability.
Eugene Volokh is the Gary T. Schwartz Distinguished Professor of Law at UCLA. Naturally, his posts here (like the opinions of the other bloggers) are his own, and not endorsed by any educational institution.