The New York Times is suing Microsoft and OpenAI for billions. Here's what it claims their AI bots are up to
The New York Times is suing Microsoft and OpenAI, the creator of ChatGPT, claiming millions of its news articles have been misused by the tech companies to train their AI-powered chatbots.
It's the first time one of America's big traditional media companies has taken on the new technology in court. And it sets up a showdown over the increasingly contentious use of copyrighted content to fuel artificial intelligence software.
The legal complaint, which demands a jury trial in a New York district court, says the bots' creators have refused to recognise copyright protections afforded by legislation and the US Constitution. It says the bots, including those incorporated into Microsoft products like its Bing search engine, have repurposed the Times's content to compete with it.
"Times journalists go where the story is, often at great risk and cost, to inform the public about important and pressing issues," the Times's complaint argues.
"Their essential work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support.
"Defendants' unlawful use of The Times's work to create artificial intelligence products that compete with it threatens The Times's ability to provide that service."
The Times wants the court to hold Microsoft and OpenAI responsible "for the billions of dollars in statutory and actual damages that they owe". It's also requested the "destruction" of parts of the chatbots that incorporate Times content.
OpenAI told the ABC it respected content creators' rights and wanted to work with them to help them benefit from AI.
"Our ongoing conversations with the New York Times have been productive and moving forward constructively, so we are surprised and disappointed with this development," an OpenAI spokesperson said.
The ABC has also contacted Microsoft for comment.
Here's some of what the Times alleges the two companies are doing:
The complaint cites multiple examples of ChatGPT reciting slabs of New York Times journalism almost word for word.
One example pointed to in court documents is a five-part investigation into New York City's taxi industry, published in 2019.
The Times says its investigation involved 600 interviews, more than 100 requests for records, and the review of thousands of pages of documents such as bank records.
"OpenAI had no role in the creation of this content, yet with minimal prompting, will recite large portions of it verbatim," the complaint says.
The complaint points to OpenAI's shift from its formation as a "non-profit artificial intelligence company" in 2015 to becoming a multi-billion-dollar for-profit business, "built in large part on the unlicensed exploitation of copyrighted works belonging to The Times and others".
The Times's complaint also argues that "making great journalism is harder than ever", with business models collapsing across the industry over the past 20 years and forcing many US newspapers to close:
"If The Times and its peers cannot control the use of their content, their ability to monetize that content will be harmed. With less revenue, news organizations will have fewer journalists able to dedicate time and resources to important, in-depth stories, which creates a risk that those stories will go untold. Less journalism will be produced, and the cost to society will be enormous."
Like many newspapers, the New York Times uses a paywall on its website to restrict access to paid subscribers.
The company's complaint says that ChatGPT has been trained to memorise copies of its paywalled articles, which can then be served up to users who request them.
The court documents lodged by the Times include an example of a prompt, typed into ChatGPT by a user "paywalled out" of a feature about a group of skiers who were caught in an avalanche.
In response to the prompt, ChatGPT reproduces parts of the story.
The complaint notes that some of the text provided by ChatGPT was not included in the original article.
Another example relates to a restaurant review by Times critic Pete Wells that went viral in 2012.
The New York Times runs a consumer product review website, Wirecutter, with product recommendations that are based on "tens of thousands of hours conducting rigorous testing and research".
Much of Wirecutter's income comes from users clicking affiliate referral links, with retailers giving the company a commission for referring a buyer. It doesn't receive that commission if the buyer comes to the retailer via a chatbot.
"A user who already knows Wirecutter's recommendations for the best cordless stick vacuum, and the basis for those recommendations, has little reason to visit the original Wirecutter article and click on the links within its site," the complaint argues.
But the Times says the bots are also falsely telling users that Wirecutter has recommended certain products.
It says, for example, a query about Wirecutter's recommendations for the best office chair correctly copied the website's top four recommendations – then added two others, which never appeared in the Wirecutter list.
"Users rely on Wirecutter for high-quality, well-researched recommendations, and Wirecutter's brand is damaged by incidents that erode consumer trust and fuel a perception that Wirecutter's recommendations are unreliable," the complaint says.
The Times says the bots are producing what are known in AI-speak as "hallucinations" – that is, making things up – in response to requests for New York Times content.
It says this is "causing The Times commercial and competitive injury by misattributing content to The Times that it did not, in fact, publish."
It gives several examples of the bots inventing content and attributing it to the Times.
"Instead of saying, 'I don't know,' Defendants' GPT models will confidently provide information that is, at best, not quite accurate and, at worst, demonstrably (but not recognizably) false," the Times's complaint says.
It says this is "causing The Times commercial and competitive injury", and "it's misinformation".
The use of copyrighted material to train AI bots has now attracted multiple lawsuits.
High-profile American writers like John Grisham, Stephen King and Sarah Silverman are among those suing OpenAI for copyright breaches.
The Australian Society of Authors has raised concerns about the use of Australian writers' works too, but says many cannot afford to sue.
The society has been lobbying for government action.
The Times says it has a history of working productively with tech giants like Google, Meta (Facebook) and Apple. But its months-long attempts to reach an agreement with Microsoft and OpenAI have not led to a resolution, it says.