Lawyer cites fake cases generated by ChatGPT in legal brief


The high-profile incident in a federal case highlights the need for lawyers to verify the legal insights generated by AI-powered tools.
A New York lawyer cited fake cases generated by ChatGPT in a legal brief filed in federal court and may face sanctions as a result, according to news reports.
The incident involving OpenAI’s chatbot took place in a personal injury lawsuit filed by Roberto Mata against Colombian airline Avianca, pending in the Southern District of New York.
Steven A. Schwartz of Levidow, Levidow & Oberman, one of the plaintiff’s attorneys, wrote in an affidavit that he consulted ChatGPT to supplement legal research he performed when preparing a response to Avianca’s motion to dismiss.
However, Judge P. Kevin Castel wrote in an early May order regarding the plaintiff’s filing that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” He called it “an unprecedented circumstance.”
In his affidavit filed later in May, Schwartz said that ChatGPT not only provided the legal sources, but assured him of the reliability of the opinions and citations that the court has since called into question.
For example, a document attached to his affidavit indicates he asked ChatGPT whether one of the six cases the judge had called bogus was real, and the chatbot responded that it was.
Additionally, he asked ChatGPT if the other cases provided were fake. The chatbot responded that they were also real and “can be found in reputable legal databases such as LexisNexis and Westlaw.”
Schwartz acknowledged in the affidavit that his source for the legal opinions “has revealed itself to be unreliable.”
Schwartz’s mistakes resulted in a front-page story in The New York Times, and the judge has scheduled a hearing in the coming weeks to consider possible sanctions.
Legal professionals have said Schwartz’s actions should serve as a cautionary tale for attorneys using AI-powered technology, but should not prompt them to abandon all legal industry use cases of artificial intelligence.
They said the most basic mistake Schwartz made was his failure to check whether the cases ChatGPT produced during his research were authentic.
Kay Pang, an experienced in-house counsel who works at VMware, wrote on LinkedIn that the lawyer involved “didn’t follow the most important rule — Verify, verify, verify!”
Nicola Shaver, the CEO and co-founder of Legaltech Hub, wrote on LinkedIn the attorney needed to conduct “independent verification” rather than simply asking ChatGPT if the cases were real. 
In a recent Lawtrades webinar, OpenAI Associate General Counsel Ashley Pantuliano called ChatGPT a helpful starting point for legal research, but warned that it can sometimes produce inaccurate information in response to prompts, something attorneys using the tool should keep in mind.
“As a lawyer, you don’t want to be wrong on what the law is or anything like that,” Pantuliano said during the webinar.
Schwartz admitted in his affidavit in the Avianca case that he had never previously used ChatGPT for conducting legal research and “therefore was unaware of the possibility that its contents could be false.”
He also wrote that he “greatly regrets having utilized artificial intelligence to supplement the legal research” he performed and “will never do so in the future without absolute verification of its authenticity.”
Schwartz did not respond to a request for comment Tuesday morning.
Alex Su, the head of community development at Ironclad, wrote on Substack that he feared the biggest takeaway most lawyers would have from Schwartz’s mistakes is “that they should never trust AI.”
He said this mindset would be a mistake for several reasons, including that ChatGPT is not synonymous with AI, nor is it representative of all legal tools powered by artificial intelligence.
Su highlighted that some AI-powered legal tech tools come from companies with a track record of making legal customers successful.
“Now that doesn’t mean that their generative AI products will be 100% reliable, of course,” Su wrote. “But vendors will be incentivized to warn users and speak candidly about their accuracy rates, which should far exceed ChatGPT’s—at least for law related use cases.”
He encouraged lawyers to approach AI with a “learning mindset” and remember it is most effective as a “first pass or first draft tool.”
Shaver also said attorneys should not “throw the baby out with the bathwater” when it comes to AI and should recognize they “will be using this technology no matter what anyway.”
“Educating yourself now is the best way to pave the way for your future,” Shaver wrote on LinkedIn. “Be aware of what products are building this tech into legal research workflows, and how they are tuning their solutions to provide reliable responses.”