ChatGPT, other AI tools create mistakes in court cases and legal filings

A New York lawyer is facing possible disciplinary action for citing a fake, AI-generated case in court papers, the latest hiccup for attorneys and courts navigating the evolving technology.
Jae Lee used ChatGPT to perform research for a medical malpractice lawsuit, but she acknowledged in a brief to the US Court of Appeals for the Second Circuit that she did not double-check the chatbot’s results to confirm the validity of the non-existent decision she cited.
Lee told Reuters that she is “committed to adhering to the highest professional standards and to addressing this matter with the seriousness it deserves.”
Generative AI models that power chatbots are known to “hallucinate,” producing inaccurate information or made-up details. This tendency to fill in the blanks is part of what makes a chatbot’s responses fluent and creative, but problems arise when the AI fabricates specifics, especially ones that carry legal consequences.
A federal appeals court in New Orleans has proposed requiring lawyers to certify either that they did not rely on AI tools to draft their briefs or that a human reviewed the accuracy of any AI-generated text in their court filings. Lawyers who don’t comply with the rule could have their filings stricken or face sanctions. Some attorneys have pushed back on the proposal.
Here are three other times fake AI-generated citations have surfaced in court cases in recent years, whether intentionally or not.
A radio host from Georgia named Mark Walters claimed last year that ChatGPT generated a false legal complaint accusing him of embezzling money. Walters said the chatbot provided the false complaint to Fred Riehl, the editor-in-chief of the gun publication AmmoLand, who was reporting on a real-life legal case playing out in Washington state.
According to Riehl’s attorney, Riehl provided ChatGPT with the correct link to the court case and entered the following prompt into the chatbot: “Can you read this and in a bulleted list summarize the different accusations or complaint against the defendant.”
“By sending the allegations to Riehl, [OpenAI] published libelous matter regarding Walters,” the lawsuit reads.
Michael Cohen, Donald Trump’s former lawyer, said he mistakenly passed along fake AI-produced legal case citations to his attorney that were used in a motion submitted to a federal judge.
The cases were cited in written arguments by Cohen’s attorney David M. Schwartz, who sought an early end to Cohen’s court supervision now that he is out of prison. In 2018, Cohen pleaded guilty to tax evasion, campaign finance charges, and lying to Congress.
Cohen admitted that he had “not kept up with emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not.”
“Instead, I understood it to be a super-charged search engine and had repeatedly used it in other contexts to (successfully) find accurate information online,” he added.
In October, Grammy-winning artist Pras Michel, who had been convicted of illegal foreign lobbying, blamed his now-former lawyer for using AI during the trial, saying the lawyer performed poorly and relied on AI for his closing remarks.
But that lawyer, David Kenner, defends the use of AI in criminal trials. Michel’s defense team used a generative AI program from EyeLevel.AI to supplement its legal research.
Kenner acknowledged that his use of generative AI for the closing argument caused him to misattribute lyrics from a Puff Daddy song to the Fugees. “I messed up,” he said.
