Lawyer Who Used ChatGPT Faces Penalty for Made-Up Citations

In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chatbot could lead him astray.
Benjamin Weiser
As the court hearing in Manhattan began, the lawyer, Steven A. Schwartz, appeared nervously upbeat, grinning while talking with his legal team. Nearly two hours later, Mr. Schwartz sat slumped, his shoulders drooping and his head rising barely above the back of his chair.
For nearly two hours Thursday, Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations, all generated by ChatGPT. The judge, P. Kevin Castel, said he would now consider whether to impose sanctions on Mr. Schwartz and his partner, Peter LoDuca, whose name was on the brief.
At times during the hearing, Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him.
“God, I wish I did that, and I didn’t do it,” Mr. Schwartz said, adding that he felt embarrassed, humiliated and deeply remorseful.
“I did not comprehend that ChatGPT could fabricate cases,” he told Judge Castel.
In contrast to Mr. Schwartz’s contrite posture, Judge Castel gesticulated often in exasperation, his voice rising as he asked pointed questions. Repeatedly, the judge lifted both arms in the air, palms up, while asking Mr. Schwartz why he did not better check his work.
As Mr. Schwartz answered the judge’s questions, the reaction in the courtroom, crammed with close to 70 people who included lawyers, law students, law clerks and professors, rippled across the benches. There were gasps, giggles and sighs. Spectators grimaced, darted their eyes around, chewed on pens.
“I continued to be duped by ChatGPT. It’s embarrassing,” Mr. Schwartz said.
An onlooker let out a soft, descending whistle.
The episode, which arose in an otherwise obscure lawsuit, has riveted the tech world, where there has been a growing debate about the dangers — even an existential threat to humanity — posed by artificial intelligence. It has also transfixed lawyers and judges.
“This case has reverberated throughout the entire legal profession,” said David Lat, a legal commentator. “It is a little bit like looking at a car wreck.”
The case involved a man named Roberto Mata, who had sued the airline Avianca, claiming he was injured when a metal serving cart struck his knee during an August 2019 flight from El Salvador to New York.
Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired. Mr. Mata’s lawyers responded with a 10-page brief citing more than half a dozen court decisions, with names like Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, in support of their argument that the suit should be allowed to proceed.
After Avianca’s lawyers could not locate the cases, Judge Castel ordered Mr. Mata’s lawyers to provide copies. They submitted a compendium of decisions.
It turned out the cases were not real.
Mr. Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally.
He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases.
“I heard about this new site, which I falsely assumed was, like, a super search engine,” Mr. Schwartz said.
Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet.
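The mechanism can be demonstrated at toy scale. The following Python sketch is an illustration only, not ChatGPT’s actual system (which relies on a neural network trained on billions of examples): a simple bigram model that predicts each next word purely from counts of which words followed it in a small training text. All names here (training_text, follows, generate) are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for a language model's training data.
training_text = (
    "the court granted the motion the court denied the motion "
    "the lawyer filed the brief the lawyer cited the case"
)

# For each word, count how often every other word follows it.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a plausible-looking continuation, one word at a time."""
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        # Pick the next word in proportion to how often it followed
        # the previous word in the training text.
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("the"))
```

The output reads fluently because each word is statistically likely to follow the one before it, but the model has no concept of truth, which is how such a system can produce convincing-sounding but nonexistent case citations.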
Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critics of such models have been saying, “which is that the vast majority of people who are playing with them and using them don’t really understand what they are and how they work, and in particular what their limitations are.”
Rebecca Roiphe, a New York Law School professor who studies the legal profession, said the imbroglio has fueled a discussion about how chatbots can be incorporated responsibly into the practice of law.
“This case has changed the urgency of it,” Professor Roiphe said. “There’s a sense that this is not something that we can mull over in an academic way. It’s something that has affected us right now and has to be addressed.”
The worldwide publicity spawned by the episode should serve as a warning, said Stephen Gillers, who teaches ethics at New York University School of Law. “Paradoxically, this event has an unintended silver lining in the form of deterrence,” he said.
There was no silver lining in courtroom 11-D on Thursday. At one point, Judge Castel questioned Mr. Schwartz about one of the fake opinions, reading a few lines aloud.
“Can we agree that’s legal gibberish?” Judge Castel said.
After Avianca had the case moved into the federal court, where Mr. Schwartz is not admitted to practice, Mr. LoDuca, his partner at Levidow, Levidow & Oberman, became the attorney of record.
In an affidavit last month, Mr. LoDuca told Judge Castel that he had no role in conducting the research. Judge Castel questioned Mr. LoDuca on Thursday about a document filed under his name asking that the lawsuit not be dismissed.
“Did you read any of the cases cited?” Judge Castel asked.
“No,” Mr. LoDuca replied.
“Did you do anything to ensure that those cases existed?”
No again.
Lawyers for Mr. Schwartz and Mr. LoDuca asked the judge not to punish their clients, saying the lawyers had taken responsibility and there was no intentional misconduct.
In the declaration Mr. Schwartz filed this week, he described how he had posed questions to ChatGPT, and each time it seemed to help with genuine case citations. He attached a printout of his colloquy with the bot, which shows it tossing out words like “sure” and “certainly!”
After one response, ChatGPT said cheerily, “I hope that helps!”
Benjamin Weiser is a reporter covering the Manhattan federal courts. He has long covered criminal justice, both as a beat and investigative reporter. Before joining The Times in 1997, he worked at The Washington Post. @BenWeiserNYT