ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness

Steven A. Schwartz and Peter LoDuca must pay a fine and send letters to judges named in a brief filled with fiction, a judge ordered.

A Manhattan judge on Thursday imposed a $5,000 fine on two lawyers who gave him a legal brief full of made-up cases and citations, all generated by the artificial intelligence program ChatGPT.
The judge, P. Kevin Castel of Federal District Court, criticized the lawyers harshly and ordered them to send a copy of his opinion to each of the real-life judges whose names appeared in the fictitious filing.
But Judge Castel wrote that he would not require the lawyers, Steven A. Schwartz and Peter LoDuca, whom he referred to as respondents, to apologize to those judges, “because a compelled apology is not a sincere apology.”
“Any decision to apologize is left to respondents,” the judge added.
The discovery that ChatGPT had helped create the brief in an otherwise unremarkable lawsuit reverberated throughout the legal profession. The revelation also riveted the tech community, which has been debating the dangers of overreliance on artificial intelligence — even as an existential threat to humanity.
In the case involving Mr. Schwartz and Mr. LoDuca, Judge Castel made it clear they had violated a basic precept of the American legal system.
“Many harms flow from the submission of fake opinions,” the judge wrote. “The opposing party wastes time and money in exposing the deception. The court’s time is taken from other important endeavors.”
The lawyers’ action, he added, “promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”
Thursday’s ruling followed a June 8 hearing at which Judge Castel grilled Mr. Schwartz and Mr. LoDuca about how they came to file the brief. In the suit, their client, Roberto Mata, sought to hold the airline Avianca responsible for an injury he says he sustained when a metal serving cart hit his knee during an August 2019 flight from El Salvador to New York.
After Avianca asked to dismiss the suit because the statute of limitations had expired, Mr. Schwartz prepared a 10-page brief citing more than a half-dozen court decisions with names like Martinez v. Delta Air Lines, Varghese v. China Southern Airlines and Zicherman v. Korean Air Lines, to argue that the litigation should be allowed to proceed.
Because Mr. Schwartz was not admitted to practice in federal court in Manhattan, his partner, Mr. LoDuca, became the lawyer of record and signed the brief, which was filed on March 1.
Two weeks later, Avianca’s lawyers, Bart Banino and Marissa Lefland, replied that they were “unable to locate most of the case law” cited in the brief.
When the judge then gave Mr. LoDuca a week to produce the cases mentioned, Mr. LoDuca responded that he was on vacation, and asked for another week. The judge agreed.
At the hearing on June 8, Mr. LoDuca admitted that he had not been on vacation but said that, because Mr. Schwartz was away, he had wanted to give his colleague more time.
“The lie had the intended effect of concealing Mr. Schwartz’s role,” Judge Castel wrote.
In his opinion, Judge Castel examined the supposed decisions, demonstrating how they were clearly fabricated, and said the lawyers had acted in bad faith by submitting them.
The purported Varghese opinion was said to have been issued by a three-judge panel of the U.S. Court of Appeals for the 11th Circuit. But, Judge Castel noted, one of the judges the opinion listed actually sat on the federal appeals court for the 5th Circuit.
Beyond that, “the ‘Varghese’ decision shows stylistic and reasoning flaws that do not generally appear in decisions issued by United States Courts of Appeals,” the judge said.
“Its legal analysis is gibberish,” he wrote, adding, “The summary of the case’s procedural history is difficult to follow and borders on nonsensical.”
He said Mr. Mata’s lawyers had abandoned their responsibilities, “then continued to stand by the fake opinions after judicial orders called their existence into question.”
Had the matter ended with the lawyers “coming clean” earlier, the judge continued, “the record now would look quite different.”
The judge noted that the lawyers’ firm, Levidow, Levidow & Oberman, had arranged for outside lawyers to conduct a mandatory training program on technological competence and artificial intelligence programs. And he credited the lawyers’ descriptions of their embarrassment and remorse in the face of widespread publicity about their actions.
The Levidow firm said in a statement that it had reviewed the judge’s order and “fully intend to comply with it.” But the firm disagreed with Judge Castel’s finding that anyone at the firm acted in bad faith.
“In the face of what even the court acknowledged was an unprecedented situation, we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” the firm said.
The firm said it was considering its options and had made no decision as to whether to appeal.
Ronald Minkoff, a lawyer for the firm and for Mr. Schwartz, declined to comment on Mr. Schwartz’s behalf. A lawyer for Mr. LoDuca did not respond to a request for comment.
In a separate ruling on Thursday, Judge Castel dismissed Mr. Mata’s lawsuit against Avianca on the statute-of-limitations grounds the airline had argued.
Mr. Banino, a lawyer for the airline, said, “Putting aside the lawyer’s use of ChatGPT and submission of fake cases, the court reached the right conclusion in dismissing the underlying case.”
In his ruling, the judge did not refer the lawyers for potential disciplinary action, but the disciplinary authorities could start their own investigation. Such inquiries can lead to a private reprimand or to public sanctions like suspension or disbarment.
Stephen Gillers, a legal ethics professor at New York University School of Law, said he believed the worldwide publicity about the case helped Mr. Schwartz and Mr. LoDuca avoid a worse fate.
“The lawyers will now and forever be known as ‘the lawyers who got fooled by ChatGPT,’ which Castel says is also a sanction,” Professor Gillers said. “The case is the first, but not likely the last, warning to the bar not to get seduced by the siren call of generative A.I.”
Because of an editing error, an earlier version of the capsule summary with this article misstated the given name of a lawyer involved in the case. As the article correctly notes, he is Peter LoDuca, not Paul.
Benjamin Weiser is a reporter covering the Manhattan federal courts. He has long covered criminal justice, both as a beat and investigative reporter. Before joining The Times in 1997, he worked at The Washington Post. @BenWeiserNYT
