How Artificial Intelligence Is Making Its Way Into the Legal System

This is The Marshall Project’s Closing Argument newsletter, a weekly deep dive into a key criminal justice issue.
As criminal justice journalists, my colleagues and I read a fair amount of legal filings.
Historically, if I came across a citation in a filing — say, “Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014)” — I could be reasonably sure the case existed, even if, perhaps, the filing misstated its significance.
Artificial intelligence is making that less certain. The example above is a fake case invented by the AI chatbot ChatGPT. But the citation was included in a real medical malpractice suit against a New York doctor, and last week, the Second Circuit Court of Appeals upheld sanctions against Jae S. Lee, the lawyer who filed the suit.
These kinds of “hallucinations” are not uncommon for large language model AI, which composes text by calculating which word is likely to come next, based on the text it has seen before. Lee isn’t the first lawyer to get in trouble for including such a hallucination in a court filing. Others in Colorado and New York — including one-time Donald Trump attorney Michael Cohen — have also been burned, presumably by not checking the AI’s work. In response, the Fifth Circuit Court of Appeals proposed new rules last year that would require litigants to certify that any AI-generated text was reviewed for accuracy. Professional law organizations have issued similar guidance.
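The mechanism behind those hallucinations is simple to sketch. Below is a minimal, purely illustrative Python toy: a hand-built table of word counts stands in for the trained neural network, and the example words are invented for illustration. Real systems like ChatGPT operate on subword tokens with billions of learned parameters, but the basic move is the same: each word is chosen because it tends to follow the previous one, not because anyone checked that the result is true.

```python
import random

# Toy "language model": for each word, the words observed to follow it in
# some training text, with counts. (Hypothetical data for illustration --
# real LLMs learn these tendencies with a neural network, not a lookup table.)
bigram_counts = {
    "the":   {"court": 5, "lawyer": 3, "case": 2},
    "court": {"held": 4, "found": 3, "ruled": 3},
    "held":  {"that": 9, "the": 1},
}

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = bigram_counts.get(word)
    if not candidates:
        return "."  # no data for this word: just end the sentence
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate text one word at a time, each chosen from what "usually comes next."
text = ["the"]
for _ in range(4):
    text.append(next_word(text[-1]))
print(" ".join(text))  # e.g. "the court held that ..."
```

Nothing in this loop consults a case database or any source of facts, which is why output generated this way can be perfectly fluent — down to a plausible-looking citation format — while describing a case that doesn’t exist.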
There’s no evidence that a majority of lawyers are using AI in this manner, but most will likely be using it in some form soon. The American Lawyer, a legal trade magazine, recently asked 100 large law firms whether they were using generative AI in their day-to-day business, and 41 firms replied yes — most commonly for summarizing documents, creating transcripts and performing legal research. Proponents argue that the productivity gains will mean clients get more services for less time and money.
Similarly, some see the rise of AI lawyering as a potential boon for access to justice, and imagine a world where the technology can help public interest lawyers serve more clients. As we examined in a previous Closing Argument, access to lawyers in the U.S. is often scarce. About 80% of criminal defendants can’t afford to hire a lawyer, by some estimates, and 92% of the civil legal problems that low-income Americans face go completely or mostly unaddressed, according to a study by the Legal Services Corporation.
The California Innocence Project, a law clinic at the California Western School of Law that works to overturn wrongful convictions, is using an AI legal assistant called CoCounsel to identify patterns in documents, such as inconsistencies in witness statements. “We are spending a lot of our resources and time trying to figure out which cases deserve investigation,” former managing attorney Michael Semanchik told the American Bar Association Journal. “If AI can just tell me which ones to focus on, we can focus on the investigation and litigation of getting people out of prison.”
But the new technology also presents myriad opportunities for things to go wrong, beyond embarrassing lawyers who try to pass off AI-generated work as their own. One major issue is confidentiality. What happens when a client provides information to a lawyer’s chatbot, instead of the lawyer? Is that information still protected by attorney-client privilege? What happens if a lawyer enters a client’s personal information into an AI tool that is simultaneously training itself on that information? Could the right prompt by an opposing lawyer using the same tool serve to hand that information over?
These questions are largely theoretical now, and the answers may have to play out in courts as the technology becomes more common. Another ever-present concern with all AI — not just in law — is that bias baked into the data used to train AI will express itself in the text that large language models produce.
While some lawyers are looking to AI to assist their practices, there are also tech entrepreneurs looking to replace attorneys in certain settings. In the most well-known case, the legal service DoNotPay briefly flirted with the idea of its AI robot lawyer arguing a case in a live courtroom (by feeding lines to a human wearing an earbud) before backing out over alleged legal threats.
DoNotPay started in 2015, offering clients legal templates to fight parking tickets and file simple civil suits, and still mostly offers services in this realm, rather than the showy specter of robot lawyers arguing in court. But even the automation of these seemingly humdrum aspects of law could have dramatic consequences for the legal system.
Writing for Wired magazine last summer, Keith Porcaro concluded that AI lawyers could wind up democratizing law and making legal services available to people who otherwise wouldn’t have access, while simultaneously helping powerful people to “use the legal system as a cudgel.”
He notes that if AI makes it easier for debt collectors to seek wage garnishments and file evictions, it could unleash a wave of default judgments against poor people who fail to show up in court. And even if, as a counterbalance, AI becomes a tool to help ordinary people defend themselves from predatory cases, the resulting torrent of legal disputes could grind the current court system to a halt. “Nearly every application of large language models in courts becomes a volume problem that courts aren’t equipped to handle,” Porcaro writes.
Then again, maybe not. While it’s still far off, the American Bar Association has wondered whether AI, in this brave new legal world, might best serve in the role of judge, rendering an “impartial, ‘quick-and-dirty’ resolution for those who simply need to move on, and move on quickly.”
Jamiles Lartey is a New Orleans-based staff writer for The Marshall Project. Previously, he worked as a reporter for the Guardian covering issues of criminal justice, race and policing. Jamiles was a member of the team behind the award-winning online database “The Counted,” tracking police violence in 2015 and 2016. In 2016, he was named “Michael J. Feeney Emerging Journalist of the Year” by the National Association of Black Journalists.
