ChatGPT has wrecked legal cases, but can lawyers use AI ethically?


State government and disinformation reporter
One man hired a law firm to sue an airline after a metal concessions cart crashed into him on an airplane.
Another man sought legal help to fight a huge payment he was ordered to make for defaulting on a car loan.
Their lawyers thought there was a quick and easy way to research similar cases to form their legal motions: Ask ChatGPT.
That turned out to be a costly mistake when the artificial intelligence tool churned out a slew of cases for each lawyer that never actually happened.
What followed was an object lesson in using AI chatbots without understanding how they work, and while skipping precautions built into legal codes of ethics. Those cases happened in New York and Colorado, but the State Bar of Wisconsin is concerned enough about the potential misuse of artificial intelligence that it has begun training lawyers in the state on the pitfalls to avoid.
The State Bar isn’t aware of any Wisconsin attorneys misusing the chatbots to research and write their court motions, according to Aviva Kaiser, ethics counsel for the association, which has over 25,000 members. But, Kaiser said, it’s important to teach people in the legal field how to ethically use emerging technology like ChatGPT and the risks of improper use.
“Unfortunately, we are at that stage where we’re not really sure about all the risks and how to deal with them,” Kaiser said. “We need to exercise great caution.”
Meanwhile, the Wisconsin State Public Defender’s Office is looking for ways that criminal defense lawyers can ethically use ChatGPT and other AI to more quickly work through massive caseloads that have clogged the courts and left defendants and victims waiting years for rulings.
“There are, of course, workload issues just systemically. And we’re always looking for ways to increase efficiency and give attorneys all the tools they need to provide zealous representation,” said Adam Plotkin, a spokesperson for the state Public Defender’s Office. “But I think the downside is efficiency doesn’t always equal effectiveness.”
Such is the legal profession’s own dilemma with the rapidly developing technology that has raised concerns across various fields, from cheating in schools, to plagiarizing in work products, to perpetuating disinformation. ChatGPT and similar tools can almost instantly accomplish work that would take humans days to finish, but the AI is built to assemble information in a readable format — not to prioritize accuracy.
That’s what the lawyers in Colorado and New York didn’t appear to understand.
Wisconsin lawyers are looking for ways to ethically use ChatGPT and other AI tools.
In both cases gaining national attention, the attorneys separately asked the AI bot to gather and compile other cases similar to their own — and in both instances, ChatGPT fabricated legal cases that the lawyers then cited in their arguments. The discovery of the fake citations resulted in tossed legal work, sanctions and fines for the attorneys. 
In one of the cases — the lawsuit involving a client suing an airline, claiming injury on a flight — New York attorney Steven Schwartz used the popular artificial intelligence tool powered by OpenAI to gather and cite personal injury suits similar to his own. Several of the cases that ChatGPT gathered and cited in the final filing were made up.
Schwartz told the court in a June 8 filing that he “did not understand it was not a search engine, but a generative language-processing tool.”
New York-based federal Judge P. Kevin Castel sanctioned the Schwartz legal team and imposed a $5,000 fine. Schwartz had submitted an affidavit admitting he used ChatGPT to research and craft the filing on behalf of his client, but he maintained he had no intention of deceiving the court and swore he didn't know the cases were made up.
In arguing against potential sanctions, Schwartz's legal team noted that Schwartz and his firm had "already become the poster children for the perils of dabbling with new technology; their lesson has been learned."
Castel issued the sanctions June 22. The court ultimately ruled against Schwartz's client in July.
Also this spring, Colorado Springs attorney Zachariah Crabill was arguing his first civil litigation case in defense of a client accused of breaching a car payment contract when he used ChatGPT to gather cases to support his argument. It was later discovered that the AI tool had made up many of these cases when the citations couldn’t be found or verified by legal database LexisNexis.
As of mid-June, the firm Baker Law Group no longer listed Crabill on its staff page. Crabill officially withdrew from the car payment contract case on June 9.
Aviva Kaiser, ethics counsel for the State Bar of Wisconsin, says the association considers its pre-existing rules of conduct sufficient to govern how state-licensed attorneys should use ChatGPT in legal work.
In the eyes of some legal practitioners, the generative AI tool can be useful to complete preliminary research so long as the work is double-checked by humans. When it comes to crafting actual legal language, however, experts say the risk of inaccuracies is too great.
AI may have been wrongly used in the New York and Colorado cases, but ChatGPT responded exactly as it was designed to operate, said BJ Ard, a technology and law professor at the University of Wisconsin Law School.
“Due to the way that these systems are built, if you ask them a question that is leading in the right way, they will come up with an answer that satisfies what you’re looking for, even though it might mean making something up,” Ard said. “Something that’s very plausible, something that’s in the right format, the right template, the right verbiage for what you’re looking for.”
Some mistakes will happen simply because people don’t understand the tool, Ard said, leading to “the unintentional obtaining and believing (of) false information, because of the way you’ve used or trusted this system.”
One of the most significant risks is that incorrect information produced by ChatGPT can, in fact, appear accurate and valid simply by the way it’s written.
“A problem with ChatGPT, as I see it, is that it does beautiful-looking work. It’s organized, paragraphed, sectioned. So it gives you a sense of quality by its appearance, that the content may not match,” said Walter Zimmerman, a Waukesha County technology and intellectual property attorney.
Zimmerman’s experience in law spans decades, including work as a prosecutor, litigator and professor at Marquette Law School. He now runs his own private firm where he works as a legal consultant in the areas of technology, copyright and intellectual property.
The tool's ability to fabricate detailed court case citations is part of the ongoing risk analysis that Zimmerman calls "mind-boggling."
The State Bar’s Kaiser shares many of the same concerns.
“When we see detailed information, we often think that that means it is accurate,” she said. “But that’s not the case here.”
BJ Ard, a technology and law professor at the University of Wisconsin Law School, is pictured at the University of Wisconsin-Madison.
Legal experts have also raised concerns about how generative AI platforms store data. These tools operate as highly interactive chatbots: a user types a question or prompt, and the AI generates information in response. If an attorney were to input sensitive information from a legal case into ChatGPT in an effort to gather information or draft an argument, they could very well violate attorney-client confidentiality.
For Kaiser, privacy concerns are one of the most critical risks related to using generative AI models in legal work.
“We don’t know how that data is collected, we don’t know how it’s used, or who has access to it,” Kaiser said. “So we should not be entering any of that (confidential) information into those databases.”
OpenAI, the company that owns and operates ChatGPT, has come under fire for its unclear data collection and storage policies.
In March, the Italian government issued an emergency order barring ChatGPT from using personal data gathered from millions of that nation’s residents to incorporate into its chatbot training models. The next month, OpenAI issued guidance to ChatGPT users identifying ways individuals can select what input data can be saved in “chat history,” often later used by the bot to adjust and train future functions.
ChatGPT’s policy of saving “chat history” and a user’s input data validates Kaiser’s concern, she said.
“That’s one major confidentiality and privacy concern,” Kaiser said.
With online tools like ChatGPT rapidly evolving, and expanded uses discovered and explored on a regular basis, the legal industry has struggled to craft consistent guidelines governing how generative artificial intelligence can be used in law.
That doesn’t mean there are no rules dictating how an attorney in Wisconsin can and should use AI in their legal operations.
Kaiser points to the State Bar’s Rules of Professional Conduct.
“As with any tool we use, those rules apply to those tools,” Kaiser said.
For example, one rule prohibits the sharing of confidential client information and requires lawyers to adequately safeguard information pertaining to the representation of a client. Without assurances that data typed into ChatGPT isn't shared elsewhere in the system, placing any case information into the AI model would automatically violate that rule.
Another potential violation pertains to the rule governing non-lawyer assistance.
The Cap Times asked ChatGPT to write about “the legal implications of artificial intelligence in courts.” 
Here’s what ChatGPT wrote in response, edited by a Cap Times human for brevity:
“Artificial intelligence (AI) has the potential to significantly impact court cases and the legal system as a whole. Its use raises several legal implications that require careful consideration:
“In conclusion, while AI has the potential to improve efficiency and accuracy in court cases, its integration into the legal system requires careful consideration of these legal implications to maintain fairness, justice, and the protection of individual rights.”
“It used to be ‘assistants,’ meaning people, but now it’s ‘assistance,’ and so that encompasses tools like AI,” Kaiser said. “If we don’t know how it’s used, and yet we’re required to have this direct supervisory authority over it, how do we manage that? And that’s the unclear part of it right now.”
Along with confidentiality and non-lawyer assistance, a general rule of conduct for members of the bar revolves around the concept of competence.
“We’re required to be competent. That rule requires that we keep abreast of changes in the law, including benefits and risks associated with relevant technology,” Kaiser said. “Unfortunately, we are at that stage where we’re not really sure about all the risks and how to deal with them. So again, we need to exercise great caution in that regard.”
Kaiser also raised concerns over discrimination and bias that might be built into the language model, as well as a risk of plagiarizing others’ work within the information produced by the bot.
While the State Bar of Wisconsin points to pre-existing rules of conduct as guidance for artificial intelligence use, courts elsewhere in the country have issued strict guidelines specifically related to ChatGPT and other generative AI.
In response to national attention on Schwartz’s New York case, a federal judge in the Northern District of Texas issued a standing order in May stating that anyone appearing before court must attest “either that no portion of any filing will be drafted by generative artificial intelligence … or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.”
Walter Zimmerman, a Waukesha-based technology attorney, cautions that AI-generated content is designed to look polished and readable, but its accuracy may not match its appearance.
While alarm is high, the question remains: Are there ways attorneys and other legal workers can use AI tools responsibly?
According to Zimmerman, the Waukesha County attorney, using AI in legal work presents potential benefits, in addition to the risks.
If an attorney wants to use ChatGPT for initial research, that might be OK, he said. But once the drafting process begins, it’s best to steer clear or thoroughly check the final product before moving forward. In the New York and Colorado cases, that last step was missed.
“Humans actually filed that brief, certified and signed it. What they didn’t do was check the citations,” Zimmerman said.
Some legal research databases are already incorporating AI components, according to Ard from the UW Law School.
“But I imagine that those are helping you at the research stage, and not at the drafting stage,” he said.
Drafting of simpler documents such as wills, leases, contracts and settlement agreements, for example, could be accomplished through an AI-generated template based on previously completed documents, Ard said.
“I could imagine a tool where a law firm or a practitioner has employed some sort of AI system that can incorporate and analyze the documents they’ve produced in the past,” he said.
An attorney could then examine what any of these simpler legal documents tend to look like when originally drafted and potentially automate the creation of major portions of those documents using an AI-built template. 
“You can still then come in and adjust and tweak and clean it up in various ways,” Ard said. “But this could be work-saving. And this could be something that is actually still based on your own ideas and work product, as opposed to all of the unknowns of what may be going on with some of these public systems.”
For Plotkin of the Public Defender’s Office, the potential benefits and potential risks are about equal.
It’s no secret that Wisconsin’s public defenders — the lawyers who represent people who can’t afford their own attorneys in criminal cases — face a remarkable workload and stark understaffing. Counties across the state also are short on prosecutors, adding to a backlog of cases that has been described as a constitutional crisis in Wisconsin’s legal system.
The tool can help attorneys work through their cases more quickly, Plotkin said, but caution should be high.
“It’s very important that attorneys take the time, if they use that tool, to read through and make sure that all of the arguments track and are logical and are persuasive and apply to that immediate client’s case,” Plotkin said.
Potential uses for the technology are still in discussion at the Public Defender’s Office.
“We’re trying to figure out all the ways that it could have an impact on our work,” Plotkin said. “I don’t want to discount the possibility that it could create the material that would ultimately end up as part of the official court record. It’s just a question of how much autonomy you give to the AI. And how much are you making sure to review that work product in detail for yourself before it’s submitted to the court?”
In March, the state bar held a webinar for members titled “ChatGPT Artificial Intelligence – Overview and Implications.”
These types of trainings are the best way to educate practicing attorneys about safe and ethical uses of AI without issuing topic-specific guidance, Kaiser said.
The Wisconsin State Public Defender’s Office plans to explore specific guidance to better inform its staff on how and how not to use the technology.
“We’ve had very preliminary conversations about what the role of AI in a legal setting would be,” Plotkin said. “It is something that’s on our radar and something we’re going to start digging into this fall.”
As the bar continues its efforts to educate attorneys about responsible uses, individuals can also take steps to understand the dangers and benefits of generative AI tools.
“When it comes to protecting yourself from becoming misinformed, or from winding up in the situation that the attorneys who filed the brief with the false cases ended up in, you can avoid those by learning and understanding the limits of these things and what you’re getting into,” Ard said. “Especially if we’re talking about any kind of professional, whether we’re talking about attorneys, whether we’re talking about any other kind of researcher, none of them should be taking what these systems say just at face value.”
Editor’s note: Updated to correct the title and description of The State Bar of Wisconsin.
Erin McGroarty joined the Cap Times in May 2023 and covers politics and state government, while also investigating disinformation. Originally from Alaska, Erin brings nearly four years of experience covering state politics from the farthest north capitol in the country.
You can follow her on Twitter @elmcgroarty