ChatGPT, generative AI have Triad attorneys, scholars mulling benefits, risks – The Business Journals

When OpenAI released its generative artificial intelligence chatbot ChatGPT to the public in November 2022, professionals in all sectors were given a transformational technology that held the power to change how they work.
But the technology’s quick and viral release meant that many were unprepared for its power — including lawyers, who have had to decide not only how to use it themselves but also how to advise their clients.
David Levine, a professor at Elon University’s School of Law, studies how new technologies should be implemented and regulated. His colleagues at the forefront of emerging technology were stunned by the way OpenAI released ChatGPT, and scholars like him were stunned by its capabilities.
“All the people who were inside OpenAI and had privileged knowledge were more or less stunned when on Nov. 30, 2022, more or less, Sam Altman (OpenAI’s CEO), unilaterally decided that the world should now have to deal with generative AI,” Levine said. “I use the word ‘deal with’ because we weren’t obviously prepared to deal with it. There was no public discussion. Regulators were caught flat-footed.”
While OpenAI’s ChatGPT was a new product, artificial intelligence is not new to the legal field. It’s been around for decades. Lawyers use discriminative models of artificial intelligence when they use AI-powered platforms like WestLaw and LexisNexis. Discriminative AI excels at classifying data while generative AI creates data. It’s these generative models that are forcing businesses and their advisers to adapt.
In 2023, legal research services long known for their discriminative AI tools launched their own generative products. Thomson Reuters’ Casetext launched the first AI legal assistant, called CoCounsel, and a few months later, LexisNexis launched Lexis+. CoCounsel can review documents, draft questions for a deposition, summarize information and craft emails, among other tasks.
A Thomson Reuters Institute survey of law firm lawyers last spring found that 82% of respondents believed ChatGPT and generative AI can be applied to legal work. A smaller share, 51%, said that generative AI should be applied to their work.
Like lawyers all over the world, local attorneys and law professors are grappling with how to use generative AI in their firms and how to advise clients. They tend to agree that AI is a tool that could produce either positive or negative results, depending on how it is used. While most lawyers are using and experimenting with AI, they are proceeding cautiously.
 “Law as a profession, on the whole, has been traditionally very slow to adopt technology,” said Keith Robinson, a professor at Wake Forest University School of Law. “They want to be the first follower, but they don’t want to be the first adopter. Because the riskiest thing to do is to be the first adopter. Lawyers are traditionally risk averse.”
Risk is an element that Bill Koch, chief knowledge and information officer at the Winston-Salem firm Womble Bond Dickinson, weighs when he evaluates an emerging technology that his firm could implement.
After OpenAI released ChatGPT in November 2022, Womble Bond Dickinson developed a generative AI policy and generative AI governance platform. The firm also hosted an AI bootcamp for its staff. In the summer of 2023, the firm established its Artificial Intelligence and Machine Learning Practice. 
“We knew in some ways that this was a transformative technology, but we also wanted to remain cautious while we experimented with generative AI,” Koch said.
Womble Bond Dickinson is in the process of creating in-house AI solutions using Microsoft Azure and retrieval augmented generation. The firm is also piloting generative AI capabilities for its employees, such as Microsoft’s Copilot chatbot.
Lawyers at Koch’s firm use AI to review deposition transcripts and summarize key points. They can also use it for a “document chat,” which allows lawyers to input documents and then ask questions about the contents of those documents.
Lawyers at Womble Bond Dickinson also use generative AI for legal research, which Koch said can be a taboo subject because of a case in New York in which two lawyers submitted a legal brief citing six fictitious cases that ChatGPT hallucinated.
Womble Bond Dickinson lawyers do not simply open ChatGPT to conduct legal research. They take protective measures by using providers with a narrow dataset. Using a small, curated set of data instead of the wide swath of information from the internet is called “grounding.” Koch says his firm grounds the data in the language of the cases and other primary law.
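For readers curious what grounding looks like under the hood, here is a minimal sketch of the retrieval augmented generation pattern Koch describes: retrieve passages from a small, trusted corpus, then constrain the model’s prompt to those passages. The corpus, the keyword-overlap scoring, and the function names are illustrative placeholders, not any vendor’s actual system.

```python
# Illustrative grounding sketch: answer only from a small, trusted corpus
# of primary-law excerpts rather than the open internet.
# The excerpts below are invented for demonstration.

CASE_EXCERPTS = {
    "Smith v. Jones": "A contract requires offer, acceptance, and consideration.",
    "Doe v. Acme": "Personal data must be safeguarded with reasonable measures.",
}

def retrieve(question, corpus, top_k=1):
    """Rank excerpts by naive keyword overlap with the question.
    Real systems typically use vector embeddings instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question, corpus):
    """Build a prompt that restricts the model to the retrieved sources."""
    sources = retrieve(question, corpus)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    return (
        "Answer using ONLY the excerpts below. If they do not contain "
        "the answer, say you do not know.\n\n"
        + context
        + "\n\nQuestion: " + question
    )

prompt = build_grounded_prompt("What does a contract require?", CASE_EXCERPTS)
```

The key point is the instruction wrapping the retrieved text: the model is told to refuse rather than improvise, which is how grounding reduces the hallucination risk that caused the New York incident.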
To avoid a disaster like the one in New York, firms such as Womble Bond Dickinson and Brooks Pierce in Greensboro require all documents used in court to be reviewed by a human before they are submitted.
Hallucination is not the only concern that attorneys have about AI. The technology raises concerns across all kinds of legal issues, from data privacy to intellectual property to contract law to patents.
Will Quick is a partner at Brooks Pierce who practices data privacy and cybersecurity as well as complex business litigation. He and his firm represent a wide array of clients, and many of them are concerned about intellectual property rights and personal data security.
“I think companies are trying to cope with how to use AI. They really need to think about addressing and putting it in place,” Quick said. “A lot of what we do is putting in place policies around when you can use AI and for what purposes and what sort of permissions you need to get and what sort of assurances you need.”
His media clients are especially concerned about their content being fed into large language models that train AI. Quick is also advising his clients to ensure that any generative models they use are secure on the backend so that their customers’ private information is not released or misused. He said lawyers are taking their own advice and not placing confidential client information into chatbots that could be compromised.
Since many firms require their lawyers to double-check generative AI outputs, lawyers are asking if AI truly improves efficiency.
“We’re still at the relatively early stages of how generative AI can be most efficient,” Koch said. “In some cases, you’re going to be more efficient, and in some cases, you may not be.”
Lawyers commonly bill by the hour, so generative AI tools could result in attorneys spending fewer hours working with a client. There’s the possibility that greater efficiency could mean that attorneys can work more clients into their schedule or take on more tasks from a single client.
For Levine, the Elon law professor, generative AI is poised to be either a tool for good or a weapon for harm. He says that the way in which the technology was rolled out and the ways it is protected by trade secrecy law have damaged the public’s ability to use the technology in a way that allows for the most beneficial results.
“We’re all very much operating on faith that private entities are going to do the right thing for the public and frankly, you know, that’s rather naive to think that that’s how things are going to play out,” Levine said. “So we have a technology that was rolled out in a unilateral way that the people who operate it can’t fully explain.”
Trade secrets are generally defined as having economic value because the information is unattainable to outside parties and provides a competitive edge. While OpenAI’s trade secrets on ChatGPT may give it an advantage, Levine says that it leaves regulators and the public with little information to act upon.
“The best way to deal with the unknown is to have information upon which to start making educated decisions and meaningful analysis, and so long as trade secret law is in place to prevent public assessment of this kind of technology, we’re going to be guessing,” Levine said. “Whether we have to guess is certainly an issue, but we could definitely have more information available.”
As teachers instructing the next generation of lawyers, both Levine and Robinson tell their students to not use artificial intelligence as a crutch that replaces analytical reasoning and critical thinking. Yet, they still encourage students to learn the technology since the firms they will be working at, like Brooks Pierce and Womble Bond Dickinson, are using it every day. Robinson even advises his students to take prompt engineering courses.
“If you’re also able to understand how to use AI and how to prompt generative AI to make yourself more efficient, I think that’s going to make you more valuable to your particular industry or company,” Robinson said.
Christa Dutton, a former Triad Business Journal intern, is a freelance contributor.