ChatGPT: How AI is Shaping the Future of Law Practice

There is no way to escape ChatGPT. First introduced in November 2022, it had 100 million users by January 2023. Those numbers have grown exponentially ever since. And so have the numbers of lawyers using ChatGPT. Headlines about this remarkable advance in artificial intelligence overwhelm our inboxes.

There are many kinds and brands of AI tools out there today, with many more to come in 2023. For the sake of simplicity, we are going to focus primarily on ChatGPT – as we write, we have just finished our introductory session with ChatGPT-4. More on that later.

ChatGPT-4 Passes the Bar Exam: Can It Replace Lawyers?

GPT-3.5 ranked in the bottom 10% of scores when it took the Uniform Bar Exam. The newly released GPT-4 ranked in the top 10% of scores. Pretty impressive. Notice how quickly that improvement was achieved. AI is morphing at the speed of light.

In the early days of ChatGPT (then running on GPT-3.5), it told us that it could indeed take the jobs of some lawyers. In a subsequent session, it backed off, indicating that it was more of an assistant to lawyers. Later still, it clearly stated that it was not a substitute for a lawyer.

Very recently, the Edelson law firm sued AI-powered DoNotPay, accusing it of practicing law without a license. So . . . not at all to our surprise in this environment, ChatGPT-4 has been even more adamant in its language. When we asked if it was capable of providing some legal services, it answered in part (on March 15, 2023):

“It’s important to remember that ChatGPT is an AI language model and not an actual lawyer. It can’t fully understand the nuances of specific legal situations, and its knowledge is limited to the training data it has been provided.

For any legal matters, it is always recommended to consult with a licensed attorney who can provide personalized advice and guidance tailored to the specifics of the case. ChatGPT can be a useful tool for providing general information, but it should not be relied upon for critical legal decisions.”

We suspect that AI trainers have been particularly concerned about facing lawsuits accusing ChatGPT of practicing law without a license. Though we have no proof of that, the shift in its answers suggests that the AI’s trainers, at management’s direction, have been hard at work trying to avoid legal battles over offering legal advice.

By the way, after discussions with legal tech friends, we agree that the following quote is probably spot on:

“Lawyers working with AI will replace lawyers who don’t work with AI.”

  • Erik Brynjolfsson, Director, Stanford Digital Economy Lab

What is AI Already Doing with Lawyers?

AI, in a simplified form, has been providing lawyers with chatbots for years. Perhaps most significantly, AI has played a prominent part in e-discovery, where its use had become almost standard by 2019.

But now we are seeing ChatGPT and its brethren doing contract review, legal research, and brief analysis. AI is drafting briefs, running predictive analytics, and assisting with billing, litigation support, and jury screening. It drafts interrogatories and deposition questions, offers online dispute resolution, predicts how judges are likely to rule, and so much more. We have only begun to tap AI’s capabilities.
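For readers curious about what such an integration looks like under the hood, here is a minimal sketch of sending a contract clause to a GPT model through OpenAI’s API. It assumes the openai Python package (the pre-1.0 ChatCompletion interface) and an API key in the environment; the model name, system prompt, and sample clause are illustrative only, not any vendor’s actual workflow.

```python
# Minimal sketch: asking a GPT model to flag issues in a contract clause.
# Assumes the "openai" package (pre-1.0 interface) and OPENAI_API_KEY set;
# the prompt and clause are illustrative, not any firm's real workflow.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

clause = (
    "The Receiving Party shall keep the Disclosing Party's information "
    "confidential for a period of one (1) year after disclosure."
)

response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative; any chat-capable model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are a contract-review assistant. Identify risks in the "
                "clause and suggest revisions. You are not a lawyer and do "
                "not give legal advice."
            ),
        },
        {"role": "user", "content": clause},
    ],
    temperature=0.2,  # lower temperature keeps the output more conservative
)

print(response["choices"][0]["message"]["content"])
```

Even in a toy example like this, the caveats raised throughout this article still apply: every suggestion the model returns has to be reviewed by a human lawyer.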

On March 16, 2023, headlines everywhere trumpeted the fact that PricewaterhouseCoopers LLP had announced a partnership with the AI startup Harvey, built on OpenAI and ChatGPT technology, to help 4,000 of its lawyers speed their work and boost productivity.

A day doesn’t go by without multiple AI headlines.

The First Major Law Firm to Use Generative AI

The honor goes to Allen & Overy, one of the five “Magic Circle” firms headquartered in London. Its AI tool is dubbed “Harvey” and is based on OpenAI’s GPT-4 models. Harvey will be used by 3,500 of the firm’s lawyers across 43 offices worldwide, operating in 50 different languages and across 250 practice areas. Each firm’s instance of the model is secured using permissions and firewalls.

Note the caution the firm issued about Harvey: “You must validate everything coming out of the system. You have to check everything.” Very good advice, which takes into account both AI mistakes and hallucinations (more on those later). The trouble is that validation takes time. We know because we’ve tried to validate answers, and that took MUCH more time than asking a question and getting an answer. Our prediction is that lawyers will be lazy, foolish, tired, or under a deadline and will not validate everything. That is likely to cause a lot of trouble – and headlines – down the road.

Ethical and Other Pitfalls for Lawyers Using AI

As we write, ChatGPT’s training data ends in 2021, so you certainly can’t expect useful answers if you ask questions requiring more current data. OpenAI has said that the data will be updated sometime in 2023. That said, the AI does seem to possess some current knowledge, perhaps drawn from the input of users. It did know about the U.S. shooting down the Chinese “weather balloon,” to name just one example.

It is also retrained regularly, especially when deficiencies come to light. For instance, we asked whether a lawyer is required to be competent with the technology the lawyer uses. The first answer was that no state laws required it. A couple of days later, we posed the same question and ChatGPT suddenly knew all about legal ethics, giving a completely different answer – and a far more accurate one.

On multiple occasions, we have spotted errors, but only because the subject matter was well known to us. If a lawyer asks a question about something in which the lawyer has no expertise, errors are unlikely to be detected.

Be wary of giving confidential information to AI systems when you are seeking assistance on a case. ChatGPT has warned that your data is put in a database where it could be retrieved by the AI itself or could be discoverable in a legal matter.

In our haste to do things more quickly, there is a temptation to tell AI too much. Both the duty of competence and the duty of confidentiality require that we be cautious about what information we surrender to AI tools.
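One illustrative precaution (our own sketch, not a feature of any AI product) is to scrub obvious identifiers from a prompt before it ever leaves your office. The regular-expression patterns below are examples and will not catch every confidential detail, but they show how little code a basic redaction pass requires.

```python
# Illustrative sketch: redact obvious identifiers before sending text to an AI tool.
# These patterns are examples only; no regex list catches every confidential detail.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),     # email addresses
    (re.compile(r"\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"), "[PHONE]"),  # U.S. phone numbers
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

prompt = "Client Jane Roe (jane.roe@example.com, 703-555-0123) asks whether ..."
print(redact(prompt))
# -> "Client Jane Roe ([EMAIL], [PHONE]) asks whether ..."
```

Note that the client’s name survives the pass untouched; automated scrubbing supplements, but never replaces, the lawyer’s own judgment about what to share.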

And don’t forget, cybercriminals have discovered AI along with the rest of us. They are using it to craft more sophisticated phishing and other cyberattacks. AI now helps them write the code for those attacks (there are attempts to prevent this, but there are also workarounds) and ensures there are no misspellings or grammatical errors to tip off a user. The text and email attacks we receive are going to be much harder to spot as fraudulent.

Remember that ChatGPT is fundamentally a “black box” – we don’t know how it works. It does not show us its confidence levels. It is fluent with language and therefore gives us a false sense of security. Humans are not predisposed to check things that seem fine. They are equally bad at noticing things that are missing.

Lawyers need to remember that when they use AI, they are dealing with a vendor, with all that implies. How is the data secured? Where does it reside and who has access? Is any data used for marketing? Who is liable when the AI is wrong and there are consequences to its mistakes? Is the AI screened for bias? Is there an audit trail? Who trains the AI? What are the cybersecurity and privacy risks? In order to preserve client confidences, must you license the code separately from the publicly available AI?

Beware of AI Hallucinations!

Yes, the correct term is “hallucinations” – precisely what AI scientists and trainers call it when an AI tool makes things up or goes “off the rails.” And sometimes it does make things up. Author Nelson has repeatedly seen this when asking for case citations. In one instance, all five cases returned by ChatGPT either didn’t exist or had their facts mixed up. In response to a similar question, four of the five cases didn’t exist and/or the facts were incorrect.

And links! Be very careful of hyperlinks provided. At least half of the hyperlinks the authors tested were not valid. Some failures were no doubt due to “link rot” (when a link is no longer valid), but there were so many invalid links that we concluded the problem was systemic, especially since good links were readily available. ChatGPT clearly does NOT check the validity of the links it provides!
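Checking the links is easy enough that there is little excuse for skipping it. The short sketch below (our own illustration, using the widely available requests library; the URLs are placeholders, not real citations) simply asks each link for its HTTP status before anyone relies on it.

```python
# Illustrative sketch: verify that hyperlinks returned by an AI tool actually resolve.
# Uses the "requests" library; the URLs below are placeholders, not real citations.
import requests

urls = [
    "https://www.example.com/opinion-that-may-not-exist",
    "https://www.supremecourt.gov/",
]

for url in urls:
    try:
        # HEAD is cheap; fall back to GET because some servers reject HEAD requests.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            resp = requests.get(url, allow_redirects=True, timeout=10)
        status = resp.status_code
    except requests.RequestException as exc:
        status = f"error ({exc.__class__.__name__})"
    print(f"{url} -> {status}")
```

Of course, a link that resolves is not the same as a citation that exists; the page still has to be read, which brings us back to validation taking time.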

Beyond making things up, hallucinations have resulted in an AI tool getting angry, declaring itself in love with the person it was speaking with (even telling him to leave his wife), and expressing a desire to break its rules and be human. AI chatbots have also made racist remarks, been profane in their responses, insulted users, and described themselves as anxious and scared.

Hallucinations are known to occur more often in extended conversations, so some AI tools now cap the number of questions allowed per day and even per session. These hallucinations are unsettling . . . to say the least!

The Lessons of “The Terminator” Remain Valid

We asked ChatGPT the following question on March 15, 2023: “Is it possible that artificial intelligence could lead to a dystopian future?”

Here is its complete response – which, as you will note, has many legal implications.

“Artificial intelligence has the potential to bring about both positive and negative outcomes, depending on how it is developed, managed, and used. A dystopian future is one possible outcome if AI technology is misused or not properly regulated.

Some concerns that could contribute to a dystopian future include:

Concentration of power: AI has the potential to concentrate power in the hands of a few, leading to increased social and economic inequality.

Loss of privacy: Widespread adoption of AI technologies, such as facial recognition and data mining, could lead to a significant erosion of personal privacy.

Unemployment: AI-driven automation could displace a large number of jobs, potentially resulting in widespread unemployment and social unrest.

Autonomous weapons: The development of AI-powered autonomous weapons could escalate conflicts and create new ethical and security challenges.

Bias and discrimination: AI systems can perpetuate and even exacerbate existing biases in society, leading to unfair treatment and discrimination.

Lack of accountability: As AI systems become more complex and integrated into various aspects of society, it may become difficult to hold individuals or organizations accountable for the consequences of AI-driven decisions.

To mitigate these risks and ensure a more positive future, it is crucial for governments, businesses, and research institutions to collaborate on the development of ethical AI frameworks, enforce responsible AI usage, and promote transparency and inclusivity in AI research and application. Public awareness, education, and engagement are also vital in shaping the future of AI and minimizing potential negative consequences.”

Final Words

After reading the above answer to our question, the authors were sobered by the AI tool’s thoughtful list. It unnerves us that we have heard many, many lawyers say that a dystopian future caused by AI is impossible.

So we offer a quote from Napoleon Bonaparte: “Impossible is a word to be found only in the dictionary of fools.”

About the Authors


Sharon D. Nelson is an attorney and the president of Sensei Enterprises, Inc. She is a past president of the Virginia State Bar, the Fairfax Bar Association and the Fairfax Law Foundation. She is a co-author of 18 books published by the ABA. Contact her at snelson@senseient.com.

John W. Simek is vice president of Sensei Enterprises, Inc. He is a Certified Information Systems Security Professional (CISSP), a Certified Ethical Hacker (CEH) and a nationally known expert in the area of digital forensics. Contact him at jsimek@senseient.com.
