How to Manage AI Without Becoming a ChatGPT Lawyer

The ChatGPT lawyer.  A sobriquet unimaginable until it appeared everywhere in the news this spring in connection with a run-of-the-mill personal injury case, Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y.).  Plaintiff Roberto Mata sued the airline Avianca for an injury he suffered when a serving cart struck his knee during a 2019 flight.  Avianca moved to dismiss the case based on the statute of limitations.  Plaintiff’s counsel prepared a 10-page brief, citing cases like Martinez v. Delta Air Lines and Varghese v. China Southern Airlines.  When defense counsel could not find these cases anywhere, plaintiff’s counsel admitted to Judge P. Kevin Castel that ChatGPT had been used to draft the brief.  What is ChatGPT?

ChatGPT is an artificial intelligence (AI) generator of written material.  The name stands for Chat Generative Pre-trained Transformer; it is a computer program developed by OpenAI and launched as a language chatbot on November 30, 2022.  Its core function is to mimic a human conversation, but its output can be used as a written product like a story, an article – or, in the Mata case – a legal brief.  In this way, ChatGPT is a creator of content, as opposed to the Westlaw and LexisNexis search engines, which retrieve existing caselaw, statutes, and other legal material.
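For the technically curious, the kind of text generation ChatGPT performs is also available programmatically.  The following is a minimal sketch, assuming OpenAI’s Python library (version 1 or later) and an API key in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative only:

```python
# Minimal sketch of programmatic text generation with OpenAI's chat API.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the
# environment. The model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a legal writing assistant."},
        {"role": "user", "content": "Summarize the statute of limitations "
                                    "defense to a personal injury claim."},
    ],
)

print(response.choices[0].message.content)
```

The output reads fluently whether or not it is accurate – which is precisely the problem the Mata case exposed.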

The Mata brief contained fabricated citations to nonexistent cases and included wholly fictional quotes.  The court was very unhappy when counsel admitted to having used ChatGPT.  Judge Castel issued a 34-page sanctions order requiring plaintiff’s counsel to:

  • disclose the order to the client;
  • disclose the order to each judge falsely identified as the author of a fake case; and
  • pay a $5,000 fine.

In news articles from the New York Times to Lawyers’ Weekly, plaintiff’s counsel became known as the “ChatGPT Lawyer,” and not in a good way.

So what steps should a lawyer take to avoid becoming the next “ChatGPT Lawyer”?  The obvious answer would be not to use ChatGPT or any other AI program to write briefs at all.  But for a lawyer with too many cases and not enough resources, the ability to use AI to produce routine motions and briefs efficiently may be a godsend.

How to Avoid Being the Next Sanctioned ChatGPT Lawyer

Ultimately, though an attorney may be using AI to draft the brief or memorandum, the attorney is still responsible for its content – its legal and factual content – as if the lawyer had written it from scratch.  Here are three steps to follow to ensure the ethical and proper use of AI.

  1. Choose the AI Carefully

There are two critical issues to consider when choosing an AI engine.  The first is the program’s source: every AI tool must draw from some body of information about the state of the law – caselaw, statutes, regulations.  As the Mata case illustrates, it is entirely unclear where ChatGPT drew its information from, other than its own “imagination.”  Some companies have purportedly designed AI programs for attorneys that use legal databases as their source material.

The second is what happens to the client information provided to the AI program in the prompts used to draft a brief.  An attorney may be disclosing confidential client information to the AI tool and must know whether the tool retains the prompts and the information provided.  It appears that ChatGPT, for example, retains all the information provided to it.  Depending on how this information is used, such disclosure of a client’s confidential information could violate Rule 1.6 of the ABA Model Rules of Professional Conduct, particularly as interpreted in ABA Formal Opinion 480.  Rule 1.6 provides that:

(a)  A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by paragraph (b).

(b)  . . . (exceptions that do not apply)

(c)  A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.

“I did not know that the AI would keep the client information and incorporate it into unrelated briefs,” is not an excuse that a court – particularly if it is already skeptical of the use of AI – would appreciate.
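One practical safeguard – sketched below purely as an illustration, with hypothetical names and patterns – is to scrub obvious client identifiers from a prompt before it ever leaves the office.  A crude redaction pass like this supplements, rather than replaces, an attorney’s judgment about what may be disclosed:

```python
import re

# Hypothetical illustration: crudely scrub matter-specific names from a
# prompt before sending it to a third-party AI tool. Confidential
# information is much harder to detect than this, so a redaction pass
# supplements, not replaces, human review of every prompt.
CLIENT_TERMS = ["Roberto Mata", "Avianca"]  # names specific to this matter

def scrub_prompt(prompt: str, client_terms: list[str]) -> str:
    """Replace matter-specific names with neutral placeholders."""
    for i, term in enumerate(client_terms, start=1):
        prompt = re.sub(re.escape(term), f"[PARTY {i}]", prompt,
                        flags=re.IGNORECASE)
    return prompt

request = "Draft a motion arguing Roberto Mata's claim against Avianca is time-barred."
print(scrub_prompt(request, CLIENT_TERMS))
# Draft a motion arguing [PARTY 1]'s claim against [PARTY 2] is time-barred.
```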

  2. Check the Citations and the Law

In any event, an attorney remains responsible for the accuracy of the citations and the statements of law.  Rule 11 of the Federal Rules of Civil Procedure explicitly places the responsibility for the accuracy of the law and facts with the attorney signing the pleading or brief:

Rule 11. Signing Pleadings, Motions, and Other Papers; Representations to the Court; Sanctions

(a) Signature. Every pleading, written motion, and other paper must be signed by at least one attorney of record in the attorney’s name—or by a party personally if the party is unrepresented. . . .

(b) Representations to the Court. By presenting to the court a pleading, written motion, or other paper—whether by signing, filing, submitting, or later advocating it—an attorney or unrepresented party certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances:

. . .

(2) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;

(3) the factual contentions have evidentiary support or, if specifically so identified, will likely have evidentiary support after a reasonable opportunity for further investigation or discovery; and

. . .

Accordingly, an attorney must carefully check an AI program’s output, including all sources and citations.  She must look up cases, statutes, and any quoted language, and confirm that they are being used correctly and accurately, as if the attorney had drafted the brief from scratch, because that is indeed the standard.  Most importantly, do not just ask the AI tool to verify its own output.
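As a purely illustrative sketch, the verification step can at least be organized mechanically: pull every apparent case name out of the AI draft and turn it into a checklist for manual lookup in Westlaw or LexisNexis.  The regular expression below is simplistic and will miss many citation formats; the point is that it generates a to-do list for a human – it does not verify anything itself:

```python
import re

# Illustrative sketch: extract "X v. Y" style case names from an AI
# draft to build a checklist for manual verification in a real legal
# database. The pattern is simplistic and will miss many citation
# formats; it produces a to-do list, it does not verify anything.
CASE_NAME = re.compile(
    r"[A-Z][\w'&-]*(?: [A-Z][\w'&-]*)* v\. [A-Z][\w'&-]*(?: [A-Z][\w'&-]*)*"
)

def citation_checklist(draft: str) -> list[str]:
    """Return a deduplicated list of apparent case names for human review."""
    return sorted(set(CASE_NAME.findall(draft)))

draft = ("The court rejected the reasoning of Varghese v. China Southern "
         "Airlines and distinguished Martinez v. Delta Air Lines.")
for case in citation_checklist(draft):
    print("VERIFY:", case)
```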

  3. Know the Court and the Judge

Courts have certainly taken notice of AI, particularly after the Mata case, and are starting to issue standing orders concerning its use in briefing.  An attorney litigating a case must learn whether the court or judge permits the use of AI, has any special rules in place concerning its use, or requires any certification.  Each court or judge that issues such an order is impliedly authorizing the use of AI, subject to the order’s terms – perhaps in recognition that it is virtually impossible to determine whether a brief was written by an AI program.  Such orders usually reflect two concerns: (1) disclosure of the use of AI; and (2) certification from the attorney that they have checked the AI program’s work.

For example, U.S. District Judge Michael M. Baylson (E.D. Pa.) issued a standing order that says:

If any attorney for a party, or a pro se party, has used Artificial Intelligence (“AI”) in the preparation of any complaint, answer, motion, brief, or other paper, filed with the Court, and assigned to Judge Michael M. Baylson, MUST, in a clear and plain factual statement, disclose that AI has been used in any way in the preparation of the filing, and CERTIFY, that each and every citation to the law or the record in the paper, has been verified as accurate.

Several judges in Texas have also issued their own standing orders, and such orders concerning the use of AI will likely become commonplace.  Obviously, an attorney planning to use AI to generate a brief would be well advised to find any applicable standing orders and to comply with them carefully, particularly concerning disclosures and certifications.

Not long ago, electronic legal research was the next “new thing,” and practitioners looked upon it with skepticism – now it is commonplace and often the first step in legal research.  ChatGPT will likely follow the same path.  Lawyers can use this new technology to their advantage, but they should do so with care.

 About the Author

Edward S. Cheng is a partner at Sherin and Lodgen LLP in Boston.  He has over two decades of litigation experience, specializing in complex commercial disputes, professional malpractice cases, insurance coverage disputes, and real estate litigation. 
