Model Ethics Rules as Applied to Artificial Intelligence

Today, the public hears more and more about what artificial intelligence (AI) can do in the real world, capabilities well beyond the robots and androids portrayed in popular science fiction movies and novels. Many people still think of AI as a tool for simple tasks, like using voice commands to call a friend or order household goods from online retailers. But AI is also detecting cancer more accurately than pathologists reading photographic slides, and governments are applying AI to identify specific individuals who are violating COVID-19 quarantine.

Commercially viable AI took shape in the mid-2010s, with the practical convergence of cloud computing and big data. In short, affordable, large-scale probabilistic computation over accumulated data stored in the cloud gave rise to our present commercial uses of AI. Commercially viable AI lets software applications apply accurate artificial reasoning to mundane tasks in our daily lives right now, with limitless possibilities in the future. Today, with a simple off-the-shelf laptop, computer scientists can scale their computing power as needed, applying statistical algorithms to cloud-hosted datasets ranging from a few hundred to hundreds of billions of data points to predict future outcomes and glean new insights across a wide range of inquiries.

How AI is Generally Applied to the Legal Field

Presently, the area of computer science with the greatest impact on practicing lawyers is natural language processing (NLP). NLP capitalizes on statistical tendencies observed in human language. For example, some words have a strong tendency to clump together more than others, and computer programs can “learn” and exploit that statistical word clumping. In the AI field of legal analytics or informatics, a lawyer might ask a computer or data scientist to create an NLP algorithm or “model” that looks for clumping in a familiar set of words used in the legal profession. Such a model could review all opinions and orders delivered by Judge X to estimate the likelihood of a future ruling by Judge X under similar circumstances. A slight variant would be Judge X asking the same computer or data scientist to help create software for auto-generating his or her orders from the dataset of preexisting opinions and orders.
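The word-clumping idea behind such models can be sketched in a few lines of Python. This toy example (the sample text is invented, not drawn from any real judge's orders) simply counts adjacent word pairs, the most basic form of the statistical clumping an NLP model exploits:

```python
from collections import Counter

def bigram_counts(text):
    """Count adjacent word pairs ("clumps") in a text."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

# Hypothetical snippets standing in for a judge's past orders.
orders = (
    "the motion to dismiss is granted "
    "the motion to compel is denied "
    "the motion to dismiss is denied"
)

counts = bigram_counts(orders)
print(counts[("motion", "to")])   # "motion to" clumps in every order
```

Real legal analytics tools build on far richer statistics than raw bigram counts, but the underlying principle, learning which words co-occur, is the same.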

A wide array of legal software packages is available to auto-generate briefs, legal research results, and administrative or governmental forms; to locate key evidence in a document dump using text and audio AI software tools; and even to provide virtual receptionists and paralegals in the form of chatbots. At this time, a practicing lawyer is most likely to encounter AI through this kind of AI-enhanced software.

Big Data Analytics, Litigation, and E-Discovery

Moneyball is a movie based on the true story of how the Oakland A’s used computer-based statistical modeling to build a winning team on a severely limited budget. This “moneyballing” discipline of computer science is commonly referred to as data analytics or business intelligence. Data or computer scientists will often “webscrape” the publicly available internet, collecting at historically unprecedented scale every bit of digital information about a desired topic or target, to form a collective data pool, called a “data lake,” in a cloud storage database.
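The mechanics of webscraping can be sketched with Python's standard library alone. This toy example (the page content is invented) extracts the visible text fragments that would then be written to a data lake; a production scraper would fetch pages over the network and operate at vastly larger scale:

```python
from html.parser import HTMLParser

class TextScraper(HTMLParser):
    """Collect the visible text of an HTML page into a list of fragments."""
    def __init__(self):
        super().__init__()
        self.fragments = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.fragments.append(text)

# In a real pipeline this HTML would be fetched over the network
# (e.g., with urllib) and the fragments stored in cloud storage.
page = "<html><body><h1>Opinion</h1><p>Motion to dismiss granted.</p></body></html>"
scraper = TextScraper()
scraper.feed(page)
print(scraper.fragments)  # ['Opinion', 'Motion to dismiss granted.']
```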

Today, global law firms and corporations are effectively using statistical modeling and commercial AI to glean insights or intelligence on all aspects of litigation, especially eDiscovery, and on the activity of judges and competing law firms and attorneys. Those with the most resources have the greatest competitive advantage in data analytics today. In time, the cost of data analytics intelligence will come down for most legal practitioners as the technology matures.

Closing the Justice Gap

Historically, in the field of computer science, software such as open-source platforms, and even the functional infrastructure of the internet, has been built on a democratic model of equal access for all users. A highly effective and active subset of lawyers and software developers is dedicated to using AI to make access to legal remedies truly democratic as well. These lawyers and developers believe that equal access to justice is a right and not a privilege, and they see AI as a means of producing the same work output a large law firm would provide. Notable groups in this effort include Legal Hackers and the Free Access to Law Movement (FALM). Other AI-driven software platforms, such as Torchlight Legal, provide pro bono immigration services for asylum cases.

One present objective is to remove the pervasive paywall of privilege that blocks access to legal services. AI-driven software tools that auto-generate documents for no-fault divorces and parking ticket appeals serve as a model for an affordable alternative to high-priced human lawyers. In time, as AI continues to mature, I expect AI software to handle even complex mergers and acquisitions and bankruptcy proceedings.

The ABA Model Rules of Professional Conduct and AI

ABA Model Rule of Professional Conduct 1.1:

If you have arrived at this point and not yet fallen asleep, you have successfully satisfied the ethical requirements outlined in ABA Model Rule 1.1.

Soon, as legal software edges toward satisfying the Turing test (a test of whether a computer is capable of thinking like a human being), legal professionals will incorporate AI-driven software into their daily practice. The ethical and economic fears of lawyers being entirely replaced by AI software “robots” are largely unfounded. Today, AI generally assists and enhances the professional decisions made by lawyers, a concept computer science calls augmented intelligence. By analogy, AI-enhanced lawyering is like a road trip on cruise control: the driver can resume command of the car at any time.

It’s critical to note that AI software is fueled by data, including legal information that is likely to be subject to the duty of client confidentiality, and possibly evidentiary privilege. Accordingly, much of the use of AI software in a law practice is ethically rooted in the same discussions that relate to mitigating the existing risks associated with client data privacy and security.

Specifically, Rule 1.1 states: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness, and preparation necessary for the representation.”

Commentary to Rule 1.1 further clarifies and calls for some level of technical knowledge: “[t]o maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology… ” This requirement is reinforced by insurance carriers informing legal practitioners of the risks of using AI-driven software, such as, among others, cyber-attacks and data breaches.

To date, 37 states have adopted a technical competence rule arising from Rule 1.1. Lawyers should strive to understand the benefits and risks of applying new technologies in a legal practice. Ultimately, Rule 1.1 does not require lawyers to possess superior technical knowledge, only a general understanding of the technology sufficient to consult effectively with experts when designing, adopting, and using new AI software applications in their practice, and to advocate ethically for their clients.

Let’s look at three other scenarios regarding the model rules of professional conduct and AI.

ABA Model Rule of Professional Conduct 1.6

Rule 1.6 (a), with limited exceptions, states, “A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent… ” Rule 1.6(c) states, “A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”

Frequently in e-discovery, data analytics, and legal research, data is stored and retrieved in the cloud, and in big-data applications such as data analytics or AI modeling, data is retrieved in large quantities from a data lake, as discussed above. Legal practitioners should be transparent with their clients about how digital client information is stored and retrieved, with client confidentiality and privilege in mind. Legal practitioners should consider incorporating a digital information and AI software disclosure statement into their engagement letters or initial client interview packets.

Thankfully, similar issues of patient data confidentiality have already been addressed by many software tools in the health care field, as mandated by the Affordable Care Act and earlier federal laws such as HIPAA. Because of this federally mandated head start toward full automation of electronic health records, legal practitioners can often incorporate existing commercial patient privacy software and health care IT network infrastructure as an ethical and economical basis for adding leading-edge, robust privacy and security features to their law practice.

Consider the example of a legal practitioner applying anonymization algorithms and techniques to client data as it is received and stored. Categories of information in the collected raw data, such as names, addresses, expenditures, and other private details, can be redacted using digital anonymization strategies to preserve client confidentiality under Rule 1.6. To limit costs, it is critical that the legal practitioner communicate clearly to technical professionals which categories of data require anonymization. In practice, the anonymized data is encrypted and can be used to build AI models while the anonymization stays in place. Such techniques are already used heavily in the medical field, so law practices can draw on a pool of experienced health care software professionals. Similarly, law practices can look to experienced financial industry software professionals to apply the latest anonymization, privacy, and security techniques used in banking and accounting.
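As a rough illustration of the anonymization step described above, the Python sketch below pseudonymizes sensitive fields with a salted one-way hash. The field names and salt are hypothetical; a real system would choose the sensitive categories with counsel and guard the salt as a protected secret:

```python
import hashlib

# Hypothetical categories chosen in consultation with counsel.
SENSITIVE_FIELDS = {"name", "address"}

def pseudonymize(record, salt="firm-secret"):
    """Replace sensitive fields with a one-way hash token so records
    can still be linked across a dataset without exposing identities."""
    out = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            token = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
            out[field] = f"ANON-{token}"
        else:
            out[field] = value
    return out

client = {"name": "Jane Doe", "address": "1 Main St", "matter": "no-fault divorce"}
print(pseudonymize(client))
```

Because the same input always yields the same token, anonymized records remain linkable for analytics and AI modeling while the underlying identities stay hidden.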

Despite all the technical jargon and concepts, a legal practitioner should understand that data is ultimately collected and used by humans with software tools. Fundamentally, humans design and curate the datasets that make up legal client records and that drive AI legal software algorithms. As such, lawyers must consider the human bias inherent in any dataset used by AI algorithms, and must remain informed and in constant communication with their software professionals to ensure that AI results are built on the highest-quality data.

U.S. export controls can also require certain controlled technical data to stay within the physical boundaries of the United States (see 15 C.F.R. § 730 et seq.). In practice, a legal practitioner should be mindful of where the computer servers storing client data are geographically located when that data is accessed “on the cloud.” It is easy for software professionals to determine where a client’s digital data resides and to request that the data servers be located exclusively in the United States. Amazon Web Services, among other cloud vendors, already provides software architectures for HIPAA privacy that can readily be adapted to building legal databases that respect confidentiality, as well as controls for selecting the physical location of the cloud server network that holds the digital data.
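A simple location policy along these lines can even be checked automatically. This hypothetical sketch (the region names follow AWS conventions, but the allow-list and bucket inventory are invented for illustration) flags any storage location outside the United States:

```python
# Hypothetical allow-list of U.S.-only cloud regions (AWS-style names).
ALLOWED_US_REGIONS = {"us-east-1", "us-east-2", "us-west-1", "us-west-2"}

def non_compliant_buckets(bucket_regions):
    """Return the storage buckets whose region falls outside the U.S. allow-list."""
    return {name: region
            for name, region in bucket_regions.items()
            if region not in ALLOWED_US_REGIONS}

# Illustrative inventory of a firm's cloud storage buckets.
inventory = {
    "client-records": "us-east-1",
    "ediscovery-lake": "eu-west-1",
}
print(non_compliant_buckets(inventory))  # flags the EU-hosted bucket
```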

ABA Model Rule of Professional Conduct 5.3

Rule 5.3 states, “Responsibilities Regarding Non-lawyer Assistance… (b) the lawyer having direct supervisory authority over the nonlawyer shall make reasonable efforts to ensure that the person’s conduct is compatible with the professional obligation of the lawyer…”

While the above discussion of Rule 1.6 applied confidentiality and privilege strategies to the data, Rule 5.3 similarly extends those requirements to “legal assistance” provided by legal software and software professionals. This includes, for example, third-party software professionals as well as assistance from AI-driven legal software itself.

In practice, in light of Rule 5.3, software professionals must be aware of a lawyer’s obligations to clients under the Rules of Professional Conduct. Legal practitioners should strive to educate these software professionals on legal confidentiality and privilege through workshops, online videos, and checklist handouts. A lawyer should help software professionals understand client confidentiality and the preservation of evidentiary privilege as they apply to digital data privacy, security, and the use of client data in artificial intelligence software tools. Conversely, to maintain client confidentiality and privilege under Rule 1.6(c), a lawyer should understand software professionals’ workflows, such as Agile approaches to software development, along with the technical concepts of data privacy, security, and AI applications, so that the lawyer can communicate effectively within tech culture.

A slightly more difficult consideration is applying Rule 5.3 to ensure that the software itself, AI-driven or not, adheres to the Model Rules, namely in handling the privacy and security of client data, including attorney-client confidentiality and privilege. Many, but not all, software professionals are mindful of data privacy and security when developing commercial software; they may still need additional education from legal practitioners on how data privacy and security relate to client confidentiality and privilege. Currently, no comprehensive federal law in the U.S. requires software professionals to adhere to data privacy and security standards.

As a practical matter, one good legal reference on data privacy and security is the European Union’s General Data Protection Regulation (GDPR), and its legislative protégé in the U.S., the California Consumer Privacy Act (CCPA), which currently acts as a proxy for a federal body of data privacy and security law in the United States. Other helpful references are the existing tapestry of federal health and financial privacy laws.

ABA Model Rule of Professional Conduct 2.1

Rule 2.1 states, “In representing a client, a lawyer shall exercise independent professional judgment and render candid advice,” which potentially can involve referring “not only to law but other considerations as moral, economic, social and political factors, that may be relevant to the client’s situation.”

At this time, AI algorithms allow commercial legal software to automatically generate legal documents ranging from briefs to patent search results and judicial opinions. The economic and time-saving temptation simply to sign off on AI-generated work product is great for legal practitioners, especially in private practice.

Rule 2.1 directly addresses a lawyer’s ethical duty to resist the temptation to rely entirely on the output of AI legal software, and instead to exercise independent professional judgment over the conclusions the software renders. Under the Rule, a lawyer’s independent judgment goes beyond purely legal matters; it must also account for the totality of the client’s situation, namely moral, economic, social, and political factors, to remain within the four corners of Rule 2.1. Indeed, the intent is for legal practitioners to think long and hard about their clients’ interests before relying on AI-generated work product.


In general, AI technologies will create unique challenges for legal practitioners beyond those presented by collecting data for cloud computing while ensuring lawyer-client confidentiality and privilege. Client data is being collected, managed, used, and stored indefinitely in new ways with today’s AI technology. The Model Rules of Professional Conduct are clear in requiring lawyers to ensure these evolving tools do not endanger client confidentiality and privilege.

About the Author

Rafael “Rafa” Baca is a patent attorney with The Adelante IP Law Group, a software developer, and chair of the ABA Artificial Intelligence Committee.
