Artificial Intelligence (AI) is everywhere these days—recognizing the words we speak, answering the questions we ask, translating the languages we can’t read, finding the danger spots in our x-rays, driving autonomous vehicles, improving results when we search, and, yes, slipping advertisements into our daily data stream with ever-greater precision.
These are amazing tools and they are improving rapidly. In speech recognition, for example, software is now more accurate than human listeners. Digital assistants like Alexa and Siri do sometimes answer in idiotic and entertaining ways, but the trend for all of these tools is upward—thanks to the towering trio of bigger data, faster computers in giant clouds, and smarter algorithms.
Let’s take a look at what AI is (and is not), how it’s being used in law today, and what its implications are for the profession.
What is AI?
First, AI isn’t really “artificial”—it’s all created by humans through very, very hard work—and it isn’t really “intelligent” either—the software doesn’t know what it’s doing or why. Second, AI is not a “what.” We can’t point to anything and say, “Yup, that’s an AI, right over there by the door.”
Rather, AI is a “how”—a large and growing collection of mathematical methods embodied in software for doing narrowly defined but very useful tasks: identifying your friends’ faces in photographs; classifying a chunk of text as about X rather than Y; asking intelligent questions to guide you to the best answers; navigating the universe of possible moves in a game of Go; correlating traffic rules with observations of objects around a moving vehicle; running robots in auto factories, and so on.
AI software incorporates knowledge and reasoning of three kinds:
- Semantic: Structures of language and relationships among concepts and objects in a topic domain.
- Logical: Rules of inference about objects, facts, and entities.
- Statistical: Patterns of pixels, texts, and events from which predictive probabilities can be calculated by “machine learning” methods.
Machine-learning software is different from traditional software in this key respect: the specific algorithms used to, say, distinguish cats from dogs in a billion photos, are adapted (“trained”) from and by foundational algorithms created by human programmers. Writing software to write software, in other words.
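A toy illustration of that idea, with invented training snippets and no relation to any vendor's actual system: a tiny Naive Bayes text classifier whose decision rule is derived from labeled examples by a general-purpose training algorithm, rather than hand-coded by a programmer.

```python
# The classification rule here is not written by hand; it is "trained"
# (derived from labeled examples) by a general-purpose algorithm.
# Illustrative only -- real systems use vastly more data and richer models.
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and label totals."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive Bayes with add-one smoothing: return the most probable label."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        n = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [
    ("the lessee shall pay rent monthly", "contract"),
    ("party agrees to indemnify and hold harmless", "contract"),
    ("plaintiff alleges negligence by defendant", "pleading"),
    ("defendant moves to dismiss the complaint", "pleading"),
]
model = train(examples)
print(classify("the tenant shall pay the landlord", *model))  # prints "contract"
```

The same `train` and `classify` code would learn a completely different rule from a different set of labeled examples, which is the sense in which the programmers write software that writes software.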
What is AI doing for Law?
Artificial intelligence is hard at work in our profession—automating expertise, improving legal research, predicting case outcomes, analyzing contracts, and taming the e-discovery monster—though often no “AI Inside” label is on the box.
These applications of AI can enable lawyers to:
- Serve more clients more effectively at lower cost.
- Create new revenue streams not dependent on hours worked.
- Focus time and expertise on work that requires the uniquely human and professional skills of empathy, judgment, creativity, and advocacy.
- Increase access to justice by meeting the legal needs of the poor and middle class.
Now, a very quick tour. Please keep in mind that the companies mentioned here are just a few of the innovators in this dynamic field. For a broader view, try the Neota Logic AI in Law Map and Bob Ambrogi’s Legal Tech Startups List.
Since legal research first went digital and for decades afterward, we have hunted for precedents by typing a few words, phrases, and Boolean connectors into a box, waiting for a list of results, and then plowing through them one at a time. It’s different now. Innovators like Casetext, DocketAlarm, Fastcase, and ROSS have flipped the bit.
With Casetext CARA, for example, you can toss a brief in the virtual hopper and get back a list of cases that are relevant but not cited—the ones your opponent couldn’t distinguish or (on a draft, we hope) the ones you missed in your first round of research. Fastcase embeds citation analysis in search results so you know instantly how a case has been treated, and the Bad Law Bot warns of negative treatment, especially helpful for avoiding argumentative traps. ROSS invites you to conduct research “like you’re talking to another lawyer” and delivers key quotes and points of law.
Analytics & Prediction
Predicting case outcomes is part of our stock in trade. In the past, prediction has been the province of personal experience, firm experience, local knowledge, good judgment, and a lot of guesswork. Now, predictions can be built on data, applying old-fashioned statistical techniques as well as new-fangled AI techniques.
Why? Because more and more data about the operation of the legal system is becoming available in digital form. For example, several years ago Lex Machina built, at great cost, a large, deeply detailed, and proprietary database of patent cases, coupled with software to help lawyers analyze and predict outcomes. Since then, accelerated by its acquisition by LexisNexis, Lex Machina has applied its data-driven approach to nine other practice areas.
DocketAlarm mastered the Pacer maze and then other public data sources to provide analytical profiles on judges, parties, law firms, and attorneys, identifying win rates, time to decision, and other outcome-indicative factors. Published cases are the crests of the waves; data from dockets, motions, briefs, transcripts, and other records are the nutrient-rich seas beneath.
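The mechanics of such profiles are simple once the docket data has been parsed. A minimal sketch, using invented case records and field names (not DocketAlarm's actual data model), of computing per-judge win rates and time to decision:

```python
# Toy docket analytics: per-judge win rate and median time to decision.
# The records and field names below are invented for illustration; real
# profiles are built from millions of parsed PACER docket entries.
from collections import defaultdict

cases = [
    {"judge": "Smith", "plaintiff_won": True,  "days_to_decision": 310},
    {"judge": "Smith", "plaintiff_won": False, "days_to_decision": 150},
    {"judge": "Smith", "plaintiff_won": True,  "days_to_decision": 420},
    {"judge": "Jones", "plaintiff_won": False, "days_to_decision": 95},
    {"judge": "Jones", "plaintiff_won": False, "days_to_decision": 210},
]

def judge_profiles(cases):
    """Group cases by judge and summarize outcome-indicative statistics."""
    grouped = defaultdict(list)
    for case in cases:
        grouped[case["judge"]].append(case)
    profiles = {}
    for judge, rows in grouped.items():
        profiles[judge] = {
            "plaintiff_win_rate": sum(r["plaintiff_won"] for r in rows) / len(rows),
            "median_days": sorted(r["days_to_decision"] for r in rows)[len(rows) // 2],
        }
    return profiles

print(judge_profiles(cases))
```

The hard part, of course, is not this arithmetic but the upstream work of extracting clean, structured records from messy dockets, motions, and briefs.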
Premonition.ai is blunt and bold: their Lawyers By Win Rate service, “the world’s largest litigation database,” will tell clients “which lawyers win which cases in front of which judges.”
AI-powered expertise systems go beyond automation of documents to enable automation of substantive legal guidance and processes—like TurboTax.
Neota Logic offers a hybrid reasoning platform, which combines expert systems and other artificial intelligence techniques, including on-demand machine learning, to deliver answers to legal, compliance, and policy questions. (Disclosure: I am co-founder and chief strategy officer of Neota Logic.)
Of course, automation of document drafting has been a mainstay of legal technology for many years and is powerfully productive. In addition to its use by the private bar, document automation has served millions of self-represented people through Law Help Interactive, the Pro Bono Net platform for applications created with HotDocs (under a generous nonprofit license) and A2J Author from Chicago-Kent Law School and the Center for Computer-Assisted Legal Instruction (CALI).
Businesses recognize that they can reduce risk and cost by managing the rights and obligations in their contracts, and by rationalizing the processes by which contracts are initiated, negotiated, drafted, and managed through their lifecycle, from execution to expiration.
Although contract analysis draws upon some of the AI techniques used for research and e-discovery, it is the tailoring of algorithms to the specialized tasks of contract analysis that makes for success. Across the lifecycle, we would like to:
- Identify and classify the “contracts” in a document collection.
- Identify and extract key clauses.
- Compare and evaluate key clauses in a draft contract against a company’s benchmarks or “market practice.”
- Extract clauses, names, dates, and other key terms as inputs to a custom risk analysis application or a contract lifecycle management system (CLMS) or to enrich the metadata in a document management system (DMS).
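To make the second and third tasks concrete, here is a deliberately simple sketch of clause spotting by keyword pattern. The patterns and clause names are illustrative assumptions, not any vendor's actual rules; production systems layer trained classifiers on top of rules like these.

```python
# Toy clause extraction: spot and label common contract clauses by pattern.
# Patterns are illustrative only; real systems combine rules with models
# trained on large collections of annotated contracts.
import re

CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws of (?P<value>[A-Za-z ]+)", re.I),
    "termination":   re.compile(r"terminate this agreement upon (?P<value>[\w ]+ notice)", re.I),
    "assignment":    re.compile(r"may not assign this agreement", re.I),
}

def extract_clauses(contract_text):
    """Return {clause_type: matched text} for each pattern found."""
    found = {}
    for clause_type, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(contract_text)
        if match:
            found[clause_type] = match.group(0)
    return found

sample = ("This Agreement shall be governed by the laws of New York. "
          "Either party may terminate this Agreement upon thirty days notice. "
          "The Supplier may not assign this Agreement without consent.")
print(extract_clauses(sample))
```

The extracted terms can then feed a risk-scoring application, a CLMS, or DMS metadata, as described above.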
Contract analytics has been the fastest-growing area in AI-powered law, with new entrants joining the (relative) veterans eBrevia, Kira, LawGeex, RAVN, and Seal almost every month. Companies tend to specialize in one of the tasks described above, though there is a movement toward end-to-end solutions.
What’s the impact?
These advances in technology invite not only the optimistic question—what next?—but also a worried question: will lawyers be replaced by robots?
My answer: No, the robots are not coming for our jobs. Lawyers have, and our clients most need, the distinctively human capabilities of listening, understanding, empathy, judgment, creativity, argument, and advocacy. A majority of Americans at all income levels below the top need and often do not get or seek legal advice. There is plenty of work for lawyers.
And, as one British survey reports, people really do prefer to get legal advice from people. (They were more willing to entertain the idea of robot Members of Parliament.)
Nevertheless, the robots are coming for some of our work. As McKinsey frames it, the right question is not whether jobs will be automated but which tasks can be.
Professors of economics and law Frank Levy and Dana Remus studied a deep set of actual, anonymized billing data to understand what lawyers actually do all day, and whether those tasks have high, medium, or low susceptibility to automation. Here’s what they concluded:
My own statistically unsupported estimate is that the high-risk slice is about twice, and the medium-risk slice about half, what their figures show. McKinsey puts the high-risk share at around 5%.
Advances in technology—algorithms, data sets, computing power, startups focused on the law—will undoubtedly move these numbers up over time. Lawyers must therefore be vigilant to shape their practices toward the work that only they can do, leaving the other stuff to the increasingly capable machines.
As Bill Gates said, “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”
About the Author
Michael Mills is the co-founder and chief strategy officer of Neota Logic. Contact him on Twitter @michelmillsny.