I cannot teach anybody anything. I can only make them think.
—attributed to Socrates
Law is a science, and… all the available materials of that science are contained in printed books…
—Christopher C. Langdell
Discussions of artificial intelligence and law seek to locate the shrinking demarcation between the human-only parts of lawyering (multi-disciplinary integration, especially around strategy; “reading” the client; emotional intelligence) and those lawyering skills more efficiently accomplished by artificial intelligence (pattern-recognition research for document and contract review). But the more meaningful questions have gone unasked: will human-AI collaboration advance the lawyer’s counseling? Can an AI system help the human lawyer overcome bias and produce better decisions?
Advanced and dynamic artificial intelligence systems accelerate core, human-only legal reasoning.
Although many now consider the traditional law school program inadequate preparation for the increasingly tech-savvy counsel today’s market requires, there was a time when the most advanced and practical professional teaching was found in law schools.
Christopher Columbus Langdell introduced both the case method and the Socratic method of teaching law in his Harvard Law classes 150 years ago. Langdell’s goal was to induce the legal reasoning of actual cases through a series of specific questions (the Socratic method) that would expose the biases and preconceptions of the law student. In a common law system, this original-source-first focus properly established the primacy of case law study over lecturing on a generalized legal subject, allowing students to deepen case-specific legal reasoning. The case method, employed with the Socratic method, forces students to challenge their own inferences and fosters objective, less-biased legal decision-making. It was a truly disruptive approach to teaching that became the standard method of law school instruction (used to this day), and it greatly influenced graduate school teaching in other fields.
Lawyers are notoriously scared of math, but it’s essential that they accept how the independence and precision of mathematics can make them better lawyers. Attorneys must understand that the artificial intelligence underlying legal research in a deep-learning system transforms words into numbers; more specifically, words are mapped into vectors. This embedding of words into vectors captures the degree of similarity between words across the vector space. It’s not a simple, static, one-to-one mapping of a word to a number, but a more complex and accurate representation of the many facets of that word as used in actual context. Humans use language as an imperfect signpost for more complex thought, but deep-learning AI, with its greater (if narrow) cognitive power, doesn’t share human language biases (apart from those implicit in its corpus of words) or human limitations in finding clear mathematical similarities. The deep-learning system can guide the lawyer to otherwise untapped legal reasoning, enhancing human-only legal advising and making humans better lawyers.
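The word-to-vector mapping described above can be sketched in a few lines. The vectors below are invented toy values, not real learned embeddings (real systems such as word2vec learn hundreds of dimensions from large corpora), but they show how cosine similarity over a vector space captures that “contract” sits far closer to “agreement” than to an unrelated word:

```python
import math

# Toy, hand-made "embeddings" for illustration only -- real systems
# learn these vectors from large text corpora; the words and values
# here are invented for this sketch.
embeddings = {
    "contract":  [0.90, 0.80, 0.10],
    "agreement": [0.85, 0.75, 0.20],
    "banana":    [0.10, 0.05, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["contract"], embeddings["agreement"]))  # close to 1
print(cosine_similarity(embeddings["contract"], embeddings["banana"]))     # much lower
```

Because the similarity is computed over the whole vector, every dimension (every contextual facet of the word) contributes to the score, which is what makes this representation richer than a one-to-one word-to-number mapping.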
When the human lawyer engages with a dynamic, artificially intelligent, less-biased system that establishes legal connections, including some that a human may not yet have found, the lawyer will have stronger options in arriving at legal answers, and will understand (“learn”) that legal area more deeply. Although neural-net deep-learning interactions differ from the Socratic method, the result should in large part be the same: an accelerated learning of how to objectively discern the best legal options, which provides a stronger foundation for better lawyering. The system can illuminate options and facilitate a broader and deeper understanding of the legal issue.
This past fall, AlphaGo Zero (a self-teaching version of DeepMind’s Go software) did something extraordinary. In 40 days, it trained itself to master the ancient board game Go without human intervention or training. It played millions of games against itself, steadily increasing its proficiency until it surpassed every human player and then every other machine. The only human input was the rules of the game and the reward signal that told the system whether each move it tried increased its chance of winning. On its own (though over the course of millions of games of exploration), it developed (dare we say “evolved”?) strategies and skills that humans had refined over thousands of years. Not only did it independently rediscover the human strategies, but, more importantly, it came up with new and better ways to win. AlphaGo Zero teaches us that the cognitive power of an artificially intelligent system can surpass human reasoning. Note this doesn’t make it “smarter” than humans, but it is certainly faster, and a potential “force multiplier” for human ingenuity.
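The self-play loop described above can be illustrated on a far smaller game. The sketch below is a hypothetical toy, not DeepMind’s method: it uses single-pile Nim instead of Go and a lookup table instead of a neural network. As with AlphaGo Zero, though, the only human inputs are the rules and a win/loss signal, yet repeated self-play rediscovers the game’s known optimal strategy (leave your opponent a multiple of four stones):

```python
import random

random.seed(0)

# Toy stand-in for Go: single-pile Nim.  Players alternate taking
# 1-3 stones; whoever takes the last stone wins.  A lookup table Q
# replaces AlphaGo Zero's neural network.  The only "human inputs"
# are the legal moves and the win/loss signal.
PILE, MOVES = 12, (1, 2, 3)
Q = {s: {m: 0.0 for m in MOVES if m <= s} for s in range(1, PILE + 1)}

def best(state):
    """The move the table currently rates highest for the player to move."""
    return max(Q[state], key=Q[state].get)

for _ in range(20000):                    # self-play episodes
    state = random.randint(1, PILE)
    while state > 0:
        move = random.choice([m for m in MOVES if m <= state])
        nxt = state - move
        if nxt == 0:
            Q[state][move] = 1.0          # taking the last stone wins
        else:
            # My move's value is the negation of the opponent's best reply.
            Q[state][move] = -max(Q[nxt].values())
        state = nxt

print({s: best(s) for s in range(1, 8)})
```

After enough games the table encodes the strategy a human would derive analytically: from 5, 6, or 7 stones, take exactly enough to leave 4. The point of the sketch is the shape of the loop, random self-play plus a terminal reward propagating backward, not the scale, which in AlphaGo Zero’s case was vastly larger.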
Lawyers aren’t playing a game, but they are indeed looking for connections in the common law; connections that can be clarified by translating them into fluid but less-biased algorithms. The fear of an AI “jobpocalypse” should be replaced by the acceptance of a tool that better finds and explains legal connections. The bias-exposing disruption of the Socratic method finds its counterpart in the truths an AI system can discern as it searches the man-made corpus of the common law, making that law more open as well as more digestible.
About the Author
Patrick F. Gleason is an attorney. Follow him on Twitter @PatrickFGleason.