AI and People with Disabilities: The Good and the Bad

Recently, NPR said that within the next five years, 25% of all jobs will be disrupted by artificial intelligence (AI). We are already seeing disruptions in the legal profession. For example, case management companies, such as Clio, have rolled out AI features. Casetext has been using AI in its legal research search engine for quite a while and recently developed a tool marketed as a virtual paralegal. I have been using AI for years as a user of voice dictation technology. In the first part of this article, I will review how AI is changing the practice of law. The second part covers how AI can be used in discriminatory ways and how to make sure that doesn’t happen. The EEOC has made it very clear in its strategic plan and in its publications that it is closely following the use of AI, as are other regulatory bodies.

How AI is Changing the Practice of Law

When you think of what lawyers do, particularly those who do not appear in the courtroom (which is most of us), it comes down to talking to clients, doing legal research, reading, and writing. Legal research is already being driven by AI. With respect to writing, I have seen fellow legal bloggers experiment with using AI to write a blog entry. The results to date were just okay, but that is only for now. Kevin O’Keefe of LexBlog, which is the platform I use for my blog/website, recently wrote an article about the inevitable growth of this use case. In that article, he mentioned several ways that AI could help blogging, including: content generation; editing and proofreading; content optimization; personalization; and image and video creation. As an inveterate blogger who is very protective of my own writing style, I am not going to have AI write my blog. However, I can see how it could be useful for generating content ideas and especially for suggesting content optimization, which has always been a bit of a mystery to me. With respect to visual content and videos, I always get concerned about accessibility for people with disabilities.

In another article I saw recently, Laura Lorek said she sees AI being used in a variety of ways in the legal profession, including: automating routine tasks such as document review and contract analysis; enhancing legal research and writing to increase efficiency; and blogging. She also expects that AI will very definitely affect the billable hour.

I also recently read an article in the Chronicle of Higher Education by a student who described using ChatGPT to structure an outline for a paper, with the student supplying the rest. It is easy enough to see how a brief or a memorandum of law might be put together the same way.

Another article, by Greg Lambert, suggests that lawyers could use AI in many ways, including: developing critical thinking skills; enhancing legal research skills; developing writing skills; and building business development skills.

In an article by Sean La Roque-Doherty, he points out that companies are already using AI to scour the Internet and capture and filter vast amounts of information about potential jurors. The article noted that AI can help jury consultants and lawyers analyze case information, biographical data from juror questionnaires, and community surveys to help pick jurors. Also, data analytics and visualization tools can surface data for litigants to review and allow them to frame the right questions in order to get the answers needed to challenge jurors for cause or use peremptory strikes. Another company offers social media surveillance searches on prospective jurors. AI can also help train attorneys in jury selection by using simulated data sets gathered from surveys. Even so, Sean notes that AI in jury selection is imperfect, because human beings are still involved and it is impossible to anticipate every factor that could go into a human being’s decision. I have previously seen similar tools with respect to judges. I am not a litigator, but I will confess that it is scary to think about AI tools being used in jury selection and in anticipating judicial opinions or outcomes, especially since both sides of a case do not always have the same resources.

As I was drafting this article, I received an email from Lex Machina introducing new legal analytics for real property litigation. In particular, it claimed that Lex Machina will help answer strategic questions such as:

  1. Over the last three years, what land condemnation cases had the highest amount of real property just compensation damages awarded? What were the specific amounts and what happened in those cases?
  2. Which districts had the highest number of real property cases filed that involve torts to land claims?
  3. How often do defendants prevail on judgment on the pleadings in real property cases involving rent/lease/ejectment claims in the Central District of California?
  4. Which law firms have the most experience representing defendants in real property cases involving foreclosures before Judge Virginia Phillips?
  5. What is the median time to termination for real property cases involving land condemnation presided over by Judge Brian Martinotti?
  6. Who are the most active plaintiffs in the Southern District of New York?

It is not a stretch to see how this kind of legal analysis could be used across a variety of subject areas. In fact, if you go to the Lex Machina website, it claims (I have not used the product myself) to be able to do the following: 1) analyze courts and judges; 2) evaluate opposing counsel; 3) evaluate parties in your matter; and 4) help you craft a winning case strategy.

AI and Discrimination

Recently, the DOJ and the EEOC issued guidance on AI recognizing that it can be used for nefarious purposes. The DOJ guidance noted the following: 1) DOJ enforces disability discrimination laws with respect to state and local government employers; 2) DOJ will look seriously at whether AI tools screen out persons with disabilities; and 3) employers must use accessible tests that measure the applicant’s job skills and not the disability, or they must make other adjustments to the hiring process so that a qualified person is not eliminated because of a disability.

The EEOC guidance is well worth the read and can be summarized as follows:

  1. The guidance defines software, algorithms, and artificial intelligence.
  2. As with all guidance, you still have to read it critically. For example, the EEOC guidance has a misplaced focus on “current disability.” It also refers to “painful,” which is interesting in light of the case law out there with respect to painful allegations and reasonable accommodations.
  3. Compliance with the ADA is a nondelegable duty.
  4. Don’t forget about the disability-related inquiries and medical examination scheme of the ADA.
  5. You can’t use AI to screen out people with disabilities.
  6. Remember reasonable accommodation obligations, and remember that magic words are not required to start the interactive process of seeking a reasonable accommodation.
  7. Transparency of the AI tool is important.
  8. Be careful about requesting excessive documentation.
  9. Be aware of the risk of chatbots seizing on gaps in employment or unusual speech patterns (it is not unusual for people with disabilities to have unusual speech patterns).
  10. Algorithms being free of bias for purposes of Title VII is not the same thing as algorithms being free of bias with respect to disability discrimination.
  11. Never forget about essential functions of the job and the need for an individualized analysis.
  12. Don’t forget about the medical examination/disability-related inquiries scheme that goes with Title I of the ADA.

The EEOC, DOJ, Consumer Financial Protection Bureau, and Federal Trade Commission also issued a Joint Statement on AI Discrimination and Bias. In that statement, they said that they would be focusing on the following:

  • Data and data sets. Automated system outcomes can be skewed by unrepresentative or imbalanced data sets, data sets incorporating historical bias, or data sets containing other types of errors. Automated systems can also correlate data with protected classes, which can lead to discriminatory outcomes. (A toy illustration of this correlation point appears just after this list.)
  • Model opacity and access. Many automated systems are black boxes whose internal workings are not clear to most people, and in some cases not even to the developer of the tool. That lack of transparency makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair.
  • Design and use. Developers do not always understand or account for the context in which private or public entities use their automated systems. Developers may design a system on the basis of flawed assumptions about its users and about the underlying practices or procedures that the AI tool may replace.
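
To make the data-correlation concern concrete, here is a minimal sketch in Python. The applicants, the employment-gap feature, and the six-month cutoff are all hypothetical and invented for illustration; the point is only that a screening rule that never sees disability status can still reject applicants with disabilities at a much higher rate when one of its inputs correlates with disability.

# Toy illustration (entirely hypothetical data) of the agencies' point that an
# automated system never given a protected characteristic can still produce
# skewed outcomes when an input feature correlates with that characteristic.

# Each hypothetical applicant: (months of employment gap, has_disability).
# In this made-up data, longer gaps are more common among applicants with
# disabilities (for example, time off for treatment).
applicants = [
    (0, False), (1, False), (2, False), (0, False), (3, True),
    (9, True), (14, True), (1, False), (12, True), (2, False),
]

def passes_screen(gap_months):
    # A "neutral" screening rule that never looks at disability status.
    return gap_months <= 6

def pass_rate(group):
    # Share of the group that the screening rule advances.
    return sum(passes_screen(gap) for gap, _ in group) / len(group)

with_disability = [a for a in applicants if a[1]]
without_disability = [a for a in applicants if not a[1]]

print(f"Pass rate, disclosed disability:    {pass_rate(with_disability):.0%}")
print(f"Pass rate, no disclosed disability: {pass_rate(without_disability):.0%}")

In this made-up data, the “neutral” gap-length screen passes every applicant without a disclosed disability and only a quarter of those with one, which is exactly the kind of outcome the agencies say they will be watching for.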

What are some preventive steps for avoiding AI discrimination? For that, I recommend a law review article by EEOC Commissioner Keith Sonderling along with Bradford J. Kelly and Lance Casimir. In that article, the authors suggest several steps that can be taken to ensure that AI is beneficial rather than discriminatory, including keeping the following in mind:

1) Know your data. Be vigilant about developing, applying, and modifying the data used to train and run the algorithm that screens and evaluates potential candidates and applicants in recruiting programs. The data should be as complete as possible, with no missing or unreliable factors, and voluminous enough to provide statistically meaningful results. Using AI for employment decision-making also requires avoiding biased data from sources such as social media and data brokers, as those sources can be error-prone.

2) Monitor and audit AI uses, both qualitatively and quantitatively, on a continuing basis or at least once a year, and memorialize the findings. (A minimal sketch of one possible quantitative check appears after this list.)

3) Supervise the process. Charge a person or a team with overseeing the processes and results of AI tools in order to ensure that they are not only serving legitimate objectives but also avoiding improper outcomes.
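
The authors do not prescribe any particular method for the quantitative side of step two, but here is a minimal sketch in Python of what one such check might look like, using entirely hypothetical group names and counts. It compares each group’s selection rate to the highest group’s rate and flags large gaps for closer review, borrowing the EEOC’s traditional four-fifths benchmark from Title VII adverse impact analysis purely as a screening signal, not as a legal conclusion.

def selection_rate(selected, applicants):
    # Fraction of applicants in a group that the AI tool advanced.
    return selected / applicants if applicants else 0.0

def audit(groups, threshold=0.8):
    # Compare each group's selection rate to the highest group's rate.
    # A ratio below the threshold (the EEOC's traditional "four-fifths"
    # benchmark from Title VII adverse impact analysis) flags the group
    # for closer, individualized review -- a screening signal only,
    # not a legal conclusion.
    rates = {name: selection_rate(sel, total) for name, (sel, total) in groups.items()}
    highest = max(rates.values())
    for name, rate in rates.items():
        ratio = rate / highest if highest else 0.0
        flag = "REVIEW" if ratio < threshold else "ok"
        print(f"{name}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")

# Hypothetical counts: (number the AI tool advanced, number of applicants).
audit({
    "disclosed disability": (12, 80),
    "no disclosed disability": (45, 150),
})

Whatever method is actually used, the output of a check like this is exactly the kind of finding the authors suggest memorializing on a regular schedule.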

As mentioned at the top of this article, AI is expected to disrupt 25% of all jobs within the next five years. Law will certainly be among the affected professions. Certainly, AI will help attorneys, including attorneys with disabilities. However, AI could also easily be used for nefarious purposes. Government entities are most definitely watching. Even the creators of ChatGPT are calling for regulation of how it is used. A bipartisan group of senators is suggesting that a separate regulatory agency needs to be created to deal with it all. So, AI is definitely here to stay, and both its technology and its regulation are going to move rapidly. While the internet was allowed to develop on its own without much regulation, AI will probably not get the same leeway.

About the Author

William D. Goren has been dealing with the ADA as an attorney since 1990. His law and consulting practice, as well as his blog, Understanding the ADA, all focus on understanding what it means to comply with that law and related laws.
