When Artificial Intelligence Discriminates: A Look at the First AI Employment Lawsuit and What It Means

Artificial intelligence (AI) is being used in nearly every aspect of business today, but at what cost? A recent lawsuit brought by the EEOC demonstrates one of the risks of using AI in the recruiting and hiring process, and this case is likely just the first of many such AI employment discrimination lawsuits.


The EEOC has been keeping a close eye on the potential challenges of AI and its use by employers. In 2021, the agency launched the Artificial Intelligence and Algorithmic Fairness Initiative to “ensure that the use of software, including artificial intelligence (AI), machine learning, and other emerging technologies used in hiring and other employment decisions complies with the federal civil rights laws that the EEOC enforces.”


In May 2022, the EEOC sued three Chinese companies doing business as iTutorGroup, alleging that the companies’ AI-powered recruitment software automatically rejected older applicants because of their age. iTutorGroup hires U.S. residents to provide online English tutoring to students in China. The complaint alleged that the software rejected female applicants aged 55 or older and male applicants aged 60 or older, screening out more than 200 qualified U.S. applicants. In September 2023, the EEOC announced that it had settled the case for $365,000 and other relief; the money will be distributed to the rejected applicants.


We talked to several of Miles Mediation & Arbitration’s employment neutrals — Steve Dunn (Georgia), Donna V. Smith (Florida), and Tanya Tate (Georgia) — about AI, its use in employment decisions, and the potential impact of this novel case.


Q: As companies begin to use AI more frequently, do you think we’ll see more claims and lawsuits like the iTutorGroup case?


Steve Dunn:  I expect to see more employment discrimination cases involving AI. With the adoption of any new technology, there is a learning curve during which issues arise that create liability but seem obvious in retrospect and easy to avoid. Eventually, every case will involve AI in some fashion, just as it is commonplace for cases today to implicate email, the internet, and social media.    


Donna V. Smith: Pre-employment assessments in the form of skills tests, personality tests, filters, and keyword searches have been around for decades in one form or another, and they have been challenged repeatedly. Issues of validation and inherent bias permeated (and continue to permeate) the industry and the use of these tests. Whether a fancier “AI” driven by algorithms will fall prey to these same challenges remains to be seen, but I cannot see how they would not apply: the relevance of the test, search, or filter; disparate impact claims; the fairness of test administration; and validation to establish that the test or AI run “accurately measures that which it is intended to measure.” Is the sorting performed by AI truly an accurate prediction of job performance? Whether written or verbal, is the evaluation reliable?


Tanya Tate: Without a doubt, these claims will increase in the future. Studies show that approximately 50% of companies are now using AI to make, or assist with making, recruitment, hiring, promotion, retention, and demotion decisions, and that number is increasing daily. The EEOC is already gearing up in anticipation of an inevitable increase in these types of claims by issuing guidelines for employers that relate specifically to AI in employment and by announcing its Artificial Intelligence and Algorithmic Fairness Initiative in 2021. In fact, EEOC Chair Charlotte Burrows recently called the use of AI technology the “new civil rights frontier.”


The EEOC, in these new guidelines, has already deemed the use of AI to be a “selection procedure” under Title VII that accordingly must not have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin. However, one factor that may keep the number of these claims down, compared to other types of claims filed with the EEOC, is the opaque nature of AI selection tools: many applicants may never know they have been the victims of discrimination.


Q: How can companies use or rely on AI without running afoul of employment laws? 


Steve: If the EEOC’s allegations are true, the AI system iTutorGroup used may have been “artificial,” but it certainly was not “intelligent.” Automatically screening out applicants over a certain age is obviously not allowed. While it may be straightforward to program an AI not to have a discriminatory intent, it can be tricky to avoid having a discriminatory impact. For example, an AI may rank applicants by educational credentials, but if a college degree is not truly necessary for the job, the AI may inadvertently overlook qualified candidates.
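
To make the disparate-impact point concrete, here is a minimal sketch in Python of the kind of “four-fifths rule” check the EEOC’s longstanding guidance on selection procedures describes for spotting a substantially lower selection rate. The function names, group labels, and numbers are all hypothetical; this is an illustration, not anyone’s actual system.

```python
# A minimal, hypothetical sketch of an adverse-impact check in the spirit of
# the EEOC's "four-fifths" rule of thumb: a group's selection rate below 80%
# of the most-selected group's rate may indicate disparate impact.
# All names and numbers below are invented for illustration.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected_count, applicant_count)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return each group's impact ratio and whether it clears the threshold."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: (rate / top_rate, rate / top_rate >= threshold)
            for group, rate in rates.items()}

# Invented numbers: 48 of 100 younger applicants advanced vs. 12 of 50 older
results = four_fifths_check({"under_40": (48, 100), "40_and_over": (12, 50)})
for group, (ratio, ok) in results.items():
    print(f"{group}: impact ratio {ratio:.2f} -> "
          f"{'no flag' if ok else 'possible adverse impact'}")
```

On these invented numbers, the older group’s selection rate is half the younger group’s, well below the 80% benchmark; that is precisely the sort of pattern such a check is meant to surface before a tool goes into production.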


Donna: Stress and stress again at every level — from the board, to the C-suite, to front-line managers and employee groups (including unions) — that screening processes, tests, et cetera are reliable, validated, and keyed very specifically to the jobs at issue. And make sure it is not a one-track, single-elimination round — that there is a review and assessment of the resumes and candidates who are not selected for the next step to determine why. This goes to the issue of validating the process. Carefully carve out the positions that are not amenable to AI screening (or a truncated version of it) and those that are. For example, for highly technical or IT jobs: do you know the code or not? Can you pass a skills test as a welder? This secondary review of course carries with it the risk that inherent bias comes back to life and otherwise “unqualified” people are hired based on human assessment factors (which AI was supposed to avoid!).


Tanya: First, choose AI vendors carefully. AI relies upon an “automated employment decision tool” (AEDT) to make, or assist with making, employment decisions. While the computers that run these algorithmic decision-making tools obviously do not hold biases, it is important to remember that humans do the programming and can sometimes instill elements of bias. Most companies use an AI vendor to assist with, or to completely perform, the AI selection process functions. So, when selecting an AI vendor, companies should: 1) ask potential vendors what has been done to protect against human bias in the AI product; and 2) inquire as to whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII.


The importance of vendor selection cannot be overstated, considering that the EEOC, in its guidelines, indicated that employers can still be liable even if their AI selection platform was developed by an outside vendor. In addition, employers may be held responsible for the actions of their agents, which may include software vendors, if the employer has given them authority to act on the employer’s behalf. This may include situations where an employer relies on the results of a selection procedure that an agent administers on its behalf.


Second, conduct AI bias audits. There are now companies that perform AI bias audits of technology platforms used for hiring or promotion decisions. These audits are designed to ensure that the AI tools used in such systems don’t discriminate on the basis of sex, race, and ethnicity. Conducting an annual AI audit is one way for companies to take a proactive role in ensuring that both internally developed AI tools and any AI used in vendor systems for employment decisions are regularly reviewed for bias by qualified third parties.


Of note, New York City recently passed a law (Local Law 144) requiring certain employers to conduct annual AI bias audits and publish the results on their websites. It is likely that required AI audits will slowly, but surely, become the rule rather than the exception.
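
For a sense of what such an audit actually computes, here is a hedged sketch in Python of an impact-ratio table in the spirit of the New York City requirement: each category’s selection rate divided by the most-selected category’s rate, reported per characteristic and intersectionally. The records and category labels are invented, and a real audit would follow the law’s precise definitions and an independent auditor’s methodology.

```python
# Hypothetical sketch of the impact-ratio table a bias audit might publish.
# Candidate records and categories below are invented for illustration.
from collections import defaultdict

# Each record: (sex, race_ethnicity, selected_by_tool)
candidates = [
    ("female", "hispanic", True),  ("female", "white", False),
    ("male",   "black",    True),  ("male",   "white", True),
    ("female", "black",    False), ("male",   "hispanic", True),
    ("female", "white",    True),  ("male",   "black", False),
]

def impact_ratios(records, key):
    """Selection rate per category, scaled by the highest category's rate."""
    tallies = defaultdict(lambda: [0, 0])  # category -> [selected, total]
    for record in records:
        category = key(record)
        tallies[category][0] += int(record[2])
        tallies[category][1] += 1
    rates = {cat: sel / total for cat, (sel, total) in tallies.items()}
    top_rate = max(rates.values())
    return {cat: rate / top_rate for cat, rate in rates.items()}

print(impact_ratios(candidates, key=lambda r: r[0]))          # by sex
print(impact_ratios(candidates, key=lambda r: (r[0], r[1])))  # intersectional
```

A published audit would present these ratios as tables by sex, by race/ethnicity, and intersectionally; ratios well below 1.0 for a category are the signal an auditor would investigate.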


Q: Why are mediation and arbitration good options for employment-related cases?


Steve: Employment cases aren’t like car crashes where the parties usually never see each other again. They involve ongoing relationships. Even where employment relationships have ended, employers have to manage remaining employees who are familiar with the situation and live with whatever precedent has been set. These cases are perfect for the flexible, thoughtful resolutions that can be achieved through settlement at mediation.


Donna: There are a number of reasons, including the emotional investment in the case and the parties’ need to be heard, to vent, and to have those feelings acknowledged. Also, the time it takes to resolve a claim through the administrative process or litigation is often not feasible when lack of employment is at the root of the dispute and the need to move on is key.


Tanya: AI bias cases are particularly well suited for mediation, as there is almost no case authority for parties to rely upon when trying to value a case. This near-total lack of guidance makes litigation even more unpredictable than usual and makes walking clients through a risk-reward analysis virtually impossible.


ABOUT MILES MEDIATION & ARBITRATION

Miles Mediation & Arbitration is shaping the alternative dispute resolution (ADR) industry with our comprehensive professional services model that combines the expertise of our highly skilled, diverse panel of neutrals with an unparalleled level of client support to guide and empower parties to fair, timely, and cost-effective resolution regardless of case size, specialization, or complexity. For more information, please call 888-305-3553 or email support@milesmediation.com.
