The EEOC issues new guidance on the use of artificial intelligence in recruitment | Bricker Graydon LLP

On May 18, 2023, the Equal Employment Opportunity Commission (“EEOC”) issued guidance on the use of artificial intelligence (“AI”) in employee selection under Title VII of the Civil Rights Act of 1964 (“Title VII”), the federal law protecting employees and applicants against workplace discrimination based on race, color, religion, sex, and national origin. Specifically, this guidance (the “Title VII Guidance”) focuses on employers’ use of AI in selection procedures, considerations for assessing disparate impact on employees based on race, color, religion, sex, and/or national origin, and the risk employers assume when they use third-party AI products or rely on AI vendors to assist in recruitment.

The Title VII Guidance follows guidance the EEOC issued last year on the use of AI under the Americans with Disabilities Act (“ADA”) (the “ADA Guidance”). In launching the agency’s Artificial Intelligence and Algorithmic Fairness Initiative in 2021, EEOC Chair Charlotte Burrows stated: “Bias in employment arising from the use of algorithms and AI falls squarely within the Commission’s priority to address systemic discrimination.”

Both the Title VII Guidance and the ADA Guidance address practices involving AI that may conflict with those statutes as interpreted by the EEOC. Both define AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

Both the Title VII Guidance and the ADA Guidance provide the same examples of when an employer may use AI in an employment situation:

- Resume scanners that prioritize applications based on certain keywords
- Employee monitoring software that rates employees based on their keystrokes or other factors
- Virtual assistants or chatbots that ask candidates about their qualifications and reject those who do not meet certain predefined requirements
- Video interviewing software that evaluates candidates based on facial expressions and speech patterns
- Testing software that provides applicants or employees with “job fit” ratings based on their personality, skills, cognitive abilities, or perceived “cultural fit”

The Title VII Guidance focuses on when the use of AI may have a disparate or adverse impact on an employee or prospective employee based on race, color, religion, sex, or national origin. The guidance notes that an employer can assess whether its use of AI has a disparate impact on employee selection the same way it would assess any selection procedure: by examining whether the process has a substantially different impact on a particular protected category, using the traditional “four-fifths rule” as a helpful but not definitive tool. If it does, the practice is unlawful unless the employer can demonstrate that it is “job related and consistent with business necessity.”

As the EEOC explains, the “four-fifths rule,” also known as the disparate impact ratio, is a general rule of thumb and may not always be appropriate, but it can help employers determine whether the selection rate of one group differs “substantially” from the selection rate of another group. According to the EEOC, a selection rate is substantially different when the ratio for one group is less than four-fifths (80%) of the rate for another. In the AI context, the EEOC offers the example of an AI-scored personality test with a selection rate [1] of 30% for Black applicants and 60% for White applicants. In this scenario, the EEOC concludes that the selection rate for Black applicants is substantially different from the selection rate for White applicants, because 30/60 (50%) is less than 4/5 (80%), which could be evidence of discrimination against Black applicants. The EEOC acknowledges that the four-fifths rule is a rule of thumb, not law, and may not be appropriate in all circumstances. [2]
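Mechanically, the comparison reduces to dividing the lower group’s selection rate by the higher group’s and checking whether the result falls below 0.8. A minimal sketch in Python, using the example figures from the EEOC’s guidance (the function and variable names are ours, for illustration only):

```python
def selection_rate(selected, applicants):
    """Fraction of a group's applicants who advance in the process."""
    return selected / applicants

def four_fifths_check(rate_a, rate_b):
    """Divide the lower selection rate by the higher one.

    A ratio below 0.8 (four-fifths) flags a substantially different
    selection rate under the EEOC's rule of thumb.
    """
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < 0.8

# EEOC's example: 48 of 80 White applicants and 12 of 40 Black
# applicants advance to the next round.
white_rate = selection_rate(48, 80)   # 0.60
black_rate = selection_rate(12, 40)   # 0.30
ratio, flagged = four_fifths_check(black_rate, white_rate)
print(ratio, flagged)  # prints: 0.5 True
```

Because 0.5 is below the 0.8 threshold, the rule of thumb flags the tool for closer scrutiny; as the guidance stresses, this is an indicator, not a legal conclusion.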

Applying the same standards to the disparate impact of AI as to other selection procedures, the EEOC, as in the earlier ADA Guidance, takes the position that an employer is likely responsible for any disparate impact caused by AI, even if the AI was created or is administered by a third party. The EEOC noted that an employer choosing to rely on third-party AI, “at the very least, may want to ask the vendor whether [the vendor took steps] to assess whether the use of the AI results in substantially lower selection rates for individuals with a [protected] characteristic.” The EEOC further noted that even if the vendor assures the employer that its tool does not have a disparate impact on a protected category, the employer could still be liable if the vendor is wrong. In the same vein, the EEOC notes that an employer that develops its own AI to make employment decisions may be liable if it chose a version that had a disparate impact on one or more protected classes when it could have chosen a version with less disparate impact.

Finally, and unsurprisingly, the EEOC directs employers to continually evaluate any AI they use to ensure it is not having a disparate impact on a protected group, and to discontinue use of any AI that does have such an impact unless the employer can demonstrate that the use is “job related and consistent with business necessity.”

As employers consider new ways to attract talent, they should proceed with caution and ensure that those responsible for selecting and purchasing AI-enabled tools understand the potential risks associated with their use.

[1] “Selection rate” is the proportion of applicants or candidates who are hired, promoted, or otherwise selected. It is calculated by dividing the number of people hired, promoted, or otherwise selected from a group by the total number of candidates in that group. The EEOC gives this example in the Title VII Guidance:

Suppose 80 White applicants and 40 Black applicants take an algorithm-based personality test as part of a job application, and 48 of the White applicants and 12 of the Black applicants advance to the next round of the selection process. Based on these results, the selection rate for White applicants is 48/80 (60%), and the selection rate for Black applicants is 12/40 (30%).

[2] On January 31, 2023, the EEOC heard testimony related to this AI initiative, in which several academics testified about widespread misuse of the 45-year-old four-fifths rule and advocated abandoning it.
