The use of artificial intelligence in employee selection procedures: updated guidance from the EEOC

As we previously reported, the Equal Employment Opportunity Commission (“EEOC”) is aware of the potential harms that artificial intelligence (“AI”) can cause in the workplace. Some jurisdictions have already enacted requirements and restrictions on the use of AI decision-making tools in employee selection,[1] and on May 18, 2023, the EEOC updated its guidance on the use of AI for employment-related decisions by publishing a technical assistance document entitled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (the “Updated Guidance”). The Updated Guidance comes nearly a year after the EEOC published guidance explaining how employers’ use of algorithmic decision-making tools may violate the Americans with Disabilities Act (“ADA”). The Updated Guidance focuses instead on how the use of AI can implicate Title VII of the Civil Rights Act of 1964, which prohibits discrimination in the workplace based on race, color, religion, sex and national origin. In particular, the EEOC focuses on the disparate impact that AI can have on “selection procedures” for hiring, firing and promotion.

Background on Title VII

As brief background, Title VII was enacted to protect applicants and employees from discrimination based on race, color, religion, sex and national origin; it is also the law that created the EEOC. In its nearly 60 years of existence, Title VII has been interpreted to encompass protection from sexual harassment and from discrimination based on pregnancy, sexual orientation and gender identity, while companion statutes extend similar protections to disability, age and genetic information. Title VII prohibits discriminatory acts by employers in employment-related decisions, such as those relating to the hiring, firing, supervision, promotion and transfer of workers. There are two main categories of discrimination under Title VII: (1) disparate treatment, which concerns an employer’s intentional discriminatory decisions, and (2) disparate impact, which concerns unintentional discrimination arising from an employer’s patterns and practices. As mentioned above, the EEOC’s Updated Guidance focuses on the latter.

The EEOC’s updated guidance on using AI tools for decision-making

The Updated Guidance provides important information to help employers understand how the use of AI in “selection procedures” may expose them to Title VII liability, as well as some practical tips on how to limit that liability.

First, employers must understand whether they are using AI decision-making tools in “selection procedures” within the meaning of Title VII. The EEOC explains that a “selection procedure” is “any measure, combination of measures, or procedure” used as a basis for an employment decision. In other words, the EEOC treats as a selection procedure any decision made by the employer that affects a worker’s position in the company, from the worker’s application through to dismissal.

Examples of AI-powered decision-making tools that employers may use in selection procedures include:

- resume scanners that prioritize applications based on specific keywords;
- monitoring software that rates employees based on their keystrokes or other factors;
- “virtual assistants” or “chatbots” that ask applicants about their qualifications and reject those who do not meet predefined requirements;
- video interview software that evaluates candidates based on their facial expressions and speech patterns; and
- testing software that provides applicants or employees with “job fit” scores for personality, aptitude, cognitive ability, or perceived “cultural fit,” based on their performance on a game or a more traditional test.

Second, the EEOC explains how employers can, and should, screen their AI-driven selection procedures for adverse impact. If an AI-driven procedure causes members of a particular group to be selected at a “substantially” lower “selection rate” than members of another group, the employer’s use of that tool may violate Title VII. A “selection rate” is the proportion of applicants or candidates who are actually hired, promoted, fired, or otherwise selected; it is calculated by dividing the number of applicants or candidates selected from a given group by the total number of applicants or candidates in that group. As a general rule of thumb, a particular group’s selection rate is “substantially” lower if it is less than 80 percent, or four-fifths, of the selection rate of the most favored group. The EEOC aptly calls this the “four-fifths rule.” However, the EEOC warns that adhering to the four-fifths rule is no guarantee of a compliant selection procedure: “Courts have agreed that use of the four-fifths rule is not always appropriate, especially where it is not a reasonable substitute for a test of statistical significance.”[2]
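To make the selection-rate arithmetic concrete, here is a minimal Python sketch of the four-fifths rule check. It is illustrative only: the group labels and counts are hypothetical, and the EEOC does not prescribe any particular implementation.

```python
# Illustrative four-fifths rule check. All group names and counts are hypothetical.

# For each group: (number selected, total number of applicants)
applicants = {
    "Group A": (48, 80),  # hypothetical: 48 of 80 applicants selected (rate 0.60)
    "Group B": (12, 40),  # hypothetical: 12 of 40 applicants selected (rate 0.30)
}

# Selection rate = number selected / total applicants in the group
rates = {group: selected / total for group, (selected, total) in applicants.items()}

# Benchmark against the most favored group's selection rate
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    status = "below four-fifths threshold" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.2f}, ratio to best {ratio:.2f} ({status})")
```

In this hypothetical, Group B’s rate (0.30) is only 50 percent of Group A’s (0.60), well under the four-fifths threshold, so the tool would be flagged for further review. Keep in mind the caveat above: clearing this check is not a safe harbor.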

Third, the EEOC reiterated that, just as an employer can be held liable under the ADA for using AI decision-making tools developed or administered by third parties, it can likewise be held liable under Title VII. Relying on a software vendor’s assurances does not absolve an employer of liability if the tool results in a substantially lower selection rate for certain groups.

Finally, the Updated Guidance clarifies that employers should also assess their use of AI tools under the other steps of the Title VII disparate impact analysis, including “whether a tool is a valid measure of important job-related traits or characteristics.”

Practical tips for employers

- Require your employees to obtain approval before using algorithmic decision-making tools so that each tool can be carefully vetted. We have previously explained why employee policies should be updated to account for the use of AI tools.
- Conduct regular audits to determine whether the tools you use have a disparate impact and, if so, whether they measure relevant job-related skills and are consistent with business needs (a sketch of one such audit step appears after this list).
- Ask the software vendors of these tools to disclose what steps they took to assess whether use of the tool might cause a disparate impact, and specifically whether they relied on the four-fifths rule or on a standard of statistical significance, which courts may also apply.[3]
- Make sure your vendor contracts include adequate indemnification and cooperation provisions in case your use of the tool is challenged.
- Make sure your employees receive proper training on how to use these tools.
- If you outsource selection processes or rely on third parties to make employment-related decisions on your behalf, require them to disclose their use of AI decision-making tools so that you can properly assess your exposure.
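Because courts may look beyond the four-fifths rule to statistical significance, an audit can pair the rate comparison above with a significance test. The sketch below uses Fisher’s exact test from SciPy as one common choice; the Updated Guidance does not endorse any particular test, and the counts are hypothetical.

```python
# Hypothetical audit step: test whether a difference in selection rates
# between two groups is statistically significant. Fisher's exact test
# is one common choice; experts and courts may rely on other tests
# (e.g., a two-proportion z-test).
from scipy.stats import fisher_exact

# 2x2 contingency table (hypothetical counts):
#            selected   not selected
# Group A        48          32
# Group B        12          28
table = [[48, 32], [12, 28]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")

# A small p-value (commonly below 0.05) suggests the disparity is unlikely
# to be due to chance alone and warrants closer review.
if p_value < 0.05:
    print("Disparity is statistically significant; investigate further.")
```

Results from a check like this are a screening signal, not a legal conclusion; a flagged disparity should prompt review of whether the tool is job related and consistent with business necessity.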

Key takeaways

As AI continues to evolve at a rapid pace, employers must make adjustments to ensure they use the technology responsibly, lawfully and in a non-discriminatory manner. Although AI can speed up selection procedures and even reduce costs, relying on AI without due diligence can be problematic. Ultimately, employers, not software developers or vendors, are responsible for ensuring that the selection rate for any particular group of people is not substantially lower than that of others. Employers must scrutinize the selection procedures they use, from the application stage through transfer and separation. They should continue to review their use of these tools and ensure that their employee policies and vendor agreements are updated to minimize exposure to liability under Title VII and other employment laws. Where adjustments are required, employers should adapt and work with their vendors to ensure they use the least discriminatory methods available or can justify their choices as job related and consistent with business necessity.