By Pat Murphy, BridgeTower Media Newswires
BOSTON — As employers increasingly use automated systems to make decisions about hiring, firing, promotions and salaries, the U.S. Equal Employment Opportunity Commission is raising a red flag about the risks of discrimination based on disability, race, sex and age that arise when artificial intelligence technologies are used to manage the workplace.
On January 31, the EEOC held a public hearing in Washington entitled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.”
“The aim of this hearing was to educate a wider audience on the civil rights implications of the use of these technologies and to identify the next steps the Commission can take to prevent and eliminate unlawful bias in the use of these automated technologies by employers,” EEOC Chair Charlotte A. Burrows said in a statement. “We will continue to educate employers, workers and other stakeholders about the potential for unlawful bias so these systems do not become high-tech avenues of discrimination.”
Boston employment attorney Monica R. Shah shares those concerns.
“One of my concerns is that companies could use AI as a shield or as a means to [deflect] accountability for decision-making,” said Shah, who represents employees in discrimination cases. “Ultimately, the decision to take an adverse action against an employee is the responsibility of the company itself.”
Attorney Matthew H. Parker of Providence, Rhode Island, has never seen a case challenging an AI-based hiring decision. However, he recognizes that the risk of discrimination from reliance on such systems is real. And while plaintiffs’ attorneys may worry that employers will seek to use AI as a way to protect themselves from discrimination claims, Parker said it’s not that simple.
“While you can reduce the risk of intentional discrimination by relying on a computer to screen or rank employees, it’s very possible that your algorithm could have differential effects on employees in protected groups if you don’t have quality controls in place,” said Parker, whose employment law practice includes advising companies on hiring, firing, paying and managing employees.
Though the technology may be new, David I. Brody, president of the Massachusetts Employment Lawyers Association, says a plaintiff’s success in a case involving AI-based decision-making will turn on the familiar challenge of uncovering sufficient evidence of discriminatory animus.
“From what I’ve read, what really sets AI apart is that they’re eerily capable of being just as horrible as humans,” Brody said. “So if the AI is really trying to mimic the human approach, the bias will be reflected in the AI’s behavior as well. And there will be evidence of that.”
The public hearing conducted by EEOC was part of the agency’s AI and Algorithmic Fairness Initiative. Launched in October 2021, the initiative aims to ensure that the use of AI and other emerging technologies in employment decisions is consistent with federal civil rights laws.
Last May, the EEOC reached a major milestone in the program by issuing a technical assistance document addressing how the Americans with Disabilities Act applies to employers’ use of AI in workforce-related decision-making.
Shah said the new guidance is important because it highlights concerns about disability and reasonable accommodation when it comes to AI-driven assessments.
“They ensure that accommodations for employees who may be screened out due to disabilities are taken into account in AI systems,” Shah said. “It’s something that needs to be tracked and monitored.”
In its guidance, the EEOC adopts the definition of AI used in the National Artificial Intelligence Initiative Act of 2020. Under Section 5002(3) of the act, Congress defined AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
In the employment context, AI typically relies, at least in part, “on computational data analysis” to determine what criteria to use in employment decision-making, the EEOC guide explains.
“AI can include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems,” the technical document states.
The EEOC guidance defines “algorithm” as a set of instructions that a computer follows to accomplish a specific goal.
“Human resources software and applications use algorithms to enable employers to process data to evaluate, score, and make other decisions about job applicants and employees,” the document reads.
The EEOC guidance identifies the three most common ways in which an employer’s use of algorithmic decision-making tools “might” violate the ADA.
First, an ADA violation can occur when the employer fails to make reasonable accommodations necessary for an “applicant or employee to be fairly and accurately evaluated by the algorithm.”
Second, an employer may violate the ADA by relying on algorithmic decision-making aids that “intentionally or unintentionally” single out an individual with a disability even though that individual is able to do the job with reasonable accommodation.
Third, the EEOC technical guidance states that an employer’s algorithmic decision-making tool may conflict with the ADA’s restrictions on disability-related inquiries and medical examinations.
Brody compared potential lawsuits over AI to previous lawsuits over civil service exams.
“[Government employers] tried to make it a performance-based exam that was facially neutral, and [the tests] were eventually struck down [as] discriminatory in many ways,” Brody said. “I appreciate that AI is a new twist on an old problem, but just because there’s a metrics-based tool doesn’t mean [employers] suddenly protect themselves from bias.”
But for Shah, the use of AI in hiring decisions poses a real risk of misuse and abuse by employers.
“The problem with AI is that it’s only as good as the information it’s based on,” Shah said. “It can be affected by subjective decisions by managers who include discriminatory grounds in performance appraisals.”
Parker has identified a number of ways that bias can affect an automated system if the employer does not take necessary precautions.
For example, Parker pointed to AI algorithms that rely on metrics such as the regular performance reviews employees receive from their managers.
“Let’s say the supervisor rates employees from one to five once or twice a year and the computer [singles out] people who rank lower than three or four,” Parker said. “If the manager is inherently biased, then that bias is built into your algorithm.”
Because customers can also be biased, an algorithm that relies on customer reviews can be similarly flawed, he added.
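The rating-threshold scenario Parker describes can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor’s actual screening tool: the employee names, ratings, and the cutoff of three are invented for the example. The point is that the code itself contains no protected-class logic, yet any bias in the manager-supplied ratings passes straight through to the output.

```python
def screen(employees, cutoff=3.0):
    """Keep employees whose average manager rating meets the cutoff.

    employees: list of (name, list_of_ratings) tuples, ratings on a 1-5 scale.
    The screen looks "neutral," but it simply amplifies whatever bias
    is already present in the ratings it is fed.
    """
    return [name
            for name, ratings in employees
            if sum(ratings) / len(ratings) >= cutoff]

# Hypothetical data: if a biased manager systematically under-rates
# one employee, the algorithm faithfully screens that person out.
staff = [("employee_a", [4, 5]), ("employee_b", [2, 2])]
print(screen(staff))  # only employee_a survives the cutoff
```

The design point mirrors Parker’s warning: the discrimination risk lives in the input data (and in the choice of cutoff), not in anything the code visibly does.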
Likewise, Parker said, an algorithm that overemphasizes wages could produce results that lead to claims of discrimination.
“It can have a disparate impact because employees in certain protected groups earn less than white males due to the discrimination built into the system,” he said.
Vigilant human supervision
Parker said it’s crucial that employers verify the quality of data going into an algorithm.
“It’s important that once the algorithm has produced a result, you look at it critically and not just assume there’s no discrimination,” he said. “And you have to make sure it doesn’t have a disparate impact on employees in any given group.”
Certain metrics — including sales made, hours billed, or calls answered — seem to “speak for themselves,” Parker noted. However, he warned that even the most seemingly objective data must be checked by the employer to minimize the risk of discrimination claims.
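The disparate-impact check Parker recommends has a longstanding quantitative rule of thumb: under the EEOC’s Uniform Guidelines on Employee Selection Procedures, a selection rate for any group below four-fifths (80%) of the rate for the highest-selected group is generally regarded as evidence of adverse impact. A minimal sketch of that check follows; the group labels and numbers are hypothetical, and a real audit would also involve statistical significance testing and legal review.

```python
def adverse_impact(outcomes):
    """Apply the EEOC's four-fifths rule of thumb.

    outcomes: {group_label: (num_selected, num_candidates)}.
    Returns the groups whose selection rate falls below 80% of the
    highest group's rate -- a flag for possible adverse impact.
    """
    rates = {g: selected / candidates
             for g, (selected, candidates) in outcomes.items()}
    top_rate = max(rates.values())
    return [g for g, rate in rates.items() if rate < 0.8 * top_rate]

# Hypothetical results from an automated resume screen:
# group_b's rate (24%) is half of group_a's (48%), well under 80% of it.
results = {"group_a": (48, 100), "group_b": (24, 100)}
print(adverse_impact(results))  # ['group_b']
```

A check like this only flags outcomes; as the attorneys quoted here note, explaining or justifying the flagged disparity is where the legal exposure actually lies.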
In that regard, Shah said that an employee’s “poor” performance, as indicated by the “objective” data, could be the result of a biased manager not giving the employee the same quality of accounts or work assignments that other employees receive.
“The reason someone hasn’t answered as many calls could be because they have a disability and the employer needs to provide them with reasonable accommodation,” Parker said. “Theoretically, an objective data set that goes into the algorithm can result in a product that is implicitly biased.”
According to Shah, vigilant human oversight will be needed to ensure an employer’s AI system doesn’t become infected by bias.
“A lot of information can be fed into the system over the years, and the system develops and evolves based on that information,” she said. “The question is, will there be checkpoints — human managers who actually look at the system and make sure it’s operating objectively and without bias?”
Brody pointed out that human intervention is an aspect of AI decision-making that, while unavoidable, also gives plaintiffs’ attorneys the leverage they need to establish a case of discrimination.
“Human intervention will be part of every single hiring decision,” Brody said. “And once you’ve involved human intervention, you have the opportunity to be biased.”
He added that this dynamic is also at play in a downsizing, where the AI supplies the names of employees to be laid off based on a set of supposedly objective metrics.
“What metrics are chosen and what metrics are relied on can be the basis for discriminatory animus and become the cause of a lawsuit,” Brody said.