
Employment bar casts wary eye on AI
As employers increasingly use automated systems to make decisions regarding hiring, firing, promotions and pay, the U.S. Equal Employment Opportunity Commission is raising the red flag on the risks of disability, race, gender and age discrimination posed by reliance on artificial intelligence technology in managing the workplace.
On Jan. 31, the EEOC held a public hearing in Washington titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.”
“The goals of this hearing were to both educate a broader audience about the civil rights implications of the use of these technologies and to identify next steps that the commission can take to prevent and eliminate unlawful bias in employers’ use of these automated technologies,” EEOC Chair Charlotte A. Burrows said in a statement.
“We will continue to educate employers, workers and other stakeholders on the potential for unlawful bias so that these systems do not become high-tech pathways to discrimination.”
Joshua Van Kampen, a Charlotte employment attorney who practices in North and South Carolina, shares those concerns.
“I’m really worried about it,” Van Kampen says. “As a society, we haven’t solved the discrimination puzzle about how to root out implicit or explicit bias from hiring decisions or personnel decisions. The notion that we’re going to turn over these hiring decisions to artificial intelligence just seems like a recipe for problems.”
While AI technology in theory could be a helpful tool in minimizing bias, the reality has proven to be far more complex, according to Sean Herrmann of Charlotte’s Herrmann & Murphy.
“If there is AI that can take some of that human subjectiveness out [of the hiring process], in theory that would have a big impact on discrimination law,” Herrmann says. “If the goal is less discrimination, there are ways that I would think companies could use [AI] to eliminate some of these biases that people have. [But] I don’t know how you ever take the human element completely out of it. To the extent that there’s this push for ‘if there’s no human discriminating, then it can’t be discrimination’ — I’m extremely suspicious of that.”
And while many companies are claiming the goal of utilizing AI is to remove human bias from employment decision-making, Van Kampen isn’t buying those claims.
“The intent of these artificial intelligence hiring programs is not to root out discrimination; it is to save employers money by trimming down their talent acquisition personnel,” Van Kampen says. “It’s money driven. It’s not driven with any sort of societal purpose to help root out discrimination.”
While the technology may be new, David I. Brody, president of an employment lawyers’ association in Massachusetts, sees a plaintiff’s success in a case involving AI-based decision-making as coming down to the familiar challenge of uncovering sufficient evidence of discriminatory animus.
“From the reading I’ve done, what actually makes AI different is that it is eerily capable of being just as terrible as humans,” Brody says. “So if AI is truly attempting to mimic the human approach, then bias will be reflected in AI’s conduct as well. And there will be circumstantial evidence to show it.”
New guidelines
The public hearing conducted by the EEOC was part of the agency’s AI and Algorithmic Fairness Initiative. Launched in October 2021, the initiative is aimed at ensuring that the use of AI and other emerging technologies in making employment decisions complies with federal civil rights laws.
Last May, the EEOC reached a major milestone in the program by issuing a technical assistance document addressing how the Americans with Disabilities Act applies to an employer’s use of AI in its workforce decision-making.
Julian H. Wright Jr. of Robinson, Bradshaw & Hinson in North and South Carolina says concerns have been flagged about how AI-driven assessments account for disability and the need for reasonable accommodations.
“Very few if any of these programs or third-party vendors offering these services are geared toward making reasonable accommodations,” Wright observes. “If there’s no way to measure whether or not a person can do a job with a reasonable accommodation, then if a person does in fact need a reasonable accommodation, they’re not going to be one of the top five people generated by the program or the AI application.”
In its guidance, the EEOC adopts the definition of AI used in the National Artificial Intelligence Initiative Act of 2020. Under § 5002(3) of the act, Congress defined AI to mean a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
In the employment context, AI typically relies, at least in part, “on the computer’s own analysis of data” to determine which criteria to use when making employment decisions, the EEOC guidance explains.
“AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems,” the technical document states.
The EEOC’s guidance defines “algorithm” as a set of instructions followed by a computer to accomplish some end.
“Human resources software and applications use algorithms to allow employers to process data to evaluate, rate, and make other decisions about job applicants and employees,” the document states.
The EEOC guidance identifies the three most common ways in which an employer’s use of algorithmic decision-making tools “could” violate the ADA.
First, an ADA violation may occur when the employer fails to provide a reasonable accommodation necessary for a “job applicant or employee to be rated fairly and accurately by the algorithm.”
Second, an employer may violate the ADA by relying on algorithmic decision-making tools that “intentionally or unintentionally” screen out an individual with a disability, even though that individual is able to do the job with a reasonable accommodation.
Third, the EEOC’s technical guidance states that the employer’s algorithmic decision-making tool may run afoul of the ADA’s restrictions on disability-related inquiries and medical examinations.
Flawed algorithms
Van Kampen sees the use of AI in employment decisions as posing a tangible risk of misuse on the part of employers that could produce discriminatory outcomes.
“Who created the artificial intelligence? Who created the algorithm? A person did,” Van Kampen says. “Let’s say that an employer is trying to target an algorithm and has added certain data points that they think are pertinent for a candidate. That completely ignores the notion that the criteria that they’re searching for could have a disproportionate impact on people in particular minority groups.”
Van Kampen identifies a number of ways that bias can taint an automated system should the employer fail to take the necessary precautions.
For instance, he points to AI algorithms that prioritize candidates who attended four-year universities.
“A lot of people that come from challenged socio-economic backgrounds may have a community college degree — an associate’s degree — before a bachelor’s degree,” Van Kampen says. “Not because they’re not smart, [but] because they couldn’t afford a four-year college institution. If the folks that tend to go to community college before going to a four-year [school] are disproportionately African American or Latino, for example, and that is a data point in your selection algorithm, then you’re going to have a potential disproportionate negative impact on people that fall in those groups.”
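To make that mechanism concrete, consider a minimal sketch, with entirely invented numbers, of how a single screening criterion can produce the disparate impact Van Kampen describes. The check at the end applies the EEOC’s longstanding “four-fifths rule” of thumb, under which a selection-rate ratio below 0.8 is commonly treated as evidence of adverse impact:

```python
# Minimal sketch (all numbers invented): a screening rule that favors
# four-year degrees can select demographic groups at very different rates.

def selection_rate(candidates, passes):
    """Fraction of a group that the screening rule selects."""
    return sum(1 for c in candidates if passes(c)) / len(candidates)

# Hypothetical applicant pools: group B candidates more often hold an
# associate's degree first, for the economic reasons quoted above.
group_a = [{"four_year_degree": True}] * 80 + [{"four_year_degree": False}] * 20
group_b = [{"four_year_degree": True}] * 50 + [{"four_year_degree": False}] * 50

# The criterion in question: prioritize four-year degrees.
def passes(candidate):
    return candidate["four_year_degree"]

rate_a = selection_rate(group_a, passes)  # 0.80
rate_b = selection_rate(group_b, passes)  # 0.50

# EEOC "four-fifths rule": a selection-rate ratio below 0.8 is
# commonly treated as evidence of adverse (disparate) impact.
impact_ratio = rate_b / rate_a  # 0.625, below 0.8, so flagged
print(f"impact ratio = {impact_ratio:.3f}, "
      f"{'flagged' if impact_ratio < 0.8 else 'within guideline'}")
```

The criterion never mentions race, yet the selection rates diverge because the criterion is correlated with group membership, which is the essence of a disparate impact claim.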
Brody, meanwhile, analogizes potential litigation over AI to prior litigation over civil service examinations.
“[Government employers] tried to make it a performance-based examination that was facially neutral, and [the tests] ended up being held [as] discriminatory in a number of different ways,” Brody says. “I appreciate that AI is a new twist on an old problem, but just because there is some metrics-based tool in place doesn’t mean that suddenly [employers] are insulating themselves from bias.”
Vigilant human oversight
Wright believes it’s critical that employers be vigilant about what data is being fed into an algorithm.
“You’ve simply got to be careful about what your artificial intelligence is going to be learning from,” the North and South Carolina attorney says. “If that pool of information is defective or lacking in some way, then you’re going to get results kicked out that are lacking.”
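Wright’s garbage-in, garbage-out point can be illustrated the same way. In this invented sketch, a system that simply learns hiring rates from past decisions ends up automating whatever bias those decisions contained:

```python
# Hypothetical sketch: a model trained on past hiring decisions
# reproduces the bias embedded in that history.

# Invented training data: (candidate_attribute, was_hired).
# Past decision-makers favored attribute "X" regardless of merit.
history = (
    [("X", True)] * 90 + [("X", False)] * 10 +
    [("Y", True)] * 30 + [("Y", False)] * 70
)

def learned_hire_rate(attribute):
    """The 'model': predict hiring at the rate seen in the training data."""
    outcomes = [hired for attr, hired in history if attr == attribute]
    return sum(outcomes) / len(outcomes)

print(learned_hire_rate("X"))  # 0.9 -- yesterday's bias, now automated
print(learned_hire_rate("Y"))  # 0.3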
Though Van Kampen has not yet been called on to offer counsel in a case involving AI discrimination, he attributes the lack of legal inquiries to another issue.
“I think we [are] dealing with a clientele who are largely ignorant about the use of AI in the hiring process,” he says. “Right now, there are no federal laws requiring employers to disclose its use. So people that are subjected to artificial intelligence algorithms and facial recognition in hiring are very likely unaware of its use — and therefore ignorant as to its impact.”
Herrmann has similar concerns.
“I can see that it adds to this fiction that people are just cogs in the machine,” Herrmann says. “They’re not; they’re people.”