The use of artificial intelligence (AI) in employment decision-making is on the rise, with Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows stating that more than 80% of employers use this technology.
Employers can use software that incorporates algorithmic decision-making at various stages of the hiring process, such as resume scanning, video interviewing, and testing that provides “job fit” scores. AI, in the form of predictive algorithms, is often at the core of this software. Touted as an antidote to the implicit bias that can permeate human-driven hiring, AI was supposed to make hiring practices more neutral.
Federal Discrimination Concerns
In practice, however, AI has fallen short of eliminating bias. As McAfee & Taft labor and employment attorney Elizabeth Bowersox discussed in her article last May, the EEOC and the U.S. Department of Justice (DOJ) have already issued guidance cautioning employers about the intricacies of using AI in hiring and ensuring compliance with the Americans with Disabilities Act (ADA).
AI’s shortcomings aren’t limited to ADA concerns. Bias (and, in turn, the potential for discrimination) against applicants based on race, national origin, gender, age, and other protected characteristics can arise from the data used to create the algorithms that employers then rely on for hiring decisions.
According to the EEOC, the use of AI in the employment context typically means that the developer relies (at least in part) on the computer’s analysis of data to determine the set of criteria used when making employment decisions. But this use of AI can lead to biased results.
Take, for example, technology that reviews the resumes of an employer’s previous “successful” employees to derive the criteria the employer will then use to make hiring decisions.
If those previous “successful” employees largely shared the same characteristics (e.g., Caucasian males), the resulting hiring criteria may score similar resumes (i.e., those of other Caucasian males) higher than those of applicants of a different gender or race, simply because the resumes of Caucasian males are more likely to contain key terms similar to those found in the training data.
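To make that mechanism concrete, consider the following deliberately simplified sketch (not any vendor’s actual product) of a scorer that “learns” hiring criteria purely from the word frequencies of past hires’ resumes. The resume text and applicant profiles are invented for illustration:

```python
from collections import Counter

# Toy illustration only: a scorer that derives "hiring criteria" from
# the terms appearing in past "successful" resumes, then ranks new
# applicants by how many of those terms their resumes share.

def learn_criteria(successful_resumes):
    """Weight each term by how many past hires' resumes contain it."""
    weights = Counter()
    for resume in successful_resumes:
        weights.update(set(resume.lower().split()))
    return weights

def score(resume, weights):
    """Sum the learned weights for the terms a new resume contains."""
    return sum(weights[term] for term in set(resume.lower().split()))

# If past hires share a demographic profile, their resumes may share
# proxies for it (clubs, schools, hobbies), and the scorer will quietly
# reward those proxies in new applicants.
past_hires = [
    "captain varsity lacrosse team state university finance club",
    "varsity baseball state university finance club treasurer",
]
weights = learn_criteria(past_hires)

applicant_a = "varsity lacrosse state university finance club"    # resembles past hires
applicant_b = "womens chess society city college finance honors"   # comparable merit, different profile
print(score(applicant_a, weights))  # higher score via shared proxy terms
print(score(applicant_b, weights))  # lower score despite similar qualifications
```

The point of the sketch is that nothing in the code mentions race or gender; the skew enters entirely through the data the criteria were learned from.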
Currently, no federal law governs the use of AI in employment decision-making. Certain members of the U.S. House of Representatives, however, introduced the Algorithmic Accountability Act last year. The bill aims to bring transparency and oversight to the use of automated systems and would require companies to conduct impact assessments for bias.
State and City Actions
Further, states and cities have begun passing and implementing laws aimed at the use of AI in employment decision-making.
The Illinois Artificial Intelligence Video Interview Act (AIVIA), which imposes a variety of requirements on the use of AI-based video interview evaluation systems, took effect on January 1, 2020.
Under the AIVIA, any employer that asks applicants to record video interviews and uses an AI analysis of these videos must comply with disclosure requirements aimed at informing the applicant about the video interview evaluation system. The employer must also obtain the applicant’s consent to be evaluated by the AI program.
The AIVIA likewise limits with whom the employer may share an applicant’s video and requires the employer to delete an applicant’s interviews within 30 days of the applicant’s request. In addition, an employer that relies solely upon an AI analysis of a video interview to determine whether an applicant receives an in-person interview must annually report demographic data, including the race and ethnicity of applicants, to the Illinois Department of Commerce and Economic Opportunity.
As of October 1, 2020, Maryland restricted employers from using facial recognition services for the purpose of creating a facial template during an applicant’s interview unless the applicant consented in a written waiver.
New York City’s Local Law Int. No. 144, which went into effect on January 1, 2023, prohibits an employer or employment agency from using an “automated employment decision tool” to screen a candidate or employee for an employment decision unless the tool underwent a bias audit, and the employer made a summary of such results publicly available on its website prior to using the tool.
Local Law Int. No. 144 also implements notice and candidate opt-out requirements with which employers must comply, and it even imposes civil penalties of $500 to $1,500 for each violation.
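For a sense of the arithmetic at the core of such a bias audit, here is a minimal illustrative sketch using hypothetical screening numbers: compute each group’s selection rate, then its impact ratio relative to the most-selected group. (The actual audit categories and methodology are set by rule and are more detailed than this.)

```python
# Hypothetical screening results: (applicants screened, applicants advanced)
results = {
    "Group A": (200, 120),
    "Group B": (180, 60),
}

# Selection rate = share of each group's applicants who advanced.
selection_rates = {g: advanced / screened
                   for g, (screened, advanced) in results.items()}
top_rate = max(selection_rates.values())

# Impact ratio = a group's selection rate divided by the highest rate.
for group, rate in selection_rates.items():
    print(f"{group}: selection rate {rate:.0%}, impact ratio {rate / top_rate:.2f}")

# Group A: selection rate 60%, impact ratio 1.00
# Group B: selection rate 33%, impact ratio 0.56  <- well below the 0.80
# benchmark often applied under the EEOC's four-fifths rule of thumb
```

Local Law Int. No. 144 itself requires disclosure of these figures rather than imposing a numeric pass/fail threshold, but a low impact ratio is exactly the kind of result the audit is designed to surface.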
Illinois, Maryland, and New York City are just a few examples of jurisdictions that have taken steps to restrict the use of AI in employment decision-making, with others, including Washington, D.C., looking to follow suit.
Takeaways for Employers
The use of AI in employment decision-making, and specifically the hiring process, has multiple benefits for employers. These tools can streamline aspects of the hiring process for increased efficiency and help remove the implicit bias held by human reviewers.
Employers must be aware, however, of the pitfalls associated with these tools, including accessibility and ADA-compliance issues as well as unintended bias stemming from the data used to create the hiring criteria.
As states and cities continue to pass laws aimed at the use of AI in employment decision-making, employers must be aware of these concerns not only to remain in compliance with any applicable legal requirements, but also to ensure they are hiring a diverse, highly qualified workforce.
Alyssa Lankford is an attorney with McAfee & Taft in Oklahoma City, OK. She can be reached at alyssa.lankford@mcafeetaft.com.