
Discriminating Droids: What Employers Should Know About Artificial Intelligence

A growing number of employers are turning to artificial intelligence (AI) to help select the best job candidates. Although AI can ease hiring decisions by reducing the work required to find a great employee, commentators are increasingly concerned that it can produce discriminatory or disparate outcomes.

How AI Works

Although some AI programs may sound like science fiction, companies are already using them. Here are some examples:

  • Some online systems search through social media profiles for desirable characteristics to identify job candidates.
  • Others use keyword searches of resumes or more complex evaluations to compare and rank the materials candidates submit with their applications (a simplified sketch of the keyword approach follows this list).
  • Rather than conducting screening interviews in person, some companies use chatbots for the initial screening contact, or record candidates answering interview questions and rely on AI programs to analyze the video.
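
To make the resume-screening bullet concrete, here is a highly simplified sketch of how a keyword-based screener might score and rank resumes. The keywords, weights, and candidate materials are invented for illustration and don't reflect any particular vendor's system:

```python
# Highly simplified sketch of a keyword-based resume screener.
# The keywords, weights, and resumes below are invented for illustration.

KEYWORDS = {
    "python": 3,
    "sql": 2,
    "project management": 2,
    "leadership": 1,
}

def score_resume(text: str) -> int:
    """Sum the weights of every keyword that appears in the resume."""
    text = text.lower()
    return sum(weight for keyword, weight in KEYWORDS.items() if keyword in text)

def rank_candidates(resumes: dict[str, str]) -> list[tuple[str, int]]:
    """Rank candidates from highest to lowest keyword score."""
    return sorted(((name, score_resume(text)) for name, text in resumes.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    resumes = {
        "Candidate A": "Built Python and SQL reporting pipelines.",
        "Candidate B": "Five years of project management and team leadership.",
    }
    print(rank_candidates(resumes))
```

Real screening tools are far more sophisticated, but even this toy version illustrates the core issue: whoever chooses the keywords and weights decides who rises to the top.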

Real-World Examples of Discrimination in Automated Systems

It might seem counterintuitive that turning your hiring decisions over to a seemingly neutral and bias-free computer system could lead to discriminatory outcomes, but the programs aren’t perfect. They are developed and trained by humans who may have unconscious biases that the artificial intelligence system “learns” and applies.

For example, during a review of a resume screening tool, one company discovered the program had identified the two factors most predictive of a recommendation: the applicant was named Jared, and he had played lacrosse in high school. Those biases, unintentionally built into the program, had the potential to disadvantage women and applicants with disabilities.

In 2015, Amazon decided to limit its use of AI in hiring decisions after discovering its algorithm was biased against women. The system had been trained on resumes from past company hires to predict whom it should hire next. The problem: those resumes came overwhelmingly from male applicants, so the system learned to prefer applications from men.
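
A toy example shows how that can happen. In the sketch below, the "hired" resumes used for training skew toward one group, so a token that correlates with the other group picks up a negative weight even though gender itself is never an input (reporting on the Amazon tool indicated it penalized the word "women's," as in "women's chess club"). The resumes and the scoring method are invented purely to illustrate the mechanism:

```python
import math
from collections import Counter

# Toy illustration of training-data bias; all resumes below are invented.
# Because the "hired" pile skews toward one group, tokens that correlate
# with the other group end up with negative weight.

hired = [
    "software engineering captain chess club",
    "software engineering hackathon winner",
    "engineering intern chess club",
]
rejected = [
    "software engineering women's chess club",
    "engineering intern women's coding society",
]

def token_weights(positive, negative, smoothing=1.0):
    """Naive log-odds weight per token; positive weight favors 'hired'."""
    pos = Counter(tok for doc in positive for tok in doc.split())
    neg = Counter(tok for doc in negative for tok in doc.split())
    vocab = set(pos) | set(neg)
    pos_total = sum(pos.values()) + smoothing * len(vocab)
    neg_total = sum(neg.values()) + smoothing * len(vocab)
    return {tok: math.log((pos[tok] + smoothing) / pos_total)
               - math.log((neg[tok] + smoothing) / neg_total)
            for tok in vocab}

weights = token_weights(hired, rejected)
# The most negative tokens are the ones the "model" learned to penalize;
# here that includes "women's", even though gender was never an input.
for tok, weight in sorted(weights.items(), key=lambda kv: kv[1])[:3]:
    print(f"{tok}: {weight:+.2f}")
```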

EEOC’s Response

In response to concerns that AI may cause discrimination, the Equal Employment Opportunity Commission (EEOC) launched an initiative in 2021 to examine the use of AI in hiring and other employment decisions. The initiative is gathering information about the use of technology in workforce decisions and identifying promising practices, and the agency will work to provide guidance.

EEOC Chair Charlotte A. Burrows stated, “The EEOC is keenly aware that [AI and algorithmic decision-making tools] may mask and perpetuate bias or create new discriminatory barriers to jobs.” She explained the agency’s initiative seeks to ensure the technology doesn’t become “a high-tech pathway to discrimination.”

Steps for Employers

Do you face risks if you turn to automated systems to make important employment decisions? The short answer is yes, but there are steps you can take to review the choices AI programs make and protect your organization from costly employment discrimination litigation:

  • If your company uses AI to make employment decisions and doesn’t want to end up with a discrimination lawsuit (or a disproportionate number of lacrosse-playing guys named Jared), review and audit the outcomes to make sure you’re complying with federal and state antidiscrimination laws (one common first-pass audit is sketched after this list).
  • If your company contracts with vendors using AI, consider discussing your antidiscrimination obligations with them and learning what steps they’re taking to reduce bias in their systems.
  • Stay alert for new guidance from the EEOC initiative related to technology and AI.
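
For the audit bullet above, one common first-pass check comes from the EEOC's Uniform Guidelines on Employee Selection Procedures: the "four-fifths rule," under which a group's selection rate below 80% of the highest group's rate is generally regarded as evidence of possible adverse impact. It's a rough screening guideline rather than a legal safe harbor, but it's easy to compute. A minimal sketch with hypothetical group names and numbers:

```python
# Minimal sketch of a four-fifths-rule check on screening outcomes.
# Group names and counts are hypothetical; the four-fifths rule is a
# rough screening guideline, not a legal safe harbor.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio vs. the highest-rate group.

    Ratios below 0.8 suggest possible adverse impact and merit review.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

if __name__ == "__main__":
    # (selected, total applicants) per group -- hypothetical numbers
    outcomes = {"Group A": (48, 100), "Group B": (30, 100)}
    for group, ratio in four_fifths_ratios(outcomes).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 doesn't prove discrimination, and a ratio above it doesn't rule it out, but flagged results are a sensible trigger for a deeper review with counsel.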

Sarah Otto is an associate with Foulston Siefkin LLP in Overland Park, Kansas. You can contact her at sotto@foulston.com.
