Artificial intelligence (AI) has applications across many industries. While some fear that AI could one day cost them their jobs, either by making their roles obsolete or by automating the tasks those roles involve, many people are unaware that AI may play a key role in getting them a job.
That’s because HR departments around the world have increasingly turned to AI technologies in the recruitment process. Using AI tools to screen applicant résumés, for example, can cut down on mundane manual work. And, because AI isn’t human, these technologies should also cut down on bias in the hiring process. Right? Maybe not.
Biased Machine Intelligence
Andrew R. McIlvaine explains the theory in favor of AI’s potential for reducing bias in hiring: “[b]y using algorithms to identify people who are highly qualified for a certain job and whose social media activity suggests they’d be open to a new opportunity, companies could avoid the pitfalls of biased recruiters and hiring managers who might balk at bringing on someone from a different background, race or gender.”
However, it’s important to keep in mind that AI is not—at least not yet—sentient enough to come up with its own criteria for what identifies a qualified candidate. The inputs are driven by humans, and those humans can easily insert their own biases into the AI, knowingly or unknowingly.
McIlvaine cites one example of an algorithm that looked favorably on applicants who, based on their social media activity, played lacrosse or tennis or read Harry Potter books. These might seem like innocuous criteria; however, they could easily favor certain racial groups or genders over others.
Staying Alert to Bias
AI may well be a useful tool in making the hiring process more efficient, but it’s essential to make sure this is done with an eye to avoiding potential bias. As McIlvaine notes, one of the best ways to do this is through an iterative process, where HR professionals regularly evaluate the results of their AI screenings for signs of disproportionate results favoring or disfavoring different groups.
For example, what proportion of your new hires come from at-risk groups, such as women, people of color, and older workers? What proportion of your promotion decisions reflect the same?
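The kind of check described above can be sketched in a few lines of code. The sketch below is illustrative, not from the article: it assumes hiring records labeled with a (hypothetical) group name and a hired/not-hired flag, computes each group's selection rate, and compares rates against the highest-rate group. The 0.8 threshold is the "four-fifths rule," a common benchmark from US employment guidance for flagging possible adverse impact; the article itself does not prescribe a specific threshold.

```python
# Minimal adverse-impact check on hiring outcomes (illustrative sketch).
from collections import Counter

def selection_rates(records):
    """Return the hire rate for each group, given (group, was_hired) pairs."""
    hired, total = Counter(), Counter()
    for group, was_hired in records:
        total[group] += 1
        if was_hired:
            hired[group] += 1
    return {g: hired[g] / total[g] for g in total}

def impact_ratios(rates):
    """Compare each group's hire rate to the highest-rate group.

    Ratios below 0.8 are often flagged for review (the "four-fifths rule").
    """
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results: group labels and data are made up.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(records)    # group_a: 0.75, group_b: 0.25
flagged = {g: r for g, r in impact_ratios(rates).items() if r < 0.8}
```

Run regularly as part of the iterative process McIlvaine describes, a check like this turns a vague worry about bias into a concrete number HR teams can track over time.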
Looking critically at the numbers can offer insights into where bias may be occurring, providing an opportunity to reconsider the algorithms driving decisions. Clearly there is still a role for humans in training AI and helping it become bias-free.