
Q&A: AI Is Only as Ethical as the Humans Who Implement It

Artificial intelligence (AI) is already being used across every aspect of organizations, and its use will only increase as the years pass. One question I rarely hear in the face of these advancements is: What are the ethics of using AI?


I recently spoke with Aaron Crews, Chief Data Analytics Officer at Littler, about the ethical ramifications of AI as they relate to a recent report his company released.

HR Daily Advisor: According to your recent report, 25% of respondents say they use AI to screen résumés or applications. How can employers be sure that their AI actually helps find the best candidates?

Crews: The reality here is that the proof is in the pudding. AI-based recruiting systems are generally trying to match skills—be they specifically stated or inferred from an applicant’s job history—with the requirements set forth in a job description. Many of these systems then prioritize or “force rank” applicants based on scores generated by assessing, for each candidate, how well his or her skills and experience fit the job’s requirements. If users of these systems still have to wade through 15, 20, or more candidates in order to find the “right” candidate, then the system probably isn’t working very well. However, if the “right” candidate for a position is found in the first handful of candidates, then it is likely working fairly well from a “fit” standpoint. Repeated use of an AI-based tool that consistently returns quality candidates in the upper echelon of options is pretty good evidence of value.
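
To make the matching-and-ranking process Crews describes concrete, here is a minimal sketch in Python. It scores hypothetical candidates by how much of a job's required skill set they cover, adds a small experience bonus, and orders them best-fit first. The skill sets, the 80/20 weighting, and every name in the code are illustrative assumptions, not details of any actual recruiting product.

```python
# Illustrative sketch: score candidates by overlap between a job's required
# skills and each candidate's stated skills, then "force rank" by score.
# All names, weights, and data here are hypothetical.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    skills: set[str]
    years_experience: int


def fit_score(candidate: Candidate, required_skills: set[str], min_years: int) -> float:
    """Fraction of required skills the candidate has, plus a small,
    capped experience bonus so skills dominate the score."""
    if not required_skills:
        return 0.0
    skill_match = len(candidate.skills & required_skills) / len(required_skills)
    experience_bonus = min(candidate.years_experience / max(min_years, 1), 1.0)
    return 0.8 * skill_match + 0.2 * experience_bonus


def rank_candidates(candidates, required_skills, min_years):
    """Return candidates ordered best-fit first ("force ranking")."""
    return sorted(candidates,
                  key=lambda c: fit_score(c, required_skills, min_years),
                  reverse=True)


if __name__ == "__main__":
    required = {"sql", "excel", "reporting"}
    pool = [
        Candidate("A", {"sql", "excel", "reporting"}, 3),
        Candidate("B", {"excel"}, 1),
        Candidate("C", {"sql", "python", "reporting"}, 5),
    ]
    for c in rank_candidates(pool, required, 2):
        print(c.name, round(fit_score(c, required, 2), 2))
```

By Crews' test, a system like this is pulling its weight only if the candidates a recruiter would actually hire routinely land near the top of that ranked list.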

HR Daily Advisor: How do organizations approach using such AI ethically?

Crews: The biggest risks in the use of these types of tools are that they a) aren’t well trained and therefore don’t really work and b) have been trained on data that lead to impermissible bias in the selection process. Organizations that want to use these tools in an ethical manner can take a few steps to ensure that occurs. First, you can pilot the tool to see that it, in fact, returns candidates you could or would hire for a given position. That tells you whether the system under consideration is functionally valuable.

Second, before deploying these types of tools, organizations can spend time thinking hard about the job descriptions they are going to use. They can focus on making sure that the requirements for a given job really are necessary. For instance, if you are recruiting janitors, would a college degree really be a requirement for that job? The immediate answer for most people is “no,” and we normally wouldn’t include a college degree as a requirement for that position. However, people often think less about this kind of issue when it comes to entry-level, white-collar jobs. If an employee is going to be doing mostly data entry, does he or she really need a degree? Probably not, but you might be surprised at the number of data entry jobs that list a degree as a necessary qualification. Creating job descriptions with an eye toward what is truly necessary is a recommended practice in this space.

Third, organizations deploying these tools can focus on preventing adverse impacts and impermissible bias from creeping into the algorithm. Generally speaking, this means there should be regular testing of the tool’s output for adverse impacts. Where impermissible bias is detected, the cause of that bias needs to be identified, and corrective measures—known as de-biasing—need to be taken. If the model can’t be de-biased, it should be dropped.

HR Daily Advisor: AI is often lauded as being indiscriminate. However, it would be really easy for someone selecting for certain skills or backgrounds to accidentally discriminate against a protected class. How can organizations make sure that they are not discriminating and not opening themselves up to legal claims?

Crews: The case law around the use of these tools is currently unsettled. That is because our antidiscrimination laws and the related case law come out of a world based on a theory of causation. AI-based tools all operate off of correlations, which sometimes don’t make a lot of sense to the human mind. As a result, there is some level of risk in using these types of tools, which is simply endemic to them (currently). However, organizations can decrease the risk associated with AI in HR by routinely testing outputs for bias. This is generally called “adverse impact testing,” and it is a recommended practice with respect to employee selection tools in general, irrespective of whether they are powered by algorithms.
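
Adverse impact testing lends itself to a simple illustration. The Python sketch below computes selection rates by group from a screening tool's output and flags any group whose rate falls below four-fifths (80%) of the highest group's rate; that threshold is a commonly referenced benchmark in employee selection, used here only as an illustrative assumption, since the interview does not prescribe a specific test or cutoff.

```python
# Minimal sketch of adverse impact testing on a screening tool's output:
# compare selection rates across groups and flag any group whose rate falls
# below a chosen fraction (here 0.8, the commonly cited "four-fifths" mark)
# of the highest group's rate. Data and group labels are hypothetical.

from collections import Counter


def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, where selected is a bool.
    Returns {group: selection_rate}."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, ok in outcomes if ok)
    return {group: selected[group] / applied[group] for group in applied}


def adverse_impact_flags(outcomes, threshold=0.8):
    """Return {group: impact_ratio} for groups below the threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best
            for group, rate in rates.items()
            if rate < threshold * best}


if __name__ == "__main__":
    screened = ([("group_a", True)] * 40 + [("group_a", False)] * 60
                + [("group_b", True)] * 20 + [("group_b", False)] * 80)
    print(selection_rates(screened))       # {'group_a': 0.4, 'group_b': 0.2}
    print(adverse_impact_flags(screened))  # {'group_b': 0.5}
```

Running a check like this on every cycle of the tool's output, and acting on what it finds, is the routine testing Crews recommends.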

In addition, organizations can choose technologies that are transparent and explainable as opposed to “black box.” This means we can go back and understand the process by which the machine made a particular recommendation. That way, if there is a claim that a particular decision was discriminatory in a manner prohibited by law, the organization can defend itself by showing process and outcome. Coupled with routine adverse impact testing, the use of well-crafted job descriptions, and good logging (aka recordkeeping), transparent and explainable technology will go a long way toward lowering risk.
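
On the logging point, a minimal sketch of what such recordkeeping might look like follows: each recommendation is written out with the inputs the model saw, the score it produced, and the model version, so the process behind a decision can be reconstructed later. The field names and JSON-lines format are assumptions made for illustration, not a description of any particular product's audit trail.

```python
# Illustrative audit logging for an AI screening recommendation: append one
# JSON record per recommendation so decisions can be reconstructed and
# defended later. Field names and file format are hypothetical.

import json
import time


def log_recommendation(log_path, candidate_id, job_id, features, score, model_version):
    """Append one audit record for a single recommendation as a JSON line."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "candidate_id": candidate_id,
        "job_id": job_id,
        "features": features,          # the inputs the model actually saw
        "score": score,                # the fit score it produced
        "model_version": model_version,
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_recommendation("screening_audit.jsonl", "cand-123", "job-45",
                       {"skill_match": 0.67, "years_experience": 3},
                       0.71, "v2.3")
```

Paired with routine adverse impact testing and well-crafted job descriptions, a record like this is what lets an organization show both process and outcome.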

HR Daily Advisor: Getting to know a candidate is still a critical component of recruiting. How does AI serve that approach?

Crews: Realistically, AI-based tools help in the “getting to know the candidate” space by cutting the number of applicants recruiters and hiring managers need to connect with and talk to in order to fill a position. If AI-based tools can quickly cut a large applicant pool down to a small pool of candidates who are potentially really good fits for a given position, recruiters and hiring managers can spend more time getting to know those candidates instead of sorting through piles of résumés to identify them in the first place.

HR Daily Advisor: Let’s say all of the problems with AI that I’ve raised are addressed: How can such a tool help employers?

Crews: These tools allow HR professionals, particularly recruiters and hiring managers, to scale themselves. Essentially, they act as “force multipliers,” allowing those charged with locating and hiring great employees to do more with the hours and budgets they have.
