Over the past six months, the acceleration of artificial intelligence (AI), particularly in the form of large language models (LLMs) like OpenAI’s GPT-4, has been relentless. These models have demonstrated capabilities that would have seemed like science fiction just a few years ago, such as the ability to instantly produce fluent, compelling text on a vast range of subjects (and in countless formats, from poetry to song lyrics).
GPT-4 has also demonstrated exceptional performance on cognitive tasks, scoring in the 90th percentile on the Uniform Bar Exam and in the 99th percentile on the Biology Olympiad. The commercial applications of such powerful technology are endless, which is why many industries, including HR, are exploring how to deploy AI. However, the hazards these models present are becoming more visible by the day. They often “hallucinate,” confidently producing inaccurate information; they can be biased and even manipulative; and they raise serious copyright concerns because they’re trained on a staggering amount of online data.
It’s no surprise that AI is attracting regulatory and legal scrutiny, a trend that will almost certainly gain momentum in the coming months and years. This is especially true in fields like HR, where fairness and transparency are core principles. With these stakes in mind, let’s take a look at how HR professionals can safely navigate the AI era.
AI Will Be Heavily Regulated
Despite recent calls from experts in the field, technologists, and entrepreneurs for a pause on AI development, the history of technological innovation demonstrates that such a pause is an illusory goal. But given the well-founded fears surrounding AI and the rapid advancement and adoption of the technology, significant regulation is inevitable. This doesn’t just mean new regulations, either: existing laws around issues like copyright and discrimination already apply to AI-powered tools in HR and every other field.
A recent joint statement by the Consumer Financial Protection Bureau (CFPB), the Department of Justice Civil Rights Division, the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) observes that “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.” This statement has clear implications for HR professionals. For example, the EEOC recently issued technical guidance outlining how the Americans with Disabilities Act (ADA) “applies to the use of software, algorithms, and AI to make employment-related decisions.”
The joint statement notes that AI systems have the “potential to produce outcomes that result in unlawful discrimination.” AI is also capable of generating a huge amount of misinformation, which can lead to confusion and frustration among candidates and create legal liabilities. These are urgent reminders that the advantages of AI need to be carefully weighed against the risks.
How HR Teams Can Use AI Responsibly
AI is a powerful tool for hiring managers and other HR professionals, but it has to be used with full awareness of the dangers it poses. This means deploying AI selectively rather than relying on it too heavily, and ensuring that every use case includes rigorous human oversight. There’s no reason for HR teams to avoid AI altogether: it can improve efficiency, help recruiters deepen talent pools and identify promising candidates, and automate tedious, error-prone manual processes. But these advantages are all the more reason to use it scrupulously.
According to a recent survey conducted by the Society for Human Resource Management (SHRM), 69% of hiring professionals who use AI say it reduces the time necessary to fill open positions. But 54% cite challenges with the technology, such as an inability to “properly audit or correct AI algorithms and … inadvertently overlooking or excluding qualified applicants or employees.” Given that the top benefits of AI cited by hiring professionals include the ability to “automatically filter out unqualified applicants” and find better candidates, it’s alarming that the decision-making process is so opaque.
Government agencies have made it clear that inscrutable AI processes are no excuse for discrimination or any other violation of the law. This is why recruiters and other HR professionals can’t afford to make AI an integral part of their hiring process without strict controls and transparency that will keep them compliant with all relevant laws and regulations.
Fully and Safely Leveraging AI in Hiring
AI presents a unique opportunity for HR teams. By making the talent search broader and more systematic, AI can help companies consider a more diverse array of candidates and fill open positions quickly. From sifting through large quantities of data on candidates who previously applied to the company (talent rediscovery) to filtering mechanisms that let hiring managers make searches more refined and efficient, AI has already proven beneficial for HR teams. Of the hiring managers who use AI, 68% say they have more applications to review, and 59% say the quality of those applications has improved.
That said, if AI creates inequities in the hiring process or provides candidates with erroneous information, it won’t just cost HR teams potential employees; it could also result in lawsuits, fines, and other penalties. Beyond the immediate financial damage, this has the potential to impede a company’s operations and inflict severe reputational harm. HR teams can avoid these consequences by ensuring AI is always complemented by human intelligence in the hiring process. That means frequently and thoroughly auditing AI hiring platforms to determine whether they’re fair and predictive (one common check is sketched below), using assessments and structured interviews to draw objective conclusions about candidates, and ensuring a human being is involved in the final decision.
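To make the auditing step concrete, here is a minimal sketch of one widely used screening check: the EEOC’s “four-fifths rule,” under which a group’s selection rate below 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The group labels and counts below are hypothetical, the function names are illustrative rather than part of any specific platform, and this is only one facet of a real audit, which would also examine predictive validity and other outcomes.

```python
# Hypothetical sketch of an adverse-impact check based on the
# EEOC's four-fifths (80%) rule. All counts are made up.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio vs. the highest selection rate.

    `groups` maps a group label to (selected, applicants).
    Assumes at least one group has a nonzero selection rate.
    A ratio below 0.8 flags potential adverse impact.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical outcomes from an AI-screened applicant pool.
outcomes = {
    "group_a": (48, 120),  # 40% selection rate
    "group_b": (27, 90),   # 30% selection rate
}

for group, ratio in four_fifths_check(outcomes).items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

In practice, a check like this would run on actual screening outcomes at regular intervals, with any flagged disparity triggering a deeper human review of the algorithm’s inputs and decisions rather than serving as a verdict on its own.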
While AI has the potential to be a major asset for recruiters, it can also be an even bigger liability. To maximize the value of AI while minimizing the risk, HR professionals need to subject it to a consistent and robust review process and align it with other effective hiring tools and their own best judgment.
Dr. Matthew Neale is VP of Assessment Products at Criteria.