As artificial intelligence (AI) technologies grow in sophistication, businesses are increasingly turning to them to streamline processes, increase efficiency, and improve productivity. HR professionals are under pressure to integrate AI tools into their recruitment, training, and performance evaluation processes to keep pace with this rapidly evolving landscape.
While AI promises many benefits, there are also potential pitfalls to be avoided. For example, consider the risk that AI-powered recruitment tools may magnify existing biases in the hiring process or that AI-powered performance evaluation tools may not account for important contextual factors that impact employee performance.
To address these issues, HR professionals must be mindful of the use of AI in their processes, getting certain basics right first, such as:
- Establishing transparency in decision-making
- Clarifying the organization’s antidiscrimination and inclusivity objectives
- Ensuring the company is already avoiding bias and discrimination
- Solidifying a fair process for assessing people with disabilities
The Importance of Transparency
When incorporating AI technologies into HR processes, it’s essential to maintain transparency, and a critical aspect of this is ensuring AI-generated selection decisions are based on reliable, valid, and fair assessment results.
Reliability refers to the consistency of test results over time and across different testing conditions. Validity, on the other hand, refers to the extent to which a test measures what it’s intended to measure. In the context of HR, this means that any assessment or test used to inform AI-generated selection decisions must be both reliable and valid.
Transparency can also mean being open about how the AI-generated selection decisions are made, including the factors that are taken into account and the weight given to each factor. This may not be straightforward when using a system based on machine learning, but you should avoid treating an AI platform as a “black box” as much as possible.
Incorporating transparency into the AI-generated selection process can also help promote fairness. This means ensuring candidates understand how their performance is being evaluated and providing them with feedback on the assessment criteria.
Clarifying Inclusivity and Antidiscrimination Objectives
Before deploying AI tools in HR processes, it’s crucial for organizations to ensure their inclusivity objectives are clear. This means scrutinizing their values and goals and how they align with HR’s use of AI. By doing so, HR professionals can identify potential areas of bias that may arise from the use of AI and take steps to mitigate these risks.
Also, be sure to confirm the AI tools you deploy are suitable for use with all employees, regardless of their background, characteristics, or experiences. This may involve adjusting the design of the AI tools or selecting different tools altogether to ensure they don’t unfairly discriminate against any particular group.
Finally, be sure the organization’s inclusivity objectives are communicated clearly to all stakeholders. This means providing training and support to employees and managers on the use of AI tools in HR processes and emphasizing the importance of fairness and inclusivity. By doing so, organizations can ensure HR’s use of AI is aligned with their broader goals and values and doesn’t negatively impact their workforce or culture.
Make Sure AI Isn’t Simply Expanding on Human Biases
AI tools take a function that has traditionally been performed by human beings and do it faster and at greater scale. However, if that function is faulty at its core, those faults may be magnified in both speed and scale. It's therefore crucial to ensure AI tools don't simply build on preexisting biases.
One way to do this is to make sure the organization’s antidiscrimination policies are up to date and followed and the data used to train AI models is representative of the organization’s workforce. Account for factors such as age, gender, ethnicity, and education level, among others. HR professionals can work with data scientists to ensure the data used in the AI models is diverse and representative of the entire workforce rather than skewed toward any particular group.
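As a rough illustration of the kind of representativeness check a data scientist might run, the sketch below compares group proportions in the training data against the workforce and flags large gaps. The function name and the tolerance threshold are illustrative assumptions, not a legal or statistical standard.

```python
from collections import Counter

def representation_gaps(training_labels, workforce_labels, tolerance=0.05):
    """Compare group proportions in AI training data against the overall
    workforce; flag groups whose share differs by more than `tolerance`.
    (The 5% tolerance is an illustrative choice, not a formal standard.)"""
    train_counts = Counter(training_labels)
    work_counts = Counter(workforce_labels)
    n_train, n_work = len(training_labels), len(workforce_labels)
    gaps = {}
    for group in set(train_counts) | set(work_counts):
        t = train_counts[group] / n_train
        w = work_counts[group] / n_work
        if abs(t - w) > tolerance:
            gaps[group] = round(t - w, 3)  # positive = overrepresented in training
    return gaps

# Example: women make up 50% of the workforce but only 30% of the
# historical hiring data used to train the model.
training = ["M"] * 70 + ["F"] * 30
workforce = ["M"] * 50 + ["F"] * 50
print(representation_gaps(training, workforce))
```

A check like this won't catch every form of skew (intersectional gaps, for instance, need finer-grained groups), but it gives HR and data-science teams a shared, concrete starting point for the conversation.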
Also, be sure the models used in the AI systems are transparent and explainable rather than black boxes, so you can see how they arrive at their decisions and identify potential biases in the process.
Finally, don’t go on autopilot. Be sure the AI tools being used are continuously monitored for potential bias, and regularly analyze the data and decision-making processes to identify any patterns or trends that may be indicative of bias. You can then work with data scientists to adjust the AI models to mitigate these risks.
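One widely referenced benchmark for this kind of ongoing monitoring is the "four-fifths" guideline from the EEOC's Uniform Guidelines on Employee Selection Procedures, under which a group's selection rate below 80% of the highest group's rate can signal adverse impact. The sketch below is illustrative only; real compliance analysis requires proper statistical testing and legal review.

```python
def adverse_impact_ratios(selection_counts):
    """Compute each group's selection rate relative to the highest-selecting
    group. Ratios below 0.8 are commonly treated as a red flag under the
    EEOC's "four-fifths" guideline (a screening heuristic, not a verdict).

    selection_counts: {group: (number_selected, number_of_applicants)}
    """
    rates = {g: selected / applicants
             for g, (selected, applicants) in selection_counts.items()}
    best_rate = max(rates.values())
    return {g: round(rate / best_rate, 2) for g, rate in rates.items()}

# Hypothetical monitoring snapshot: Group B's ratio (~0.67) falls
# below the 0.8 benchmark and would warrant closer investigation.
counts = {"Group A": (45, 100), "Group B": (30, 100)}
print(adverse_impact_ratios(counts))
```

Running a check like this on a regular cadence, rather than once at deployment, is what keeps the process off autopilot: drift in the applicant pool or the model can push a ratio below the benchmark long after launch.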
Fairly Assessing People with Disabilities
It can be challenging for HR professionals to ensure their organization fairly assesses people with disabilities, but several key considerations can help prevent these individuals from being unfairly disadvantaged.
First, make sure the assessment process is accessible to individuals with disabilities. This may mean providing accommodations such as assistive technology, modified assessments, or extended time so the assessment accurately reflects an individual’s abilities. HR professionals may need to consult with disability experts or accessibility consultants to ensure the accommodations are appropriate and effective.
Second, make sure the assessment process is flexible enough to accommodate the needs of those with disabilities. This means considering factors such as the type of disability, the severity of the disability, and an individual’s needs and preferences when designing the assessment process. Are there options for those with visual, auditory, or other impairments?
Third, train the individuals conducting the assessments and, ideally, those designing the system on disability awareness and inclusion. This means providing training on disability rights and accommodations, as well as training to ensure individuals with disabilities are treated with respect and dignity throughout the process.
In summary, HR professionals play a critical role in ensuring AI tools don’t simply build on the biases of human beings. By clearly establishing and adhering to inclusivity objectives, continuously monitoring for potential bias, and ensuring the assessment process is equitable for those with disabilities, HR professionals can position their companies to reap the benefits of AI technologies while avoiding their pitfalls.
John Hackston is a chartered psychologist and head of thought leadership at The Myers-Briggs Company, where he leads the company’s Oxford-based research team. He’s a frequent commentator on the effects of personality type on work and life and has authored numerous studies, published papers in peer-reviewed journals, presented at conferences for organizations such as The British Association for Psychological Type, and written on various type-related subjects in top outlets such as Harvard Business Review.