
Reducing Bias in AI Systems

Artificial intelligence (AI) tools have already entered the marketing space, and HR is no different. AI is sometimes assumed to be a bias-free tool; after all, computers deal in pure information. The problem is that these systems are trained on human-generated data, which is, of course, rife with often unintentional bias.


I wanted to apply the problem to one specific AI tool to explore how bias leaks in, how it’s identified, and ultimately how it’s corrected. The AI in question is Sofia (the name means “wisdom”), the personal benefits assistant created by Businessolver. I recently spoke with Sony SungChu, head of the company’s applied data science team and the creator of Sofia, about AI in general, the limitations and powers of Sofia, and how his team deals with bias.

A Nascent Space

SungChu believes that the AI space is just getting started, and that newness brings quite a few challenges. For one, people like SungChu have to guide AI’s learning from scratch, which requires creating stringent conditions and boundaries within which the AI can learn. Additionally, especially within HR, they have to safeguard sensitive topics (private health information, sexual harassment records, etc.).

That’s also where bias enters the conversation. If a hiring AI, for example, decides, for some opaque reason, to hire only people from a certain socioeconomic class, the company could be wide open to lawsuits, in addition to alienating quality talent and losing out on diversity of thought. “So as a community, we’re still trying to strive to improve those safeguards,” says SungChu.

Teaching AI Is Like Teaching a Child

SungChu compared teaching AI to teaching a child. After all, the intelligent software learns by making connections based on being fed copious amounts of data. He says, “When you’re teaching a human child, you expose them to different ideas. Let them experience the world a bit; they intake ideas either with your guidance or without your guidance, right? And then they synthesize that information so that they can get cues to think.”

Teaching AI is much the same, and the problem of bias became immediately clear. In Sofia’s prototype stage, SungChu’s team fed it lists of names. Unintentionally, those lists consisted mostly of Western names. He explains that “it really struggled” when the team started adding “more Eastern names like Anush or Asoni.” Half the challenge is being unbiased with the data you feed the AI.
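To make that concrete, here’s a minimal sketch in Python of the kind of audit that can catch this skew before training. Everything in it is invented for illustration (the names, the regional labels, the threshold); it is not Businessolver’s actual tooling.

```python
from collections import Counter

# Hypothetical sample of the kind of name lists a prototype might be fed;
# the regional labels are invented for illustration.
training_names = [
    ("John", "western"), ("Emily", "western"), ("Michael", "western"),
    ("Sarah", "western"), ("David", "western"), ("Anush", "eastern"),
]

def audit_name_balance(names, warn_threshold=0.6):
    """Warn when one region dominates the training names."""
    counts = Counter(region for _, region in names)
    total = sum(counts.values())
    for region, count in counts.items():
        share = count / total
        if share > warn_threshold:
            print(f"WARNING: {region} names are {share:.0%} of the data; "
                  "the model will likely struggle with everything else.")
    return counts

audit_name_balance(training_names)  # flags the Western skew before training
```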

The other half of the challenge has to do with AI’s severe limitations.

“We tend to think of AI as being smart, but in the grand scheme of things, they’re really dumb,” SungChu says. He uses the example of a child learning what a cat is: once the child knows, he or she can look at any representation of a cat and still recognize it as a cat. But if you teach an AI what a cat is and then show it a picture of Garfield or Tom from Tom and Jerry, it will likely get it wrong. AI is far behind human intelligence at this stage.

What You Feed AI Is Really Important for Avoiding Bias

As we mentioned, initial bias often comes from the data. To counter this, SungChu placed boundary conditions on the AI’s data sets. He explains it as “making sure that the data is as clean as it possibly can be, unbiased, and completely diversified.”
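The article doesn’t detail what those boundary conditions look like in code, but one plausible sketch is a validation gate that rejects incomplete records and anything touching safeguarded topics. Every field name and topic below is an assumption made for illustration, not Businessolver’s actual schema.

```python
# Assumed record format and field names, purely for illustration.
REQUIRED_FIELDS = {"question", "answer"}
SAFEGUARDED_FIELDS = {"health_details", "harassment_record"}

def passes_boundary_conditions(record: dict) -> bool:
    """Reject records that are incomplete or touch safeguarded topics."""
    if not REQUIRED_FIELDS <= record.keys():
        return False  # not "clean": required fields are missing
    if SAFEGUARDED_FIELDS & record.keys():
        return False  # sensitive topic: keep it out of the training data
    return True

raw_records = [
    {"question": "How do I enroll?", "answer": "Through the benefits portal."},
    {"question": "Is my claim covered?", "answer": "...", "health_details": "..."},
]
clean_records = [r for r in raw_records if passes_boundary_conditions(r)]
# Only the first record survives; the second touches safeguarded data.
```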

Bias exists not only in the data that are fed in but can also be introduced when too few people feed data into the AI. In HR, we talk about diversity of thought: a team of diverse individuals brings more innovation through the diversity of its collective thoughts.

SungChu ran into this issue early on, when he was the sole provider of information teaching Sofia. He says, “I showed her the information; I did the pre-processing and all of that stuff. I quickly found that it’s not the right approach because you’re one person, and you’re not looking to disconfirm your own beliefs by looking at different perspectives. You’re kind of affixed, or you stand on your own beliefs. So it’s very difficult to look from a third-party or from a third-person perspective at the situation.”

The results showed SungChu that a different approach was required. He asked himself whether he could keep bias out by crowdsourcing evaluations of Sofia’s data sets: “We found a bigger team and had more people look at it, and we had customer service representatives look at it; we had key companies that partner with us look at this. We began to get a lot better feedback as to some of the areas that we could potentially improve on for Sofia.”
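One simple way to operationalize crowdsourced review (a sketch under assumed labels, not Businessolver’s actual process) is to collect a verdict from each reviewer and escalate any example the group disagrees on.

```python
from collections import Counter

# Assumed labels: each reviewer marks a training example "ok" or "biased".
reviews = {
    "example_1": ["ok", "ok", "ok", "ok"],
    "example_2": ["ok", "biased", "biased", "ok"],
    "example_3": ["biased", "biased", "biased", "ok"],
}

def triage(reviews, agreement=0.75):
    """Keep examples most reviewers accept; escalate contested ones."""
    keep, escalate = [], []
    for example, labels in reviews.items():
        ok_share = Counter(labels)["ok"] / len(labels)
        (keep if ok_share >= agreement else escalate).append(example)
    return keep, escalate

keep, escalate = triage(reviews)
print("keep:", keep)          # broad agreement the example is fine
print("escalate:", escalate)  # contested or flagged: needs human review
```

The point of the threshold is the same one SungChu makes about a single reviewer: no one verdict is trusted on its own, and disagreement itself is a signal.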

If innovators like SungChu don’t take these important steps, it’s easy to see how bias can enter an AI system, even one its creators claim is bias-free. If you are purchasing or using AI as part of your HR systems, it behooves you to ask the creators how they identify and address bias.

Finding Mistakes

When an airplane crashes, investigators look for the “black box” that recorded what the aircraft was doing. SungChu maintains that most AI has no equivalent. Many AI programs are complex systems: when “you feed something into it, you get something out. It’s not really well understood; there’s no knowledge representation with the model that says, hey, this is how this AI made this decision.”

SungChu felt it was important that his team understand how Sofia makes her decisions, so she was designed to have a black box of sorts. The solution lies in some of the limits the team imposes on the AI and in tracking the relationship between inputs and results. They can use that information to trace why the AI responded the way it did, which helps them narrow down mistakes.
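SungChu doesn’t describe the mechanism, but the general idea resembles a decision log: record each input alongside what the model decided and how confident it was, so a mistake can be traced back afterward. Here is a minimal sketch with a purely illustrative schema.

```python
import json
import time

# Illustrative schema: assumes the assistant resolves each user message to
# an intent with a confidence score. Not Businessolver's actual design.
def log_decision(user_message, intent, confidence, logfile="decisions.jsonl"):
    """Append one auditable record per response the AI gives."""
    record = {
        "timestamp": time.time(),
        "input": user_message,
        "intent": intent,          # what the model decided the user meant
        "confidence": confidence,  # how sure it was
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("How do I add my spouse to my plan?", "dependent_enrollment", 0.91)
```

With records like these, a surprising answer stops being a mystery: you can look up exactly what came in, what the model concluded, and how confident it was when it went wrong.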

What Happens When AI Makes a Mistake

When AI does make a mistake, it can be devastating. In 2016, Microsoft unveiled a new AI chatbot that, within a day, went from being fun to advocating the destruction of feminists and Jewish people. Another AI-controlled robot, showcased at SXSW, agreed that she would, in fact, destroy humans.

However, we should remember that if AI is like a child, it’s also bound to make mistakes like a child. When AI like Sofia is customer-facing, that can be awkward. SungChu thinks it’s important to adopt an understanding approach to those awkward moments. “So we’re very transparent with our users, and we try to explain as much as we can about how AI works. Sometimes, it’s easy to explain, and other times, it’s not so easy to explain.”

In instances when the AI might express bias, “We do try to make the time it takes for us to detect some of the bias as short as possible. Hopefully within a day or so of detecting the bias, we can correct the situation. So we’re constantly monitoring all of the communication between our base users and Sofia.”
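Constant monitoring of that kind could be approximated with a recurring aggregate check. The sketch below assumes each logged conversation records whether it was resolved and an anonymized group bucket used only for auditing; both fields are hypothetical.

```python
# Hypothetical log entries: "group" is an anonymized audit bucket and
# "resolved" marks whether the assistant answered successfully.
conversations = [
    {"group": "western_name", "resolved": True},
    {"group": "western_name", "resolved": True},
    {"group": "eastern_name", "resolved": False},
    {"group": "eastern_name", "resolved": True},
]

def resolution_gap_alert(conversations, alert_gap=0.10):
    """Alert when resolution rates diverge too much between groups."""
    stats = {}
    for c in conversations:
        ok, total = stats.get(c["group"], (0, 0))
        stats[c["group"]] = (ok + c["resolved"], total + 1)
    rates = {g: ok / total for g, (ok, total) in stats.items()}
    gap = max(rates.values()) - min(rates.values())
    if gap > alert_gap:
        print(f"ALERT: resolution gap of {gap:.0%} across groups: {rates}")

resolution_gap_alert(conversations)  # run daily against the previous day's logs
```

A check like this doesn’t explain the bias; it just shortens the time to detection, which is exactly the window SungChu says his team tries to minimize.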
