We recently covered how artificial intelligence (AI) invites bias and what one developer is doing to stop it. The ethical issues AI raises, however, go far beyond bias-infused systems. A recent survey sought to understand the extent to which organizations that use AI systems take ethics of any kind into consideration. The results were less than promising.
The survey was conducted by GLG and gathered responses from C-suite executives. Not only did 60% of executives report that they lack an aligned approach to how AI is used at work, but only one-quarter said they have taken steps to prevent AI-related bias. Another third said they have taken no steps at all to implement ethical practices surrounding AI.
Ethics and Bias
Some of the ethical concerns surrounding AI include supporting or extending racial, gender, and even age-based biases; the loss of human jobs; and threats to data privacy. Such issues have made headlines for years now. Meanwhile, the survey showed that 90% of executives in health care, consulting, and financial services believe AI will be transformative at their organizations in the next 5 years, and virtually all respondents (98%) indicated they will continue or expand their AI development. Despite that, organizations have failed to take sufficient steps to address these concerns. The survey found:
- Only 26% said they put measures into place to mitigate potential AI bias;
- Only 25% said they disclose the data AI collects and what is done with those data;
- Only 16% said their organizations have a dedicated committee to oversee AI use; and
- Only 13% said they identify intelligent agents or chatbots as nonhuman entities.
A Matter of Perspective
Different individuals have different ideas about how AI impacts the workforce. Perhaps doomsayers have been doomsaying about AI for too long and the shock has worn off. Or perhaps it’s the fact that AI is often invisible, or affects various industries and organizations in vastly different ways. Whatever the case, the researchers captured a few different perspectives:
- CHRO, financial services: “Among fintech competitors, [AI] is essential to stay in the game in the future, as customers are demanding it and business processes can’t keep up without it.”
- CMO, consulting: “Employees are (rightly) worried about being replaced by AI solutions. They should be worried, frankly, because if I can replace a $125K resource with a piece of software, I will. And so will everyone else.”
- CHRO, consulting: “A frequent complaint is that we took the ‘human’ out of Human Resources; however, once employees/managers experience the benefits of AI, we have noticed a shift in perceptions.”
- CFO, health care: “AI in my world tends to lead to outcomes that just don’t make sense. And then they get defended to the death. I just want a spam filter that works. Even that is a stretch. How can we automate a decision on a $20 million loan?”
While this research reflects the views of only 167 C-suite respondents, it does provide an important snapshot of how organizations are approaching the very serious ethical and bias issues that AI systems have created.