Artificial intelligence (AI) is a buzzword on everyone’s lips, and to quote a hit song from the 1980s, apparently, “The future’s so bright, I gotta wear shades.” In other words, many are quick to sing the praises of AI and the opportunities that lie ahead. However, as AI gains more headlines and becomes more ubiquitous, some, including employers, are becoming more aware of the issues and risks attached to this new technology.
Promise and Peril
Last fall, the White House issued its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which recognizes the “extraordinary potential for both promise and peril” in AI technology and cites several critical expectations for AI governance:
- It must be safe and secure, not just in terms of generalized cybersecurity risk but also in ensuring that the system itself is “ethically developed and operated in a secure manner” and is “compliant with applicable federal laws and policies.”
- It must promote innovation, competition, and collaboration.
- It must be used to support “American workers,” with governing agencies charged to “adapt job training and education to support a diverse workforce and help provide access to opportunities that AI creates.”
- All AI policies should be consistent with “advancing equity and civil rights,” which would include “robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation.”
Of additional concern is the ongoing protection of privacy and civil liberties, with the Department of Labor (DOL) subsequently issuing general principles that include the following:
- Workers and their representatives must have input into the design, development, and other components of AI use.
- All development should be handled in a way that protects workers.
- There must be clear government systems and procedures that include human oversight.
- Employers must provide information to workers and jobseekers when AI is used in the workplace, must promote workers, and must implement systems that “assist, complement, and enable workers and [do] not violate workers’ rights.” This includes training and/or “upskilling workers” for positions related to AI.
- Data collected, used, or created by AI systems must be handled appropriately and be used only to support legitimate business purposes.
This spring, the Equal Employment Opportunity Commission (EEOC) reiterated its commitment to having a role in AI development, as well as the enforcement of regulations relating to AI. Through the Algorithmic Fairness Initiative, a multiagency effort addressing AI issues, various fact sheets and resources are available, including at EEOC.gov.
Bottom Line
What does this mean for employers from a practical standpoint, especially in an environment where issues change daily, agency initiatives are consistently challenged, and those initiatives set goals without a framework for achieving them? Like a Magic 8 Ball reading, the outlook is hazy, but we can expect continued activity from various government agencies:
- The EEOC will continue to focus on AI use in interviews and hiring decisions, a long-standing priority that has examined age, sex, and other forms of bias.
- The Federal Trade Commission (FTC) has ramped up online enforcement, including under its healthcare data rules.
- The National Labor Relations Board (NLRB) is likely to focus heavily on job loss, as well as employee privacy, especially as it relates to work conditions and organizing.
For now, however, these initiatives remain aspirational goals rather than concrete requirements.
Jo Ellen Whitney is an attorney with Dentons Davis Brown in Des Moines. You can reach her at joellen.whitney@dentons.com.