HR Technology

Managing artificial intelligence in the workplace

The last several years have seen artificial intelligence (AI) become mainstream in the workplace. Today, HR professionals widely use AI tools for recruiting, onboarding, and administering leave and benefits. Managers use generative AI to assist with their administrative and supervisory responsibilities, such as writing performance reviews. Engineers use AI to write or check code. And business leaders use AI to model future sales trends and plan marketing campaigns. But the widespread use of AI in the workplace comes with legal risks.

Legal Risks of Using AI in the Workplace

Three risks of using AI in the workplace came into clearer focus in 2023.

First, employees who use generative AI tools such as ChatGPT risk violating privacy laws or compromising the company’s competitive advantage by disclosing proprietary or otherwise protected information. For example, last April, employees at Samsung reportedly uploaded sensitive internal source code onto ChatGPT, potentially making that data available to competitors.

Data submitted to generative AI tools is difficult, if not impossible, to recover or remove, and once uploaded, it may be accessible to other users of the same AI tool. Confidential or proprietary data shared with a publicly accessible AI tool may lose its protected status because the company failed to properly safeguard the information. Additionally, disclosing protected information, such as personally identifiable information (PII) or protected health information (PHI), could violate statutory privacy laws and trigger reporting and disclosure obligations.

Second, although generative AI tools have improved dramatically in the past couple of years, they’re still prone to delivering inaccurate or unreliable information (i.e., “hallucinations”) in certain contexts. In the legal realm, for instance, there have been reports of attorneys filing AI-drafted legal briefs, only to discover later that the cases cited by the tool don’t exist. In response, many courts are prohibiting attorneys from using AI to draft court filings or are requiring them to disclose their use of AI and certify that the resulting content is accurate. Inaccuracies in AI output can potentially create legal liability for any business that doesn’t sufficiently vet the content.

Third, AI tools that incorporate "machine learning" (a form of AI that uses data and algorithms to imitate the way humans learn; think of the recommendation engines behind streaming services such as Hulu) run the risk of developing learned bias, which could lead to discrimination claims. In 2017, Amazon reportedly shut down an AI recruiting tool it had built to screen resumes after the tool taught itself that male applicants were preferable and downgraded resumes that included the word "women's" or the names of all-women's colleges. The Equal Employment Opportunity Commission (EEOC) is focusing on the discrimination risks AI tools present, and several state legislatures have passed laws regulating the use of AI in hiring.
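To make "learned bias" concrete, here is a minimal, hypothetical sketch (not Amazon's actual system; the data, feature names, and numbers are all invented for illustration) of how a screening model trained on skewed historical hiring data can learn to penalize a proxy feature without ever being shown a protected characteristic:

```python
# Hypothetical illustration of "learned bias." A toy screening model is
# trained on skewed historical hiring outcomes; it never sees gender, only
# a correlated proxy feature, yet it learns to penalize that proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented applicant data: a job-relevant feature (years of experience) and
# a proxy feature correlated with gender (e.g., a women's-organization line
# on a resume, present for some women and no men in this toy population).
experience = rng.normal(5, 2, n)
is_woman = rng.integers(0, 2, n)
womens_org = (is_woman & (rng.random(n) < 0.6)).astype(float)

# Biased historical labels: past decisions rewarded experience but also
# systematically favored men, so the training data encodes that preference.
hired = (experience + 1.5 * (1 - is_woman) + rng.normal(0, 1, n)) > 5.5

# The model is trained only on experience and the proxy feature...
X = np.column_stack([experience, womens_org])
model = LogisticRegression().fit(X, hired)

# ...yet it assigns a negative weight to the proxy, reproducing the bias.
print("coefficients [experience, womens_org]:", model.coef_[0])
```

The point of the sketch is that simply withholding a protected characteristic from a model doesn't prevent discrimination, because correlated features can carry the same signal. That's one reason regulators focus on auditing a tool's outcomes rather than its inputs.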

Legal Response to the Use of AI in the Workplace

2023 saw a wave of legislation aimed at regulating the use of AI in hiring and recruiting. New York City began enforcing Local Law 144, a comprehensive law requiring employers to notify applicants when AI is used in hiring decisions and to perform annual "bias audits" to identify any biases in an AI tool's design or operation that could lead to discrimination. California, Washington, D.C., Massachusetts, New Jersey, and Pennsylvania have all proposed legislation with similar components.
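At its core, a bias audit compares selection rates across demographic categories. The sketch below is a simplified, hypothetical illustration (invented data, with the EEOC's "four-fifths" rule of thumb as a flagging threshold) rather than the methodology prescribed by any particular statute:

```python
# Hypothetical sketch of the core bias-audit calculation: compute each
# group's selection rate, divide by the highest group's rate to get an
# "impact ratio," and flag ratios below the EEOC's four-fifths rule of
# thumb. The data and the 0.8 threshold are illustrative assumptions.
import pandas as pd

# Invented outcomes from an automated screening tool.
df = pd.DataFrame({
    "category": ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1, 1, 1, 0, 1, 0, 0, 0, 1],
})

rates = df.groupby("category")["selected"].mean()
impact_ratio = rates / rates.max()  # each group's rate vs. the highest rate

report = pd.DataFrame({
    "selection_rate": rates,
    "impact_ratio": impact_ratio,
    "flagged": impact_ratio < 0.8,  # four-fifths rule of thumb
})
print(report)
```

In this toy example, group B's selection rate is 40 percent against group A's 75 percent, an impact ratio of roughly 0.53, which would be flagged for closer review.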

The EEOC announced a new focus on AI in the workplace, issuing guidance about the risk that AI used in employment selection procedures may have an adverse impact on members of protected classes. In September 2023, the EEOC settled a discrimination lawsuit against iTutorGroup, Inc., which allegedly programmed its recruiting software to automatically reject female applicants age 55 and older and male applicants age 60 and older. iTutorGroup agreed to pay $365,000 to resolve the lawsuit.

Additionally, employees' attorneys have begun testing the waters of AI-based discrimination claims. In Mobley v. Workday, Inc., filed last year, Derek Mobley (who is African American, over 40, and disabled) applied for nearly 100 jobs at various companies that all used Workday software to screen applicants. He was denied every job and claims the software unlawfully screened him out because of his protected characteristics. Employers that use automated employment decision tools such as Workday's could also become targets of similar litigation.

Strategies for Managing Risk

As the use of AI in the workplace becomes more commonplace, it’s critical to have a strategy for managing risk. First, you should consider implementing a comprehensive AI policy that identifies permissible (and prohibited) uses of AI and any disclosure requirements. The policy may also prohibit the use or disclosure of protected information, require employees to report any misuse of AI in the workplace, and include protocols for ensuring AI output is vetted for accuracy.

Second, you should research your AI tools thoroughly to understand how they protect against learned bias and the disclosure of protected information. It may also be advisable to read the terms of service carefully to understand your rights and obligations if a claim is filed (e.g., does the AI vendor expect to be indemnified by the employer?).

Third, you should monitor proposed legislation and court decisions to ensure you are adjusting your practices to comply with any new requirements. Finally, you may wish to consult legal counsel about your specific uses of AI and how to best minimize legal risks.

Benjamin J. Naylor is an attorney with Snell & Wilmer LLP in Phoenix. You can reach him at bnaylor@swlaw.com.
