AI in the Workplace: Crafting Policies for Employees’ Use of Generative AI

Artificial intelligence (AI) is becoming increasingly prevalent in workplaces, providing new opportunities and new challenges for employers and employees. While AI has the potential to improve efficiency and productivity, its use also raises important questions around issues like privacy, discrimination, and job displacement. Employers that choose to implement AI should consider including a provision in their employee handbook, or a separate policy, specifically addressing its use. Such a provision or policy can help mitigate risks, provide clarity for employees, and demonstrate a commitment to using AI ethically and responsibly.

Develop AI Policies

If you choose to incorporate AI into your workforce, you should develop AI policies, update them regularly as laws and technology change, and enforce them consistently. Consider including the following provisions in your AI use policies.

Specify which employees may use AI, and require prior approval. You may be willing to let some teams or groups use generative AI technology but not others, especially while you’re still examining how AI can be incorporated into your company or industry. Your AI policy should specify which departments, if any, are permitted to use AI.

Determine which tasks may be performed using AI. Similarly, you should define which tasks can be performed using AI. For example, you may approve your HR team’s use of AI for screening initial applicants—which presents its own host of issues, including bias—but prohibit them from using AI to develop employment contracts or craft termination letters.

Make employees responsible for outcomes. To ensure accountability, every AI use policy should make clear that employees are ultimately responsible for any final product or output created or inspired by AI. This means they should fact-check AI output, including confirming that bias hasn't been introduced.

Prohibit submission of trade secrets and other confidential information. One of the biggest risks associated with generative AI is the possible loss of patent or trade secret rights—or breach of nondisclosure agreements with other entities—through the submission of sensitive or confidential information. For example, under U.S. patent law, the public disclosure of inventive information may invalidate potential patent rights.

Submitting sensitive information to generative AI, without the proper protective measures, may also be considered a public disclosure that waives protections for trade secrets or other confidential information. Also, information submitted to an AI tool may be used in unintended ways, such as to train the AI model. For these reasons, you should clearly define “confidential information” and/or “trade secrets” and prohibit their submission to AI tools.

Consider requiring use logs and other reporting. You can promote transparency and accountability by encouraging or requiring clear documentation of when and how employees use AI tools. Reporting or logging requirements can be flexible and tailored to each business. Consider when, how, and how often employees should document their AI use, including whether it should include input, output, or both.

Establish oversight. Designate an individual or a department in your business to oversee the use of AI. Employees should direct all inquiries about AI use—and make any necessary reports—to this individual or department, which should also be tasked with updating the company's AI policy and staying abreast of relevant legal and regulatory requirements.

Train employees on permissible AI use, and enforce your policy. Of course, a written policy is only as good as the training provided and the enforcement of that policy. Regular training, especially in this evolving area, will be crucial for employees to understand the limits of permissible AI use while still promoting creativity and efficiency. Consistent, nondiscriminatory policy enforcement will demonstrate your company’s commitment to ethical and transparent AI use.

Takeaways

These suggestions are only some of what you should consider when developing an AI use policy. You should also gather input from relevant stakeholders in the organization and seek legal counsel when designing, implementing, and enforcing a policy.

Dana Dobbins is an associate with Holland & Hart, LLP’s labor and employment practice group. She practices out of the firm’s Denver, Colorado, office and may be reached at dmdobbins@hollandhart.com.
