Are Your Workplace Policies Prepared for Generative AI?

As generative artificial intelligence (GAI) technology, such as ChatGPT, finds new and greater uses in the workplace, employers must consider the myriad legal and other issues that come with it. For good reason, employers are increasingly implementing policies to mitigate potential risks and ensure safe and permissible uses of GAI by employees.

In this article, we highlight some of the key risks associated with introducing GAI into the workplace and outline strategies to assist HR teams and organizations with developing policies to mitigate such risks.

What Is GAI, and How Is It Being Used Today?

GAI generally refers to machine learning models, such as large language models, that can be used to create new content—including audio, code, images, text, simulations, and videos—based on a user’s prompt. GAI models ingest massive data sets of text, information, and images from the internet and other sources, which are used to train the models to gradually “learn” and “understand” the relationships between words or data.

When a user inputs a prompt, a GAI model generates new text, images, or data based on the data set on which it was trained. Some popular GAIs include ChatGPT and Bard (for text generation); DALL-E, Stable Diffusion, and Midjourney (for image generation); and Runway and Synthesia (for video generation).
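
To make the prompt-and-response flow concrete, the minimal sketch below shows how an application might send a prompt to a text-generating model through an API. It assumes the openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative only, not a recommendation of any particular tool.

  # A minimal sketch of calling a text-generating GAI model from code.
  # Assumes the official "openai" Python package (v1.x) and an API key
  # set in the OPENAI_API_KEY environment variable.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-3.5-turbo",  # illustrative model choice
      messages=[
          {"role": "user", "content": "Summarize this memo in three sentences: ..."},
      ],
  )

  # The model returns new text generated from the prompt and the data
  # on which it was trained.
  print(response.choices[0].message.content)

Note that anything typed into the prompt travels to the model provider’s systems, which is why several of the risks below focus on what employees put into these tools.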

The potential applications for this powerful technology are numerous, and employees are already using the tools to generate software code, draft communications (emails, memos, and correspondence), generate ideas and content, outline and summarize lengthy or complex documents, and fact-check existing content. The AI-generated content is then used in a variety of business operations, including marketing, sales, customer support, and back-office functions.

What Are the Risks of Employees Using GAI at Work?

Although GAI tools might offer efficiency and shortcuts in generating content, they carry a number of risks you need to consider. For example:

Risk: Inaccuracy/bias. Text-generating GAI tools produce outputs by predicting the most likely next set of words based on the corpus of data used to train the model. While these tools often provide clear, coherent outputs that may be very reliable, there’s always a risk the outputs are inaccurate or misleading. Indeed, developers of the models have acknowledged that the systems sometimes produce “hallucinations”—inaccurate text that is wholly fabricated. Further, the systems are limited by the data used to train them, which itself may be inaccurate, biased, or simply limited in scope. ChatGPT, for example, discloses that its “knowledge” ends at its 2021 training cutoff, so the model is unaware of later facts and events.
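
To illustrate why fluent output is not the same as accurate output, consider the toy sketch below; the candidate words and probabilities are invented for illustration and are not drawn from any real model.

  # Toy illustration of next-word prediction (not a real model).
  # The candidate words and probabilities here are invented.
  next_word_probs = {
      "2021": 0.60,   # plausible and, here, correct
      "2019": 0.30,   # plausible but wrong
      "purple": 0.0001,
  }

  # A language model emits whichever continuation scores highest,
  # whether or not it is factually accurate: likelihood is not truth.
  prediction = max(next_word_probs, key=next_word_probs.get)
  print(prediction)  # "2021"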

Risk: Ethical/moral hazard. GAI systems are relatively untested, and users may not know what, if any, ethical constraints are placed on the GAI, including potential outputs that reinforce or promulgate biases, stereotypes, and prejudices, or ignore social or moral conventions entirely.

Risk: Privacy. Information included in the prompts for GAI systems may be used by the model’s developer to further train, refine, or improve the model. Indeed, ChatGPT’s terms explicitly state its developer may use the prompts for that very purpose. As a result, including any personal information in prompts may violate privacy laws and policies applicable to that organization.

Risk: Trade secret security/protection of confidential business information. Similarly, if any confidential or proprietary business or enterprise information is entered into prompts, that data may be shared with the model developer and lose its security and confidentiality protections.

Risk: Copyright/contract claims. Commercial uses of a third-party GAI tool may subject users to copyright infringement claims, breach of contract claims, or other claims arising out of violations of the developer’s terms of use, or from the duplication of intellectual property that was used in training data.

Risk: Copyright enforcement/IP ownership. There’s a risk that content created by GAI cannot be copyrighted: the U.S. Copyright Office has taken the position that works generated without significant human authorship are not eligible for copyright protection.

Risk: Consumer protection and regulatory compliance. The Federal Trade Commission (FTC) and other federal agencies have asserted that failing to disclose that consumers are interacting with an automated process (e.g., a bot), rather than a human, may constitute an unfair or deceptive practice. Further, federal, state, and local legal and regulatory frameworks continue to evolve concerning use of this technology, and some of these laws require preservation of data, auditing for potential bias, and transparency or explainability duties.

Risk: Defamation. Content created with a GAI tool may be offensive, defamatory, or otherwise violate workplace policies.

Risk: Specialized duties. Organizations operating in highly regulated industries may face additional compliance risks under industry-specific rules. For example, attorneys considering the use of these tools should consult applicable rules of professional responsibility and avoid overreliance on GAI.

Takeaways for Employers

You should adopt GAI policies that leverage human oversight, training, and monitoring of GAI in the workplace. Instituting new workplace policies to keep up with technology is nothing new—policies on personal electronic devices and social media use are now almost universal. As with those technologies, you first need to decide what your approach to GAI will be and whether, and to what extent, you will allow employees to use such tools for work.

If so, determine what parameters will apply to internal or external tools. This will depend heavily on the organization’s mission, business, and workforce because different industries carry different risks, as do the different potential uses for GAI (in sales, marketing, human resources, etc.).

After determining how the organization may leverage GAI, you should make your guidelines clear to employees in a written policy. As with existing technology usage policies, a GAI policy should define GAI, explain its risks, and set out clear guardrails on permissible or prohibited use.

The policies should include terms to ensure the organization takes the following steps:

  • Identify and inventory all current and potential GAI tool uses in the organization. Refresh the inventory periodically, for example quarterly or semiannually.
  • Assess the risks of the current and planned uses of GAI tools. Some applications may present little risk and thus require little oversight, while others (including some of those listed above) may need to be closely monitored or even prohibited. For example, it’s advisable to prohibit employees from publishing GAI-generated material without human review, or from entering confidential data and trade secrets into a GAI tool that will send the data outside the organization (one way to screen prompts for such data is sketched after this list). Maintain a record of current uses, especially those deemed high-risk.
  • Clearly identify permissible and impermissible applications and use cases. Require employees to confirm with management whether they may use particular tools, and consider maintaining a list of permitted or prohibited tools and uses.
  • Adopt transparency protocols to ensure employees and external recipients of GAI outputs understand what content was created with GAI tools. Consider whether additional protocols or tags should be used for internal purposes to clearly designate high-, medium-, and low-risk outputs.
  • Train managers and employees on the risks of GAI tools and the organization’s internal policy parameters around the use of such tools.
  • Continually monitor emerging applications/uses and compliance with the policy. Doing so is critical because the technology is rapidly evolving and being deployed in a number of novel ways.
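
One way to operationalize the confidentiality guardrail referenced above is a simple pre-submission screen that checks prompts for sensitive markers before they are sent to an external GAI tool. The sketch below is illustrative only: the patterns, the screen_prompt function, and the workflow are assumptions, and a real deployment would rely on the organization’s own data-classification scheme and vetted data-loss-prevention tooling rather than a keyword list.

  import re

  # Illustrative patterns only; a real program would use the
  # organization's own classification labels and vetted tooling.
  BLOCKED_PATTERNS = [
      r"\bconfidential\b",
      r"\btrade secret\b",
      r"\b\d{3}-\d{2}-\d{4}\b",  # U.S. Social Security number format
  ]

  def screen_prompt(prompt: str) -> list[str]:
      """Return any blocked patterns found in the prompt."""
      return [p for p in BLOCKED_PATTERNS
              if re.search(p, prompt, re.IGNORECASE)]

  violations = screen_prompt("Summarize this CONFIDENTIAL pricing memo.")
  if violations:
      print("Prompt blocked; flagged patterns:", violations)
  else:
      print("Prompt cleared for the approved GAI tool.")

A screen like this is a backstop, not a substitute for training; employees still need to understand why certain data cannot leave the organization.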

Further, you should continually assess (and reassess) what laws or regulations might apply to your employees’ use of GAI tools and how your policy can support compliance. New legal and regulatory frameworks are emerging across numerous jurisdictions, so this area merits ongoing attention.

K.C. Halm, Jeffrey S. Bosley, Matt Jedreski, Erik Mass, and Brent Hamilton are attorneys with Davis Wright Tremaine LLP in Washington, D.C.; San Francisco; Seattle; New York City; and Portland, Oregon, respectively.
