Artificial intelligence (AI) has captured our imaginations for decades. Blockbuster hits like The Terminator and The Matrix series have driven home fears of computers becoming sentient and seeking to eradicate or enslave the human race.
Beyond Science Fiction: AI Impacts Become Reality
More recently, however, concerns over AI have become more mundane and practical, centering largely on AI's impact on human labor. For years, observers have wondered whether, and to what extent, AI will render human workers obsolete in various industries.
But AI has the potential not only to replace human labor but also to complement it, resulting in significant productivity gains. AI tools, including generative AI chatbots, can answer a wide range of employee questions, create high-quality written and visual content, and even engage with customers.
This has many employees asking themselves: “If my employer can benefit from replacing me with AI, why shouldn’t I benefit from using AI to do my own work?” In other words, what’s wrong with employees using AI to perform their work for them?
In many, if not most, cases, there shouldn’t be any ethical concern with such an approach. However, it’s important to put certain ground rules in place to protect business interests and ensure appropriate use of these rapidly emerging tools.
Transparency
One of the most prominent concerns over the use of AI is transparency around the authenticity and source of AI-generated content. Was this content created by a human employee over several hours, or by an AI chatbot in 15 seconds? And how should the answer affect how, and how much, that work is paid for?
Similarly, businesses also need to be transparent with their employees when it comes to how the use of AI may impact their work and, ultimately, their careers.
“HR should strive for transparency and create a culture of openness in how AI algorithms and decision-making processes work,” advises Young Pham, the Cofounder and Senior HR Manager at BizReport. “Employees should be informed as early as possible about the decision to use AI in HR and given a clear explanation of how it affects their current employment and future decisions about their work.”
Protection of IP and Confidentiality
Most AI chatbots work from user-written prompts. A prompt might be a one-sentence question, or it might be a lengthy report the user asks the chatbot to summarize.
But what happens to the information entered into the prompt?
It’s important that users understand the implications of entering customer data, company financial information, or other sensitive material into a third-party AI chatbot.
Certain laws, regulations, or contractual obligations with customers may restrict how that information is used and whether and how it can be shared with third parties, including AI chatbot providers.
Ownership
Another key issue in the use of AI chatbots at work has to do with ownership. We’re not talking about legal ownership of intellectual property created by or entered into an AI tool; we’re talking about taking responsibility for the product of AI-assisted work.
This means that even if AI is a key contributor to a given work product (or even the sole creator), the employee or team using that work product is still responsible for it.
A great example to illustrate this point comes from the legal context. A New York attorney recently found himself in hot water for submitting court filings that cited bogus legal authorities. The infraction apparently stemmed from the lawyer’s use of an AI chatbot to draft the filings.
When human employees use AI to help create their work products, those employees are still responsible for ensuring the end result is accurate and doesn’t plagiarize the work of others. The lawyer in the example above could easily have looked up the case references his AI tool generated and discovered that they didn’t exist. Likewise, a number of free plagiarism checkers are available online.
Those who choose to leverage AI in their work have no excuse for bad output.
The Role of HR
AI and AI chatbots are obviously very tech-focused tools, but it’s not just the IT team that needs to be well-versed in their operation and use. The HR function also has a critical role when it comes to the use of AI by employees and by the employer.
“When new AI technology is introduced, it is important for HR to communicate guidelines around its use in a timely, clear, and consistent manner—managing the people side of the implementation,” says Amani Gharib, PhD, Director of HR Research and Advisory Services at McLean & Company.
“HR can also support through feedback and output review, as well as ensure that policies are relevant and in compliance with local and global laws and regulations,” Gharib adds. “By conducting risk assessments and creating policies around AI technology, HR will be able to manage ethical risks and considerations accordingly. It is also just as important for organizations to build resilience and manage the disruption of AI through effective change management and transformational leadership.”
Early concerns over AI in the workplace focused on AI taking jobs from human workers. Although AI can sometimes replace a human worker entirely, in practice there are likely more situations in which AI, in its various forms, will augment the work human employees do. While this collaboration can create tremendous productivity gains, it also raises a variety of ethical concerns for both employers and employees.
It’s therefore essential for the HR team to have a central role in managing the rules and processes around the use of AI in the workplace and for HR leaders to stay on top of rapidly changing trends related to the use of generative AI and other tools.
Lin Grensing-Pophal is a Contributing Editor at HR Daily Advisor.