EntertainHR

CHERYL and Employer Liability for Cybernetic Harassment

Ladies and gentlemen, you are experiencing history in the making. For the first time in “EntertainHR” history, the same author is doing a post on the same episode of the same show. This time, however, the issue is slightly different.


You may recall my very first “EntertainHR” post, published July 3, 2018, in which I discussed sexual harassment investigation issues in Season 4, Episode 1 (entitled “Kimmy Is … Little Girl, Big City!”) of “Unbreakable Kimmy Schmidt.” While watching that episode almost two years ago, I was amused by CHERYL, the Cybernetic Human Empathy Response Yuko/Lamp.

CHERYL, as you may not have gleaned from the previous sentence, is a robot/artificial intelligence (AI). CHERYL was introduced to viewers in a conversation with Kimmy in the break room, where Kimmy remarked to CHERYL that it was so nice to “have another gal in the office.” Except for Kimmy (and CHERYL?), the office is all male.

Kimmy then expressed frustration with the not-so-fun parts of her job in HR, such as terminating an employee. CHERYL responded, “Do you want my advice? Chardonnay.” OK, that response was unexpected, but we’ll let that one slide. Titus then stole Kimmy’s attention while CHERYL listened on.

CHERYL reappeared in Episode 2, and she got even more risqué. She greeted Kimmy with “It’s Friday. Who are/what are we getting into tonight?” Kimmy proceeded to tell CHERYL that she didn’t have plans, and CHERYL recommended some things I can repeat here and some things I cannot; Kimmy eventually wondered (to herself) whether CHERYL has a drinking problem.

Can AI Harass Humans?

I think we can all agree that CHERYL’s behavior was objectionable and that, if it were perpetrated by a human, it could lead to another sexual harassment complaint for Kimmy to investigate. But can “harassing” actions by AI subject an employer to liability?

First, is there such a thing as a Cybernetic Human Empathy Response Yuko/Lamp? Well, not exactly, but social robots (AI designed to interact with humans and other robots), such as hitchBOT, Kismet, Tico, Bandit, and Jibo, do exist. Furthermore, some of these robots assume functions traditionally performed by humans. They can even be equipped with their own personalities.

Assessing liability for personal injury caused by robots has centered on treating them as manufactured products. Furthermore, in multiple areas of the law, particularly the law of agency, liability for the actions of an autonomous being is imputed to its master: from an employee to the employer, for example, or from a domesticated animal to its owner. But as robots become increasingly autonomous (beware Skynet!), questions will arise as to whether they have rights or can be sued themselves.

Do Robots Have Rights?

In 2017, the European Union commissioned a draft report on “robot personhood.” Among its suggestions were creating a robot registry recording the identities of robots’ owners and controllers and creating a code of ethics for robot manufacturers that reflects the European Union’s Charter of Fundamental Rights, including its antidiscrimination protections. The report’s suggestions appear to be aimed at making robot manufacturers responsible for their creations.

Circling back to CHERYL: because robots are programmed by people and are thus not completely autonomous, there is a natural person to blame. That person (the programmer or manufacturer, assuming Giztoob did not manufacture or program CHERYL itself) could likely be held liable for CHERYL’s harassment.

On another note, AI developers are reportedly building technology that can identify and investigate occurrences of sexual harassment. These “#metoobots” monitor and flag communications among colleagues.

The Guardian reports that “[t]he bot uses an algorithm trained to identify potential bullying, including sexual harassment, in company documents, emails and chat. Data is analyzed for various indicators that determine how likely it is to be a problem, with anything the AI reads as being potentially problematic then sent to a lawyer or HR manager to investigate.”
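To make that pattern concrete, here is a minimal Python sketch of the triage flow the quote describes: score each communication against a set of risk indicators and escalate anything above a threshold to a lawyer or HR manager. The indicator list, weights, and threshold below are invented for illustration; the real products reportedly rely on trained machine-learning models, not a hand-built word list.

```python
# Purely illustrative sketch of a "flag and escalate" triage pipeline.
# RISK_TERMS and REVIEW_THRESHOLD are hypothetical; a real product would
# use a trained classifier rather than naive substring matching.
RISK_TERMS = {"sweetheart": 2, "date me": 3, "you look hot": 3}
REVIEW_THRESHOLD = 3  # hypothetical escalation cutoff

def risk_score(message: str) -> int:
    """Sum the weights of any risk indicators found in the message."""
    text = message.lower()
    return sum(weight for term, weight in RISK_TERMS.items() if term in text)

def triage(messages: list[str]) -> list[str]:
    """Return the messages that should go to a human reviewer."""
    return [m for m in messages if risk_score(m) >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    sample = [
        "The Q3 report is attached.",
        "Hey sweetheart, you should date me.",
    ]
    for flagged in triage(sample):
        print("Escalate for human review:", flagged)
```

Note that even in this toy version, the bot only flags; the judgment call still lands on a person, which matches the workflow the article describes.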

Furthermore, a publicly available, free chatbot called “Spot” records typed responses to a series of questions. The responses can then be compiled into a report of the alleged harassment. A cofounder of Spot, who is a memory expert, opined that the bot is better than a human at capturing the details of a harassment allegation because “it doesn’t come with preconceived notions and can automatically start from a neutral point.”
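For the curious, here is a simplified Python sketch of that intake flow: ask a fixed series of neutral questions, record the typed answers, and compile them into a timestamped report. The questions and report format are my own stand-ins; Spot’s actual interview is adaptive and more sophisticated than this fixed script.

```python
from datetime import datetime, timezone

# Hypothetical interview prompts, not Spot's real script.
QUESTIONS = [
    "What happened?",
    "When and where did it take place?",
    "Was anyone else present?",
]

def run_intake() -> str:
    """Ask each question, record the typed answer, and compile a report."""
    answers = [(q, input(q + " ")) for q in QUESTIONS]
    header = f"Report recorded {datetime.now(timezone.utc).isoformat()}"
    body = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in answers)
    return header + "\n\n" + body

if __name__ == "__main__":
    print(run_intake())
```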

Regardless, AI in the workplace is here to stay. Although employer liability for harassment by the likes of CHERYL is probably not a pressing concern at the moment, if an employee does report such behavior, it is not advisable to ignore the complaint.
