HR Technology

EU Passes Artificial Intelligence Act

Congress and the Biden administration continue to focus on regulating artificial intelligence (AI). Despite that attention, however, no comprehensive legislation or regulations applying to AI have been implemented at the federal level. In contrast, the European Parliament passed the Artificial Intelligence Act on March 13. The legislation was three years in the making and will have wide-ranging consequences for employers using AI within the European Union and globally. Employers should also pay attention to the new Act because U.S. regulators are likely to see it as a model for future legislation at the state and federal levels.

Highlights of the New Legislation

After clearing some additional administrative hurdles, the AI Act is expected to officially become law in the European Union before the summer, with its compliance obligations going into effect in phases over a 36-month period. U.S. employers with a presence abroad that use artificial intelligence for hiring or workforce management in the EU will be covered.

Under the Act, use of AI will be regulated according to risk level, with AI systems falling into three categories: (1) low risk; (2) high risk; and (3) unacceptable risk, meaning use of the technology is prohibited. Compliance obligations are heightened at each successive risk tier. Significantly for employers, high-risk systems include those “used for the recruitment or selection of natural persons . . . [or] to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics or to monitor and evaluate the performance and behavior of persons in such relationships” (see EU AI Act, Annex III(4)(a)-(b)). Additionally, AI systems that assess the emotions of employees fall into the unacceptable risk category unless they are deployed for health and safety purposes.

The majority of the AI Act’s most onerous requirements fall on the developers of the systems (called “providers” under the Act). Employers are most likely to be characterized as “deployers” under the Act because they will principally be users of AI systems rather than creators of the tools. As deployers, employers have a number of obligations, including:

  • Ensuring the instructions for use of the AI system are followed;
  • Maintaining usage logs;
  • Assigning a dedicated oversight team with adequate knowledge and authority to serve in the role;
  • Providing notice to workers and their representatives that the AI system will be used before it is deployed;
  • Informing developers if they determine the AI system presents a risk to the health, safety, or fundamental rights of individuals; and
  • Carrying out data protection impact assessments in accordance with the EU’s General Data Protection Regulation (GDPR) in certain circumstances.

Takeaway

Complying with the AI Act’s obligations will be paramount for covered employers because penalties under the law range from 1% to 7% of global revenue, or from €7.5 million to €35 million. The likely result of the new law is that employers with a European workforce will comply with the Act’s requirements globally for ease of operations (the so-called “Brussels Effect”). Moreover, even U.S. employers without a European workforce will need to monitor the progress of this law. Its notice obligations, recordkeeping requirements, and impact assessment requirements are likely to feature in eventual U.S. laws and regulations in this area.

Savanna L. Shuntich is an attorney with FortneyScott in Washington, D.C. You can reach her at sshuntich@fortneyscott.com.
