New Proposed EU AI Regulation Extends Beyond Europe

April 26, 2021

Reading Time: 5 min

On April 21, 2021, the European Commission (Commission) published its draft Regulation on Artificial Intelligence (AI). It follows the strategy outlined in the Commission’s February 2020 White Paper on AI. The draft Regulation is of key importance to users and providers of AI, not only because it is the first attempt at comprehensive regulation of AI, but also because it may well become the global standard. It is sector-agnostic, has extraterritorial reach, carries steep non-compliance penalties and applies to multiple stakeholders across the AI value chain. The impact on business will take time to be felt, but there is already concern as to the regulatory burden. We set out below the headline points that businesses should be aware of, and we will expand on these in a forthcoming webinar.

Risk-Based Approach

The draft Regulation identifies four levels of risk, and it will be for businesses to identify what AI systems they use or provide, and which risk group each falls into.

Applications at the highest “unacceptable risk” level are banned. Prohibited applications include those that may materially distort the behaviour of a person in a manner that causes (or is likely to cause) physical or psychological harm, and AI systems used by public authorities to evaluate and classify the trustworthiness of natural persons, resulting in the detrimental or unfavourable treatment of those persons.

“High risk” applications are not prohibited but are subject to a raft of new restrictions. Such applications include those relating to certain machinery, medical devices, civil aviation, vehicles and railways; biometric identification; access to essential private and public services (including issues such as evaluation of the creditworthiness of natural persons); recruitment and employee management; and management of critical infrastructure.

There is also regulation of “low/limited” risk applications; only those posing “minimal” risk escape mandatory new obligations (although even for those, the draft Regulation provides the option to adhere to voluntary codes of conduct).

Broad Scope of Activity Caught

Compliance will be made more difficult by the breadth of the proposed definition of “AI systems”, which aims to be technologically neutral and future-proof. For example, it includes software developed with AI techniques and approaches such as supervised, unsupervised and reinforcement machine learning, as well as statistical approaches and search and optimisation methods.

Wide Range of Participants Caught

The draft Regulation imposes different requirements on a variety of participants in the AI value chain, and there may be scope for some uncertainty and fluidity as to which category particular businesses fall into.

The bulk of the proposed obligations are imposed on “providers” and “users” of AI, but these terms are defined broadly. “Providers” include those who develop an AI system, or have an AI system developed and placed on the market or put into service. In addition, if any user, distributor, importer or other third party modifies the intended purpose of a high-risk AI system, they may be considered a “provider” and become subject to the obligations imposed on providers. “User” is defined as any natural or legal person, public authority, agency or other body using an AI system under its authority.

In addition, other obligations are imposed on product manufacturers, importers, distributors and authorised representatives.

In recognition of the compliance burden this will impose on businesses, some concessions are made for Small and Medium-Sized Enterprises and start-ups, including by setting up AI regulatory “sandboxes” (i.e., controlled environments to test innovative technologies for a limited time).

Extraterritorial Scope

There is extraterritorial scope. The draft Regulation applies to users of AI in the EU, and to any provider placing AI on the market or putting AI into service in the EU, regardless of where that provider is established. Further, it applies to any providers and users of AI that are located outside the EU, “where the output produced by the AI system is used in the EU”.

AI systems developed or used exclusively for military purposes are excluded from scope.

Requirements for “High-Risk” AI Systems

The draft Regulation will impose a raft of mandatory requirements for high-risk AI systems and related obligations on providers and users as well as other key participants in the AI value chain.

These obligations include: (i) establishing a risk management system for the lifecycle of high-risk AI; (ii) meeting the specified quality criteria in the training and testing data; (iii) creating technical documentation before the AI system is placed on the market or put into service; (iv) enabling AI systems to automatically record events (logs) which must conform to certain standards; (v) ensuring transparency in the design and development of the systems so that users can interpret their output; (vi) enabling AI systems to be overseen by natural persons; and (vii) achieving accuracy, robustness and cybersecurity.

Further, high-risk AI systems will have to undergo a conformity assessment, i.e., a process of verifying whether the new requirements for such systems have been fulfilled. The type of conformity assessment varies depending on the AI system: it may be by way of internal control; or carried out by existing bodies for those systems that are currently regulated under other EU laws (e.g., machinery, lifts, toys and medical devices); or by newly designated bodies, in the case of biometric identification for example.

Certain high-risk AI systems will need to be registered (and specified information provided) in a newly created public EU database.

Requirements for “Low Risk” AI Systems

Certain AI systems that pose “low risk” will also be required to meet new obligations where there is a clear risk of manipulation, for example through the use of chatbots or “deep fakes”. Providers and users of such AI will be required to meet certain transparency standards to ensure that natural persons are aware that they are interacting with an AI system.

Enforcement

It is envisaged that each EU Member State will designate one or more national competent authorities, and among them one national supervisory authority, to supervise the application and implementation of the new rules. At EU level, a newly established European Artificial Intelligence Board will work with the Commission to ensure that the AI Regulation is effectively implemented and applied, as well as assisting with new and emerging issues. The proposed fines for non-compliance are significant: breach of certain articles may result in fines of up to 6% of annual global turnover (or 30,000,000 euros (approx. 36,200,000 US dollars), whichever is higher), with other breaches carrying a penalty of up to 4% of annual global turnover (or 20,000,000 euros (approx. 24,140,000 US dollars), whichever is higher).
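By way of illustration only, the interaction of the fixed and percentage-based caps is straightforward: the applicable maximum is whichever figure is higher. The short Python sketch below works through a purely hypothetical example (a company with 2 billion euros of annual global turnover); neither the function nor the turnover figure comes from the draft Regulation itself.

# Illustrative sketch only: maximum fine exposure under the two penalty tiers
# described above. The turnover figure used here is hypothetical.

def max_fine_eur(annual_global_turnover_eur: float, severe_breach: bool) -> float:
    # The cap is the higher of the fixed amount and the turnover percentage.
    if severe_breach:  # breach of certain articles: 6% or 30,000,000 euros
        return max(30_000_000, 0.06 * annual_global_turnover_eur)
    return max(20_000_000, 0.04 * annual_global_turnover_eur)  # other breaches: 4% or 20,000,000 euros

turnover = 2_000_000_000  # hypothetical 2 billion euros of annual global turnover
print(max_fine_eur(turnover, severe_breach=True))   # 120,000,000.0 (6% exceeds the 30m floor)
print(max_fine_eur(turnover, severe_breach=False))  # 80,000,000.0 (4% exceeds the 20m floor)

On these hypothetical figures, exposure for the most serious breaches would be 120 million euros, four times the fixed floor, so for larger businesses the percentage-based cap is likely to be the relevant figure.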

To hear about the practical steps that can be taken now, as well as to discuss further the impact of the Regulation, please join us at our upcoming webinar.


Contact Information

If you have any questions concerning this alert, please contact:

Justin Williams
London
+44 20.7012.9660

Natasha G. Kohne
San Francisco
+1 415.765.9505

Michelle A. Reed
Dallas
+1 214.969.2713

Jenny Arlington
London 
+44 20.7012.9631
