
EU Takes Steps Toward Regulating Use of Artificial Intelligence with the AI Act

Brownstein Client Alert, Jan. 8, 2024

On Dec. 8, 2023, negotiators from the European Parliament and the Council of the European Union reached a provisional agreement on the Artificial Intelligence Act (AI Act) to regulate the use of AI in the European Union (EU), more than five years after the Council first called for AI rules and regulations. The AI Act (the Proposed Act) is expected to set worldwide standards for regulating the use of AI, much as the General Data Protection Regulation (GDPR) did for privacy.

The Proposed Act adopts a “technology neutral,” risk-based approach in which AI systems are classified and regulated based on risk to citizens, individually or collectively. Risk categories include “unacceptable risk” (banned), “high-risk” (subject to stringent rules), “limited risk” (transparency requirements), and “low or minimal risk” (minimal to no regulation).

AI systems with unacceptable risks are those whose use contravenes European values and principles, including “human dignity, freedom, democracy, equality, the rule of law, and respect for human rights.” Examples of AI systems that, if deployed, would create “unacceptable risk” include (i) biometric categorization systems that rely on sensitive characteristics (race, sexual orientation, and political, religious or philosophical beliefs); (ii) untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; (iii) emotion recognition in the workplace and educational institutions; (iv) social scoring based on social behavior or personal characteristics; (v) AI systems that manipulate human behavior to circumvent free will; and (vi) AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation. The AI Act includes limited exceptions for law enforcement’s use of biometric identification technologies in connection with certain serious crimes, such as terrorism.

“High-risk” systems include those AI systems that negatively affect safety or fundamental rights. This tier is proposed to be divided into two subcategories: (1) AI systems used in products (e.g., toys, aviation, medical devices); and (2) software-focused AI systems used in particular critical industries or public services, including infrastructure, education, employment, law enforcement, border control, and assistance in interpreting the law. For high-risk systems, the Proposed Act requires entities to mitigate risks, conduct a fundamental rights impact assessment, use high-quality data sets and ensure capability for ongoing human oversight. Citizens will also have the right to file complaints about high-risk AI systems that affect their rights.

General purpose AI systems (GPAI) designed to interact with humans, such as chatbots and other assistants, are considered “limited risk.” GPAIs are proposed to be subject to transparency obligations, such as notifying users when they are interacting with a chatbot or when biometric or emotion recognition systems are in use. Entities that train foundation or general purpose models for use by others will also be required to provide information about the content used for training. GPAI systems configured for generative uses (e.g., ChatGPT and other generative AI systems) will be required to disclose that generated content was produced by AI (not humans) and to architect their systems to prevent the generation of “illegal” content or the duplication of copyrighted material. The Proposed Act also requires providers of such foundation models to “provide all necessary information for downstream providers to comply with their [respective] obligations under [the Proposed Act].”

In general, the Proposed Act aims to foster innovation while protecting citizens’ rights and promoting democracy. However, the final agreement has frustrated consumer protection advocates and business interests alike. Privacy and human rights groups argue that the Proposed Act does not go far enough to protect citizens from all potential impacts of AI. Many business and educational organizations have expressed concerns that the Proposed Act may throttle research and development efforts. French officials, in particular, criticized the Proposed Act for hampering innovation, alleging that it gives an advantage to American companies.

The Proposed Act will go into effect gradually after its full text is adopted, which is likely to occur in early 2024. For example, certain prohibitions will come into effect six months after adoption, while provisions pertaining to GPAI are slated to take effect 12 months after adoption. The Proposed Act will likely go into full effect in early 2026—two years after formal adoption by the European Parliament.

While the effects of the Proposed Act will likely not be observable immediately, companies should familiarize themselves with the act and its complexities. Violations of the Proposed Act carry fines of up to €35 million or 7% of global annual turnover, depending on the violation. If the GDPR’s impact is any indication, compliance will be key to avoiding being shut out of the European market.


This document is intended to provide you with general information regarding the EU's Artificial Intelligence Act. The contents of this document are not intended to provide specific legal advice. If you have any questions about the contents of this document or if you need legal advice as to an issue, please contact the attorneys listed or your regular Brownstein Hyatt Farber Schreck, LLP attorney. This communication may be considered advertising in some jurisdictions. The information in this article is accurate as of the publication date. Because the law in this area is changing rapidly, and insights are not automatically updated, continued accuracy cannot be guaranteed.
