On December 8, 2023, the EU agreed on the world’s first comprehensive legal framework on AI: the AI Act. After record-long negotiations, EU lawmakers reached a political agreement on a series of controversial issues, and they are expected to formally adopt the agreed text within the next couple of months. Once adopted, the AI Act will ban certain AI systems, regulate general-purpose AI (GPAI), impose extensive obligations on high-risk AI systems backed by high fines, and support innovation through regulatory “sandboxes.” The AI Act will have extraterritorial reach. As the first law of its kind globally, the AI Act has the potential to set the benchmark for AI regulation in other regions, much as the EU General Data Protection Regulation (GDPR) has done.
Key Points of the Political Agreement
While the details of the agreement are yet to be disclosed, official press releases indicate that the highlights of the future AI Act would include:
- New Rules for GPAI. Providers of GPAI systems and GPAI models will be subject to transparency obligations, for example, to provide documentation to those who integrate the GPAI model into their AI systems. Providers of GPAI models will also need to comply with EU copyright law when training the model and publish a summary of the content used to train the model.
The European Commission (EC) will designate more powerful GPAI models that could pose “systemic risks.” Providers of such models will be subject to additional obligations. In particular, they will have to conduct model evaluations and adversarial testing, assess and mitigate systemic risks, and report serious incidents to the EC. Codes of practice will be developed to operationalize these new obligations.
- Wider List of Prohibited AI Systems. AI systems considered to pose an unacceptable threat to fundamental rights will be banned outright. The EU negotiators agreed to ban the following AI systems: i) AI systems that manipulate or exploit individuals (e.g., voice-assisted toys encouraging dangerous behavior in minors); ii) AI systems that perform social scoring; iii) AI systems that perform untargeted scraping of facial images from the internet or CCTV footage; iv) certain biometric systems, for example, emotion recognition systems used in the workplace or in educational institutions, or systems that categorize people to infer sensitive data, such as sexual orientation or religious beliefs; and v) certain applications of predictive policing. In addition, facial recognition for law enforcement purposes in publicly accessible areas will be allowed only in very narrow cases and subject to additional safeguards. The list of prohibited AI systems is broader than in the initial draft of the AI Act. For instance, it extends the ban on social scoring by governments to the private sector as well.
- Additional Obligations for High-Risk AI Systems. Most obligations of the AI Act fall on providers of high-risk AI systems (e.g., AI used in biometric identification systems, medical devices, recruitment, or determining access to education). For examples of such obligations, please see our FAQ, “10 Things You Should Know About the EU Artificial Intelligence Act,” here. The novelty under the new deal is that providers of high-risk AI systems will now also need to carry out a fundamental rights impact assessment (FRIA) to analyze the risks for individuals before placing the AI system on the EU market. Another noteworthy development is that AI systems used to influence elections will also be classified as high-risk.
- More Requirements in Relation to Specific Transparency Risks. AI systems that generate content (such as chatbots) will be subject to certain transparency obligations. Deep fakes and other AI-generated content will have to be labeled as such, and users will need to be informed when biometric categorization or emotion recognition systems are being used. Synthetic audio, video, text, and image content will need to be marked in a machine-readable format and be detectable as artificially generated or manipulated.
- Supporting Innovation. The AI Act aims to foster innovation in the EU. To that end, regulatory sandboxes will be created to provide a controlled environment for the development, testing, and validation of innovative AI systems, including testing in real-world conditions.
- Enforcement of the AI Act. The AI Act will be enforced by AI regulators in each EU country. National AI regulators will have the power to impose fines of up to €35 million or seven percent of the company’s global annual turnover, whichever is higher, for violations involving prohibited AI applications, while the maximum fine for breaches of other obligations under the AI Act will be up to €15 million or three percent. Start-ups and small and medium-sized enterprises will benefit from more proportionate caps on fines. In addition, citizens will have the right to lodge complaints and receive explanations about decisions based on high-risk AI systems.
Two governance bodies will be established at the EU level: i) the AI Office, which will be part of the EC, and ii) the AI Board, composed of representatives of the EU countries. The AI Office will enforce the rules on GPAI at the EU level and will be advised by a scientific panel of independent experts. The AI Board will serve as a coordination platform and oversee the consistent application of the AI Act.
Looking Forward
The political agreement on the AI Act reached on December 8 is a pivotal, albeit not final, step toward the adoption of the law, which will now have to be formally passed by the European Parliament and the Council of the EU. The EC services will now work on clarifying the technical details not addressed at this stage. The AI Act is expected to be adopted in early 2024. Companies will then have two years to prepare before the obligations start to apply. During this transition period, the EC will launch an AI Pact, inviting companies to voluntarily implement key obligations of the AI Act ahead of the legal deadlines.
Wilson Sonsini is closely following the developments in the AI global regulatory landscape and has created a group of attorneys dedicated to advising clients on AI. For more information on the AI Act and other matters related to AI and machine learning, please contact Laura De Boel, Cédric Burton, Yann Padova, Nikolaos Theodorakis, or any other attorney from Wilson Sonsini’s privacy and cybersecurity practice or AI and machine learning practice.
As mentioned above, you can read more about the AI Act in our FAQ, “10 Things You Should Know About the EU Artificial Intelligence Act,” here. You can also read our client alerts on the EU Council’s position here and the EU Parliament’s position here, and stay up to date on the legislative process by registering for our AI Working Group Quarterly Newsletter here. Register here for the AI-volution webinar taking place on December 19, 2023.
Rossana Fol, Roberto Yunquera Sehwani, and Karol Piwonski contributed to the preparation of this Wilson Sonsini Alert.