On March 13, 2024, the European Parliament (EP) approved the latest draft of the European Union’s (EU) Artificial Intelligence Act (AI Act). Following this vote, the text will be sent to the Council of the EU (Council) for formal approval, after which the AI Act will officially become law. Once the AI Act starts to apply, it will introduce a swathe of new obligations for companies providing and using AI systems and general-purpose AI (GPAI) models in the EU, with non-compliance subject to hefty fines of up to EUR 35 million or seven percent of total worldwide annual turnover, whichever is higher.

What Does This Vote by the European Parliament Mean for the EU AI Act?

This vote by the EP does not mean that the AI Act is now finally adopted, but it is the penultimate step before it becomes law. The last remaining step is for the Council to formally approve the draft. No amendments in substance are expected, and the AI Act will likely become law by the summer.

What Will Happen Once the EU AI Act Becomes Law?

  • The AI Act will start to apply in phases. The first AI Act provisions to take effect will be the ones which prohibit certain applications of AI (e.g., AI systems that exploit individuals’ vulnerabilities or that perform untargeted scraping of facial images from the internet or CCTV footage): these provisions will start to apply six months after the law enters into force, which likely means before the end of the year. The second set of provisions to apply will be those imposing requirements in relation to GPAI: these provisions will start to apply one year after the AI Act’s entry into force, so likely in Q2 2025. Most of the rules for high-risk AI systems and AI systems with specific transparency risk will start to apply two years after the AI Act enters into force, so likely in Q2 2026.
  • Additional grace period for AI systems and GPAI already offered in the EU. There will be more time to comply with the AI Act regarding AI systems and GPAI that are already on the EU market.
    • Operators of high-risk AI systems that are offered in the EU before the corresponding rules for high-risk AI start to apply will only need to comply with the AI Act if the system undergoes a significant design change. As an exception, if the high-risk AI system offered in the EU is intended to be used by public authorities, providers and deployers will need to comply with the rules within six years of the AI Act’s entry into force, regardless of whether there has been a significant design change.
    • Providers of GPAI models already offered in the EU will have an additional two years to comply with the requirements, i.e., a total of three years from the AI Act’s entry into force, so likely by Q2 2027.

Who Will Enforce the EU AI Act?

Rules on GPAI will be enforced by the newly created European AI Office of the European Commission, which is expected to become the EU center of AI expertise and to issue pan-EU guidance on AI. Enforcement of the rules on AI systems will primarily take place at the national level. Each EU country will need to designate the competent regulators to enforce the AI Act within one year of the AI Act becoming law. Some countries have already announced their intentions; Spain, for example, has created a dedicated AI authority.

For more information about the scope and requirements in the AI Act, please see our client alert on last December’s political agreement on the AI Act, available here.

For more information, or if you have any questions regarding the AI Act, please contact Laura De Boel, Cédric Burton, Yann Padova, or Nikolaos Theodorakis from Wilson Sonsini’s privacy and cybersecurity practice.

Wilson Sonsini’s AI Working Group assists clients with AI-related matters. Please contact Laura De Boel, Maneesha Mithal, Manja Sachet, or Scott McKinney for more information.

Rossana Fol, Roberto Yunquera Sehwani, and Hattie Watson contributed to the preparation of this blog post.