On February 2, 2025, the first obligations under the European Union’s (EU) Artificial Intelligence Act (AI Act) will become applicable, as the Act begins to apply in phases. This alert summarizes the new obligations that apply as of February 2, 2025, indicates when companies can expect the first enforcement actions, and explains what the enforcement regime will look like. For more information about the scope and requirements of the AI Act, please see our 10 Things You Should Know About the EU AI Act.

New Obligations for Companies Offering or Using AI Tools in the EU

On February 2, 2025, the AI Act’s provisions on AI literacy and prohibited AI uses will become applicable:

  • AI literacy. Article 4 of the AI Act requires all providers and deployers of AI systems to ensure their staff have a sufficient level of knowledge and understanding of AI, including the opportunities and risks that AI presents. This requirement applies to all companies that use AI, even in a low-risk manner. In practice, it typically means implementing AI governance policies and AI training programs for staff.
  • Prohibited AI. Article 5 of the AI Act prohibits certain uses of AI that are considered to raise unacceptable risks, such as AI systems that manipulate or exploit individuals, perform social scoring, or infer individuals’ emotions in the workplace or in education institutions. Please see our 10 Things You Should Know About the EU AI Act for details on the prohibited uses of AI. The ban applies both to companies offering such AI systems and to companies using them. The European Commission (EC) is expected to issue guidelines on prohibited AI practices early this year.

A Complex Web of National Enforcement Regimes Combined with EU-Level Enforcement

The AI Act requires each EU country to identify the competent regulators to enforce it. EU countries have leeway in how they structure their national enforcement. Some countries, like Spain, have taken a centralized approach by establishing a new dedicated AI agency (the Spanish AI Supervisory Agency) that will be responsible for overseeing compliance with the AI Act across all sectors. Other countries may follow a decentralized model where multiple existing regulators will have responsibility for overseeing compliance in various sectors (e.g., a Health and Safety Authority could oversee use of AI in the health sector). In addition, other regulators may supervise compliance with fundamental rights-related obligations. In many countries the enforcement structure is still unclear as EU countries have until August 2, 2025, to designate the competent authorities.

Moreover, EU countries are given leeway to determine the sanctions that will apply. The AI Act provides three thresholds for EU countries to consider, depending on the nature of the violation (in each case, the maximum fine is whichever amount is higher):

  1. Engaging in a prohibited AI practice may result in a fine of up to EUR 35 million or seven percent of worldwide annual turnover.
  2. Noncompliance with the obligations on high-risk AI may result in a fine of up to EUR 15 million or three percent of worldwide annual turnover.
  3. Supplying incorrect, incomplete, or misleading information to national competent authorities in response to a request may result in a fine of up to EUR 7.5 million or one percent of worldwide annual turnover.

Unlike other well-known EU digital regulations such as the General Data Protection Regulation, the AI Act does not provide for a one-stop-shop mechanism that centralizes enforcement in cross-border matters. However, the AI Act provides that the national competent authorities will meet in a new European Artificial Intelligence Board to coordinate enforcement actions.

In practice, companies may need to navigate a myriad of national laws to understand their exposure to national regulators and their risk of sanctions. The only exception is for providers of general-purpose AI (GPAI) models, which are subject to centralized enforcement by the EC. The EC may impose a fine on providers of GPAI models of up to EUR 15 million or three percent of worldwide annual turnover for noncompliance with the AI Act. The EC is currently working with industry to develop a Code of Practice, and providers of GPAI models will be able to demonstrate compliance with the AI Act by adhering to this Code.

Timing for First Enforcement Actions

EU countries have until August 2, 2025, to set up their national enforcement regimes. The first enforcement actions are therefore expected for the second half of 2025. Companies should use the first half of 2025 to implement a strong AI governance strategy and take the necessary steps to remediate any compliance gaps.

For more information on how to ensure your AI systems and models comply with the EU AI Act, please contact Cédric Burton, Laura De Boel, Yann Padova, or Nikolaos Theodorakis from Wilson Sonsini’s data, privacy, and cybersecurity practice.

Wilson Sonsini’s AI Working Group assists clients with AI-related matters. Please contact Laura De Boel, Maneesha Mithal, Manja Sachet, or Scott McKinney for more information.

Matthew Nuding and Jessica O’Neill contributed to the preparation of this Alert.