In Europe, recent advances in artificial intelligence (AI) have given rise to intense debate over how this technology should be regulated. Companies that have developed AI tools, or that are considering implementing AI, should assess the implications of recent legislative developments and regulatory action. This alert discusses the most recent legislative and regulatory developments in Europe and identifies key steps companies should take in light of these developments.

Navigating the Regulatory Landscape in Europe

2023 has so far been a productive year for AI regulatory and legislative action in Europe. We summarize the key developments here:

  • EU is finalizing first-ever omnibus AI legislation. The European Commission (EC) issued its proposal for an AI Act in April 2021. On December 6, 2022, the Council of the EU (the Council) adopted its amendments to the draft act. The EU Parliament adopted its position on June 14, 2023, after reaching a political agreement in April 2023. This will kick off the final stage of the EU legislative process, i.e., the negotiations among Parliament, the Council, and the EC (the so-called “trilogues”). Once the AI Act comes into force, it will differentiate between “prohibited” and “high-risk” AI systems. The former will be banned because of their (assumed) unacceptable risks, while the latter may be operated provided companies comply with certain requirements (such as providing human oversight, training and testing, transparency, and security).[1] The exact requirements and their scope will be subject to intense debate during the trilogues. In particular, Parliament is pushing for new requirements for foundation models and generative AI, with the aim of addressing concerns over the rapid spread of these technologies in the EU.
  • UK follows principles-based approach. In contrast to the EU’s approach, the UK government announced in a White Paper in March 2023 that it will not seek to introduce specific legislation governing the use of AI. Instead, it will introduce a nonstatutory regulatory framework based on five core principles. Regulators will be required to have regard to these principles when enforcing the law and when issuing guidance on best practice.
  • Compliance with EU/UK data protection legislation. In Europe, companies must consider the application of the General Data Protection Regulation (GDPR) when using AI tools. Data protection regulators, including the Spanish Supervisory Authority (AEPD), the French Supervisory Authority (CNIL), and the Information Commissioner’s Office (ICO),[2] have issued guidance on how to achieve GDPR compliance when using AI, with the ICO placing particular emphasis on the importance of transparency and explainability. This guidance also provides an overview of control objectives for ensuring compliance with the GDPR. The ICO also published a blog post in April 2023 outlining specific questions that companies should consider when developing or using generative AI. The CNIL likewise focuses on generative AI in the Action Plan for AI that it published on May 16, 2023. In terms of regulatory enforcement, the Italian Garante has taken the lead with several enforcement actions against generative AI services. The European Data Protection Board has created a task force at the EU level, and other EU data protection authorities (such as those in France, Ireland, Spain, and the Netherlands) have launched inquiries into generative AI. These regulatory actions highlight the complexity of aligning generative AI with GDPR obligations such as legal basis, transparency, and individuals’ rights.
  • Compliance with EU/UK sector-specific requirements. Companies should also consider sector-specific requirements. For instance, where the use of AI may have an impact on individuals (because it makes decisions in relation to employees or consumers, for example), legislation governing equality of treatment should be considered.
  • EU is preparing voluntary AI standards. The EU has launched several initiatives to create AI standards for industry to commit to voluntarily. For instance, at the recent EU-U.S. Trade and Technology Council summit, the EC announced that the EU and U.S. are drafting a voluntary AI Code of Conduct that industry could sign up to. The EC is also developing a voluntary “AI Pact,” inviting AI companies operating in the EU to begin complying with the draft AI Act’s obligations before they come into force.

Top Steps to Consider When Using AI in Europe

  • Identify applicable laws and regulations. Identifying how and why AI will be deployed is an important first step in identifying the legal requirements that may apply. For example, if AI is to be used in a consumer-facing context, it may be necessary to consider issues arising under the GDPR and consumer rights legislation. Organizations using AI may need to update their privacy policies and consent forms, and assess how to handle individuals’ rights in relation to the personal data processed through the AI. They should also consider content moderation requirements, in particular when offering services to children. Where AI is deployed with respect to employees (for example, in the field of recruitment), equality legislation could be particularly relevant with a view to avoiding bias or discrimination. In addition, organizations should monitor developments in relation to the creation of voluntary AI standards, as described above.
  • Consider how your vendor can help you achieve compliance. If the AI solution is to be bought “off the shelf,” carry out due diligence on the vendor. For example, if the vendor claims no personal data is used to train their models, verify how the vendor ensures this in practice. Also, vendors should provide information about how their AI systems function so that their customers can provide the necessary explanations to the individuals concerned.
  • Implement internal controls to manage AI risk. Companies should have various controls in place to manage their AI risk, such as processes to (i) audit algorithm outputs for accuracy; (ii) ensure human control of sensitive processes; and (iii) mitigate bias in algorithm outputs (a minimal audit sketch follows below). The new ISO/IEC standard on AI risk management[3] may be a helpful tool to identify and manage AI risk.
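To make controls (i) and (iii) more concrete, the following is a minimal, hypothetical sketch of what a periodic output audit could look like in practice. It assumes, purely for illustration, that an organization logs each model prediction alongside the actual outcome and a relevant group attribute; it computes overall accuracy and a simple demographic-parity gap, and flags the results for human review when that gap exceeds a chosen threshold. All names, fields, and the threshold value are illustrative assumptions, not requirements drawn from the AI Act or any regulator’s guidance.

    # Hypothetical sketch of a periodic AI output audit (illustrative only).
    # Assumes logged records of the form:
    #   {"prediction": 0 or 1, "label": 0 or 1, "group": "<group name>"}
    from collections import defaultdict

    def audit_outputs(records, disparity_threshold=0.1):
        """Compute accuracy and a demographic-parity gap over logged records."""
        correct = 0
        positives_by_group = defaultdict(int)
        totals_by_group = defaultdict(int)

        for r in records:
            correct += r["prediction"] == r["label"]
            positives_by_group[r["group"]] += r["prediction"] == 1
            totals_by_group[r["group"]] += 1

        accuracy = correct / len(records)
        # Demographic parity: compare positive-outcome rates across groups.
        rates = {g: positives_by_group[g] / totals_by_group[g]
                 for g in totals_by_group}
        disparity = max(rates.values()) - min(rates.values())

        return {
            "accuracy": accuracy,
            "positive_rates": rates,
            "disparity": disparity,
            # Escalate to human review when groups receive positive
            # outcomes at noticeably different rates (control (ii)).
            "needs_review": disparity > disparity_threshold,
        }

    if __name__ == "__main__":
        sample = [
            {"prediction": 1, "label": 1, "group": "A"},
            {"prediction": 0, "label": 0, "group": "A"},
            {"prediction": 1, "label": 0, "group": "B"},
            {"prediction": 0, "label": 1, "group": "B"},
        ]
        print(audit_outputs(sample))

In a real deployment, the appropriate fairness metric, the threshold, and the escalation process would depend on the use case and applicable legal requirements; the point of the sketch is simply that such controls can be automated and run on a recurring schedule.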

Organizations should consider undertaking a thorough review of their compliance practices before using AI-powered solutions. They should consider the applicable legal framework and voluntary AI standards. In addition, with the EU’s AI Act just around the corner, organizations should begin assessing the potential impact that it will have on their business.

Based in the heart of Silicon Valley, we have been helping clients navigate AI for years. For more information on the EU AI Act and other matters related to AI and machine learning, please contact Cédric Burton, Laura De Boel, Maneesha Mithal, or any other attorney from Wilson Sonsini’s privacy and cybersecurity practice or AI and machine learning practice.

Nikolaos Theodorakis, Laura Brodahl, Tom Evans, Mina Gholiof, and Michael Kern contributed to the preparation of this client alert.


[1] For a summary of the AI Act, please consult our Fact Sheet on the Draft AI Act.

[2] Draft for public consultation (which closed on May 1, 2020).

[3] ISO/IEC 23894:2023 Information technology – Artificial intelligence – Guidance on risk management.