A significant milestone in the legislative process of the AI Act has been reached with the vote of the European Parliament (EP) on June 14, 2023. The text now enters a new phase, during which all three EU institutions (the Council of the EU (Council), the EU Commission (EC), and the EP) will work towards an agreement on the final text.
The fast-moving developments and growth of AI have generated significant political pressure, pushing the EU institutions to finalize the text. Spain, which will assume the presidency of the Council as of July 1, 2023, has stated that finishing the AI Act will be a top priority on its agenda. This alert summarizes the key changes proposed by the EP and what to expect next.
The legislative process to pass the AI Act started more than two years ago when the EC published its proposal (the “draft AI Act”) in April 2021. The EC adopted a horizontal, cross-sector, and risk-based approach to regulating AI. On December 6, 2022, the Council finalized its proposed amendments to the draft AI Act, following a more business-friendly approach to regulating AI than the EC’s proposal. In contrast, the EP has now taken a strict, if not overly cautious, approach, proposing a number of significant amendments and additional obligations.
5 Key Changes Proposed by the EP
- Introduction of general principles applicable to all AI systems. The EP introduces a new set of six high-level general principles applicable to all AI systems. The general principles include 1) human agency and oversight, 2) technical robustness and safety, 3) privacy and data governance, 4) transparency, 5) diversity, nondiscrimination and fairness, and 6) social and environmental well-being. However, the precise legal consequences of these general principles remain ambiguous.
- Expanded list of prohibited AI practices. The EP’s position significantly expands the list of prohibited AI practices to include, for example, a ban of predictive policing, a ban of the creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, and a ban of emotion recognition systems in particular areas (such as in law enforcement, the workplace, and educational institutions). In doing so, the EP has closely followed the joint opinion published by the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) in 2021, in which they called for a substantial expansion of the list of prohibited AI practices.
- Expanded list of high-risk AI systems. The EP also significantly broadened the scope of AI systems that may be designated as high-risk AI systems by the EC (listed in Annex III of the draft AI Act). Annex III will now cover AI systems “where these pose a significant risk of harm to health and safety, or an adverse impact on fundamental rights, to the environment, or to democracy and the rule of law” when that risk is “equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred” to in said Annex.
Additionally, the EP has already suggested specific AI systems to be designated as high-risk systems, including 1) AI systems intended to be used to influence the outcome of an election or the voting behavior of natural persons and 2) AI systems to be used by very large online platforms (VLOPs) in their systems to recommend user-generated content.
- Introduction of a specific and broad regulation of “foundation models.” The EP proposes new and specific obligations on providers of “foundation models,” a fast-growing field in AI. According to the EP, a foundation model means “an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.” Its providers will face additional obligations such as:
- assessing and mitigating all foreseeable risks to a broad range of issues (such as health, safety, fundamental rights, the environment, democracy, and the rule of law);
- registering their models in an EU database; and
- if the foundation models are used to generate text, images, audio, or video (“generative AI”), they will have to develop, design, and train their models in a way to ensure adequate safeguards against the generation of illegal content.
- Strengthening individuals’ rights and increasing companies’ liability. The EP also suggested a new chapter on remedies that will afford individuals additional rights inspired by the General Data Protection Regulation. These include a right to lodge a complaint with a national supervisory authority, a right to an effective judicial remedy, and a right to an explanation in case of a decision based on a high-risk AI system that produces legal effects or significant adverse impacts on fundamental rights. The EP has also substantially increased the potential fines for noncompliance with the prohibited AI practices, ranging up to €40 million or, if the offender is a company, up to 7 percent of the annual turnover, whichever is higher.
Next Steps: An Agreement Before the End of the Year?
The three EU institutions aim to reach an agreement by the end of 2023, ahead of the next European Parliament elections in June 2024. Discussions on the AI Act are scheduled for July 18, September 26, and October 26. It is unclear whether a political agreement will be reached by the end of the year, but in light of the current political pressure, all parties involved will make strong efforts to reach such an agreement.
Companies that use, develop, or implement AI solutions should consider reviewing the existing drafts to assess the drafts’ potential impact on their operations.
For more information on the EU AI Act and other matters related to AI and machine learning, please contact Cédric Burton, Laura De Boel, Yann Padova, Maneesha Mithal, or any other attorney from Wilson Sonsini’s privacy and cybersecurity practice or AI and machine learning practice.
Sebastian Thess contributed to the preparation of this alert.