On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations between the European Parliament, the European Commission, and the Council, which adopted its own amendments in late 2022 (see our blog post here for further details). European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.
In perhaps the most significant change from the Commission and Council drafts, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to demonstrate “through appropriate design, testing and analysis” that they have identified, reduced, and mitigated “reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).
Providers of foundation models would be further required to meet obligations around data governance, including examining the suitability of data sources and possible biases (Article 28b(2)(b)); ensuring “appropriate levels” of performance, predictability, safety and cybersecurity (Article 28b(2)(c)); and conforming to a range of sustainability standards (Article 28b(2)(d)). They would also need to register their foundation model in an EU-wide database prior to making it available or putting it into use in the EU (Article 28b(2)(g)).
The MEP amendments also introduce specific obligations for providers of foundation models used in “generative AI” systems – defined as “AI systems specifically intended to generate with varying levels of autonomy, content such as complex text, images, audio or video” (Article 28b(4)). These include making publicly available “a sufficiently detailed summary of the use of training data protected under copyright law” (Article 28b(4)(c)).
Beyond proposing amendments relating to foundation models, the MEPs also suggested extending the list of AI uses that would be prohibited under the AI Act (Article 5) (as previously discussed in our blog post here). They also proposed amendments to the criteria for “high-risk” AI systems – a system would have to “pose a significant risk of harm to the health, safety, or fundamental rights” of individuals to be categorized in this way (Article 6(2)). Providers would be obliged to notify regulators if they consider that their systems do not pose a “significant risk”, with the potential for penalties to be issued if systems are put into use but are subsequently found to have been misclassified (Article 6(2a)).
The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act, or other tech regulatory matters, we are happy to assist with any queries.