In 2021, countries in EMEA continued to focus on the legal constructs around artificial intelligence (“AI”), and that momentum has carried into 2022. The EU has been particularly active in AI—from its proposed horizontal AI regulation to recent enforcement and guidance—and will remain active going into 2022. The UK follows closely behind…
In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.
The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but also prohibits three AI practices and imposes transparency obligations on providers of certain non-high-risk AI systems. Notably, the proposal would impose significant administrative costs on providers of high-risk AI systems—estimated at around 10 percent of the systems’ underlying value—covering compliance, oversight, and verification. This blog highlights several key aspects of the proposal.…
On February 11, 2021, the European Commission launched a public consultation on its initiative to fight child sexual abuse online (the “Initiative”), which aims to impose obligations on online service providers to detect child sexual abuse online and to report it to public authorities. The consultation is part of the data collection activities announced in the Initiative’s inception impact assessment, issued in December 2020. The consultation runs until April 15, 2021, and the Commission intends to propose the necessary legislation by the end of the second quarter of 2021.
On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.
The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.…
In April 2019, the UK Government published its Online Harms White Paper and launched a Consultation. In February 2020, the Government published its initial response to that Consultation. In its 15 December 2020 full response to the Online Harms White Paper Consultation, the Government outlined its vision for tackling harmful content online through a new regulatory framework, to be set out in a new Online Safety Bill (“OSB”).
This development comes at a time of heightened scrutiny of, and regulatory changes to, digital services and markets. Earlier this month, the UK Competition and Markets Authority published recommendations to the UK Government on the design and implementation of a new regulatory regime for digital markets (see our update here).
The UK Government is keen to ensure that policy initiatives in this sector are coordinated with similar legislative efforts elsewhere, including in the US and the EU. The European Commission also published its proposal for a Digital Services Act on 15 December, proposing a somewhat similar system for regulating illegal online content that places greater responsibilities on technology companies.
Key points of the UK Government’s plans for the OSB are set out below.…
On December 15, 2020, the European Commission published its proposed Regulation on a Single Market for Digital Services, more commonly known as the Digital Services Act (“DSA Proposal”). In publishing the Proposal, the Commission noted that its goal was to protect consumers and their fundamental rights online, establish an accountability framework for online services, and foster innovation, growth and competitiveness in the single market. On the same day, the Commission also published its proposal for a Digital Markets Act (“DMA”), which would impose new obligations and restrictions on online services that act as “designated gatekeepers” (see our analysis of the DMA Proposal here).
On 25 November 2020, the European Commission published a proposal for a Regulation on European Data Governance (“Data Governance Act”). The proposed Act aims to facilitate data sharing across the EU and between sectors, and is one of the deliverables included in the European Strategy for Data, adopted in February 2020. (See our previous blog here for a summary of the Commission’s European Strategy for Data.) The press release accompanying the proposed Act states that more specific proposals on European data spaces are expected to follow in 2021, and will be complemented by a Data Act to foster business-to-business and business-to-government data sharing.
The proposed Data Governance Act sets out rules relating to the following:
- Conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection;
- Obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services;
- Introduction of the concept of “data altruism” and the possibility for organisations to register as a “Data Altruism Organisation recognised in the Union”; and
- Establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission.
On 11 November 2020, the European Data Protection Board (“EDPB”) issued two draft recommendations on how organizations may lawfully transfer personal data from the EU to countries outside the EU (“third countries”). These draft recommendations, which are non-final and open for public consultation until 30 November 2020, follow the EU Court of Justice (“CJEU”) decision in Case C-311/18 (“Schrems II”). (For a more in-depth summary of the CJEU decision, please see our blog post here and our audiocast here. The EDPB also published FAQs on the Schrems II decision on 24 July 2020, available here.)
The two recommendations adopted by the EDPB are:
- Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data (“Draft Recommendations on Supplementary Measures”); and
- Recommendations 02/2020 on the European Essential Guarantees for surveillance measures (“Recommendations on EEG”).
In this edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).…
On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”). The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018. It builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs addressing specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).