
Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm's Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade-related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), "Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements." The guide adds: "Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters."

2023 is set to be an important year for developments in AI regulation and policy in the EU. On December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to adoption. The European Parliament is currently developing its own position on the AI Act, which is expected to be finalized by March 2023. Following this, the Council, the Parliament, and the European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, the AI Act will be directly applicable across all EU Member States, and its obligations are likely to apply three years after its entry into force (according to the Council’s compromise text).
Continue Reading EU AI Policy and Regulation: What to look out for in 2023

In 2021, countries in EMEA continued to focus on the legal constructs around artificial intelligence (“AI”), and the momentum continues in 2022. The EU has been particularly active on AI, from its proposed horizontal AI regulation to recent enforcement and guidance, and will continue to be active going into 2022. Similarly, the UK follows closely behind.

In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.

The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but also prohibits three AI practices and imposes transparency obligations on providers of certain non-high-risk AI systems. Notably, the proposal is expected to impose significant administrative costs in relation to high-risk AI systems, amounting to around 10 percent of the underlying value, reflecting compliance, oversight, and verification costs. This blog highlights several key aspects of the proposal.
Continue Reading European Commission Proposes New Artificial Intelligence Regulation

On February 11, 2021, the European Commission launched a public consultation on its initiative to fight child sexual abuse online (the “Initiative”), which aims to impose obligations on online service providers to detect child sexual abuse online and to report it to public authorities. The consultation is part of the data collection activities announced in the Initiative’s inception impact assessment, issued in December 2020. The consultation runs until April 15, 2021, and the Commission intends to propose the necessary legislation by the end of the second quarter of 2021.
Continue Reading European Commission Launches Consultation on Initiative to Fight Child Sexual Abuse

On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities that AI systems present to promote societal development and human rights. Alongside these opportunities, it also identifies the risk that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ rights to freedom of assembly and expression.
Continue Reading AI Update: The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

In April 2019, the UK Government published its Online Harms White Paper and launched a Consultation. In February 2020, the Government published its initial response to that Consultation. In its 15 December 2020 full response to the Online Harms White Paper Consultation, the Government outlined its vision for tackling harmful content online through a new regulatory framework, to be set out in a new Online Safety Bill (“OSB”).

This development comes at a time of heightened scrutiny of, and regulatory changes to, digital services and markets. Earlier this month, the UK Competition and Markets Authority published recommendations to the UK Government on the design and implementation of a new regulatory regime for digital markets (see our update here).

The UK Government is keen to ensure that its policy initiatives in this sector are coordinated with similar legislative efforts elsewhere, including in the US and the EU. The European Commission also published its proposal for a Digital Services Act on 15 December, proposing a somewhat similar system for regulating illegal online content that places greater responsibility on technology companies.

Key points of the UK Government’s plans for the OSB are set out below.
Continue Reading UK Government Plans for an Online Safety Bill

On December 15, 2020, the European Commission published its proposed Regulation on a Single Market for Digital Services, more commonly known as the Digital Services Act (“DSA Proposal”).  In publishing the Proposal, the Commission noted that its goal was to protect consumers and their fundamental rights online, establish an accountability framework for online services, and foster innovation, growth and competitiveness in the single market.  On the same day, the Commission also published its proposal for a Digital Markets Act (“DMA”), which would impose new obligations and restrictions on online services that act as “designated gatekeepers” (see our analysis of the DMA Proposal here).
Continue Reading EU Publishes Proposal For Digital Services Act

On 25 November 2020, the European Commission published a proposal for a Regulation on European Data Governance (“Data Governance Act”).  The proposed Act aims to facilitate data sharing across the EU and between sectors, and is one of the deliverables included in the European Strategy for Data, adopted in February 2020.  (See our previous blog here for a summary of the Commission’s European Strategy for Data.)  The press release accompanying the proposed Act states that more specific proposals on European data spaces are expected to follow in 2021, and will be complemented by a Data Act to foster business-to-business and business-to-government data sharing.

The proposed Data Governance Act sets out rules relating to the following:

  • Conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection;
  • Obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services;
  • Introduction of the concept of “data altruism” and the possibility for organisations to register as a “Data Altruism Organisation recognised in the Union”; and
  • Establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission.

Continue Reading AI Update: The European Commission publishes a proposal for a Regulation on European Data Governance (the Data Governance Act)

On 11 November 2020, the European Data Protection Board (“EDPB”) issued two draft recommendations relating to the rules on how organizations may lawfully transfer personal data from the EU to countries outside the EU (“third countries”).  These draft recommendations, which are non-final and open for public consultation until 30 November 2020, follow the EU Court of Justice (“CJEU”) decision in Case C-311/18 (“Schrems II”).  (For a more in-depth summary of the CJEU decision, please see our blog post here and our audiocast here. The EDPB also published FAQs on the Schrems II decision on 24 July 2020, available here.)

The two recommendations adopted by the EDPB are:

Continue Reading EDPB adopts recommendations on international data transfers following Schrems II decision

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blog posts on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).
Continue Reading UK ICO publishes guidance on Artificial Intelligence