Artificial Intelligence (AI)

Earlier this week, Members of the European Parliament (MEPs) cast their votes in favor of the much-anticipated AI Act. With 523 votes in favor, 46 against, and 49 abstentions, the vote marks the culmination of an effort that began in April 2021, when the EU Commission first published its proposal for the Act.

Here’s what lies ahead:

Continue Reading EU Parliament Adopts AI Act

On February 20, Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) announced a new Artificial Intelligence (AI) task force in the House of Representatives, with the goal of developing principles and policies to promote U.S. leadership and security with respect to AI. Rep. Jay Obernolte (R-CA) will chair the task force, joined by Rep. Ted Lieu (D-CA) as co-chair. Several other senior members of the California delegation, including Rep. Darrell Issa (R-CA) and retiring Rep. Anna Eshoo (D-CA), will also participate in the effort.

Continue Reading New Bipartisan House Task Force May Signal Legislative Momentum on Artificial Intelligence

On January 24, 2024, the European Commission (“Commission”) announced that, following the political agreement reached in December 2023 on the EU AI Act (“AI Act”) (see our previous blog here), the Commission intends to proceed with a package of measures (“AI Innovation Strategy”) to support AI startups and small and medium-size enterprises (“SMEs”) in the EU.

Alongside these measures, the Commission also announced the creation of the European AI Office (“AI Office”), which is due to begin formal operations on February 21, 2024.

This blog post provides a high-level summary of these two announcements, in addition to some takeaways to bear in mind as we draw closer to the adoption of the AI Act.

Continue Reading European Commission Announces New Package of AI Measures

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.

Continue Reading Spotlight Series on Global AI Policy — Part III: China’s Policy Approach to Artificial Intelligence

On 15 January 2024, the UK’s Information Commissioner’s Office (“ICO”) announced the launch of a consultation series (“Consultation”) on how elements of data protection law apply to the development and use of generative AI (“GenAI”). For the purposes of the Consultation, GenAI refers to “AI models that can create new content e.g., text, computer code, audio, music, images, and videos”.

As part of the Consultation, the ICO will publish a series of chapters over the coming months outlining its thinking on how the UK GDPR and Part 2 of the Data Protection Act 2018 apply to the development and use of GenAI. The first chapter, published in tandem with the Consultation’s announcement, covers the lawful basis, under UK data protection law, for web scraping of personal data to train GenAI models. Interested stakeholders are invited to provide feedback to the ICO by 1 March 2024.

Continue Reading ICO Launches Consultation Series on Generative AI

Recent proposals to amend the UK’s national security investment screening regime mean that investors may in future be required to make mandatory, suspensory, pre-closing filings to the UK Government when seeking to invest in a broader range of companies developing generative artificial intelligence (AI). The UK Government launched a Call for Evidence in November 2023 seeking input from stakeholders on a number of potential amendments to the operation of the National Security and Investment Act (NSIA) regime, including whether generative AI, which the Government states is not currently directly in scope of the AI filing trigger, should expressly fall within the mandatory filing regime. The Call for Evidence closes on 15 January 2024.

This blog sets out how the NSIA regime operates, how investments in companies developing AI are currently caught by the NSIA, and the Government’s proposals to refine the scope of AI activities captured by the regime, including potentially bringing generative AI expressly within scope.

Continue Reading UK Government Consults on Amending Mandatory Filing Obligations for AI Acquisitions

On December 9, 2023, the European Parliament, the Council of the European Union and the European Commission reached a political agreement on the EU Artificial Intelligence Act (“AI Act”) (see here for the Parliament’s press statement, here for the Council’s statement, and here for the Commission’s statement). Following three days of intense negotiations, during the fifth “trilogue” discussions amongst the EU institutions, negotiators reached an agreement on key topics, including: (i) the scope of the AI Act; (ii) AI systems classified as “high-risk” under the Act; and (iii) law enforcement exemptions.

As described in our previous blog posts on the AI Act (see here, here, and here), the Act will establish a comprehensive and horizontal law governing the development, import, deployment and use of AI systems in the EU. In this blog post, we provide a high-level summary of the main points EU legislators appear to have agreed upon, based on the press releases linked above and a further Q&A published by the Commission. However, the text of the political agreement is not yet publicly available. Further, although a political agreement has been reached, a number of details remain to be finalized in follow-up technical working meetings over the coming weeks.

Continue Reading EU Artificial Intelligence Act: Nearing the Finish Line

Recently, a bipartisan group of U.S. senators introduced new legislation to address transparency and accountability for artificial intelligence (AI) systems, including those deployed for certain “critical impact” use cases. While many other targeted, bipartisan AI bills have been introduced in both chambers of Congress, this bill appears to be one of the first to propose specific legislative text for broadly regulating AI testing and use across industries.

Continue Reading Bipartisan group of Senators introduce new AI transparency legislation

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.

Continue Reading From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organisations developing advanced AI systems, including generative AI and foundation models.

In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023.  After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers.  The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).

The Principles, drafted in response to recent developments in advanced AI systems, build on the existing OECD AI Principles published in May 2019 (see our blog post here). They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.

Continue Reading G7 Countries Publish Draft Guiding Principles for Advanced AI Development