On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not provide the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes that AI systems offer major opportunities to promote societal development and human rights. Alongside these opportunities, it also identifies risks that AI could pose to rights protected by the European Convention on Human Rights (ECHR), as well as to democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardize individuals’ right to freedom of assembly and expression.

Continue Reading AI Update: The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

On February 4, 2020, the United Kingdom’s Centre for Data Ethics and Innovation (“DEI”) published its final report on “online targeting” (the “Report”), examining practices used to monitor a person’s online behaviour and subsequently customize their experience. In October 2018, the UK government appointed the DEI, an expert committee that advises the UK government on how to maximize the benefits of new technologies, to explore how data is used in shaping people’s online experiences. The Report sets out its findings and recommendations.

Continue Reading Centre for Data Ethics and Innovation publishes final report on “online targeting”

On 19 September 2019, the European Parliamentary Research Service (“EPRS”)—the European Parliament’s in-house research service—released a briefing paper that summarizes the current status of the EU’s approach to developing a regulatory framework for ethical AI.  Although not a policymaking body, the EPRS can provide useful insights into the direction of EU policy on an issue.  The paper summarizes recent calls in the EU for adopting legally binding instruments to regulate AI, in particular to set common rules on AI transparency, set common requirements for fundamental rights impact assessments, and provide an adequate legal framework for facial recognition technology.

The briefing paper follows publication of the European Commission’s high-level expert group’s Ethics Guidelines for Trustworthy Artificial Intelligence (the “Guidelines”), and the announcement by incoming Commission President Ursula von der Leyen that she would put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within her first 100 days in office.

Continue Reading European Parliamentary Research Service issues a briefing paper on implementing EU’s ethical guidelines on AI

On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as the guidance’s interaction with existing legislation and how to reconcile tensions between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a particular focus on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

Continue Reading AI Update: EU High-Level Working Group Publishes Ethics Guidelines for Trustworthy AI

On February 27, 2019, Reps. Brenda Lawrence (D-Mich.) and Ro Khanna (D-Calif.) introduced a resolution emphasizing the need to develop artificial intelligence (“AI”) ethically. H.Res. 153, titled “Supporting the development of guidelines for ethical development of artificial intelligence,” calls on the government to work with stakeholders to ensure that AI is developed in a “safe, responsible, and democratic” fashion. The resolution has nine Democratic sponsors and was referred to the House Committee on Science, Space, and Technology.

Continue Reading AI Update: U.S. House Resolution on AI Ethical Development Introduced

On 18 December 2018, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published new draft guidance on “AI Ethics” (the “guidance”).  The AI HLEG is a European Commission-backed working group made up of representatives from industry, academia and NGOs, and was formed as part of the Commission’s ongoing work to develop EU policy responses to the development, challenges and new opportunities posed by AI technologies.  Stakeholders are invited to comment on the draft through the European AI Alliance before it is finalized in March 2019.

The guidance recognizes the potential benefits of AI technologies for Europe, but also stresses that AI must be developed and implemented with a “human-centric approach” that results in “Trustworthy AI”. The guidance then explains in detail the concept of “Trustworthy AI” and the issues stakeholders should navigate in order to achieve it.  A more detailed summary of the guidance is set out below.

This guidance is not binding, but it is likely to influence EU policymakers as they consider whether and how to legislate in the AI space going forward. The AI HLEG also envisages that the final version of the guidance, due in March 2019, will include a mechanism to allow stakeholders to voluntarily endorse its principles.  The guidance also states that the AI HLEG will consider making legislative recommendations in its separate deliverable on “Policy & Investment Recommendations,” due in May 2019.

Continue Reading EU Working Group Publishes Draft Guidance on AI Ethics

On 20 November 2018, the UK government published its response (the “Response”) to the June 2018 consultation (the “Consultation”) regarding the proposed new Centre for Data Ethics and Innovation (“DEI”). First announced in the UK Chancellor’s Autumn 2017 Budget, the DEI will identify the measures needed to strengthen the way data and AI are used and regulated, advising on how to address potential gaps in regulation and outlining best practices in the area. The DEI is described as the first of its kind globally, and represents an opportunity for the UK to take the lead in the debate on how data is regulated.

Continue Reading IoT Update: The UK Government’s Response to Centre for Data Ethics and Innovation Consultation

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.

Continue Reading Spotlight Series on Global AI Policy — Part III: China’s Policy Approach to Artificial Intelligence

Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

Continue Reading Biden Administration Announces Artificial Intelligence Executive Order

On May 23, 2023, the White House announced that it took the following steps to further advance responsible artificial intelligence (“AI”) practices in the U.S.:

  • the Office of Science and Technology Policy (“OSTP”) released an updated strategic plan that focuses on federal investments in AI research and development (“R&D”);
  • OSTP issued a new request for information (“RFI”) on critical AI issues; and
  • the Department of Education issued a new report on risks and opportunities related to AI in education.

Continue Reading White House Announces New Efforts to Advance Responsible AI Practices