On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as the interaction with existing legislation and how to reconcile tensions between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a focus in particular on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

The guidance is not binding, but stakeholders can voluntarily use the guidance as a way to operationalise their commitment to achieving “Trustworthy AI,” which is the AI HLEG’s term for the gold standard of an ethical approach to AI.  According to the AI HLEG, Trustworthy AI consists of the following three components:

  1. Lawful. It should comply with all applicable laws and regulations;
  2. Ethical. It should comply with ethical principles and values; and
  3. Robust. It should be robust from both a technical and social perspective.

Each component is considered “necessary but not sufficient for the achievement of Trustworthy AI,” and as such all three should “work in harmony and overlap.”  The introduction of “lawfulness” as a component of Trustworthy AI is one of the key changes in the final version of the guidance as compared to the draft.  The guidance recognizes that AI systems do not operate in a legal vacuum and are subject to a number of existing laws, including (but not limited to) the General Data Protection Regulation (GDPR), the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination legislation, consumer law, and sector-specific laws (such as the Medical Devices Regulation in the healthcare sector).  The guidance confirms that organizations developing, deploying and using AI systems should comply with such existing laws, to the extent that they apply.  The guidance does not discuss these legal obligations in further detail, but focuses on the latter two components: that AI systems should be “ethical” and “robust.”

Chapter I of the guidance outlines the four ethical principles that should apply to AI systems: (1) respect for human autonomy; (2) prevention of harm; (3) fairness; and (4) explicability.  The guidance frames these as “ethical imperatives” that AI practitioners should always strive to adhere to.  Yet the guidance recognizes that tensions may arise between these principles, for which there is no fixed solution.  For instance, the prevention of harm (such as preventing terrorism) may conflict with respect for human autonomy (such as protecting privacy).  As such, the guidance notes that while the four ethical principles offer some direction towards solutions, they remain abstract prescriptions, and AI practitioners should approach ethical dilemmas “via reasoned, evidence-based reflection rather than intuition or random discretion.”

Chapter II of the guidance sets out the following seven key requirements for achieving Trustworthy AI, which apply throughout the life-cycle of the development, deployment and use of AI systems:

  1. Human agency and oversight. Including fundamental rights, human agency and human oversight.
  2. Technical robustness and safety. Including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility.
  3. Privacy and data governance. Including respect for privacy, quality and integrity of data, and access to data.
  4. Transparency. Including traceability, explainability and communication.
  5. Diversity, non-discrimination and fairness. Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
  6. Societal and environmental wellbeing. Including sustainability and environmental friendliness, social impact, society and democracy.
  7. Accountability. Including auditability, minimization and reporting of negative impact, trade-offs and redress.

Chapter II also recommends both technical and non-technical measures to achieve Trustworthy AI.  Technical measures include architectures for Trustworthy AI, ethics and rule of law by design, explanation methods, testing and validating, and quality of service indicators.  Non-technical measures include regulation, codes of conduct, standardization, certification, accountability via governance frameworks, education and awareness to foster an ethical mindset, stakeholder participation and social dialogue, and diversity and inclusive design teams.

On regulation as a non-technical measure to achieve Trustworthy AI, the guidance again confirms that existing legislation already supports the trustworthiness of AI systems.  On the face of this guidance, it is not apparent that the AI HLEG supports specific further regulation of AI at this stage, but the guidance notes that the AI HLEG will soon issue “AI Policy and Investment Recommendations,” which will address whether existing regulation may need to be revised, adapted or introduced in this space.

Chapter III of the guidance provides a Trustworthy AI assessment list (the “assessment list”), which acts as a checklist for stakeholders to ensure that AI systems and applications meet the ethical principles and Trustworthy AI requirements set out above.  A notable addition to this section is guidance on the roles of individuals within an organization in implementing the assessment list (including the management and board; compliance, legal and corporate responsibility departments; product and service development teams; quality assurance; HR; procurement; and developers and project managers in their day-to-day roles).  The guidance recommends engaging individuals at all levels of the organization, from the operational level all the way up to management.

The guidance includes additional instructions for using the assessment list, recommending a proportionate approach and close attention both to areas of concern and to questions that cannot be (easily) answered.  It gives the example of an organization that is unable to ensure diversity when developing and testing an AI system because the development team itself lacks diversity.  In this situation, the guidance recommends involving other stakeholders, inside or outside the organization, to satisfy this requirement.

The guidance stresses that the assessment list will need to be adapted to the particular application of the AI system at issue.  It notes that “different situations raise different challenges”: for example, an AI system that recommends music raises different ethical considerations than one that proposes critical medical treatments.  Greater importance is given to AI systems that directly or indirectly affect individuals.  The guidance therefore suggests that additional sectoral guidance may be necessary to deal with the different ethical challenges raised in different sectors.

The final section of Chapter III gives examples of opportunities and critical concerns raised by AI, as follows:

  • Examples of opportunities: Using AI to support climate action and sustainable infrastructure, improve health and well-being, improve the quality of education, and achieve digital transformation;
  • Examples of critical concerns: Using AI to identify and track individuals (for instance, through facial recognition technology), covert AI systems, AI-enabled citizen scoring, and lethal autonomous weapons.

In these areas of “critical concern,” the guidance calls for a proportionate approach that takes into account the fundamental human rights of the individuals concerned.  AI systems that raise these critical concerns will need to undergo a careful ethical (as well as legal) assessment.

Next Steps

As noted above, the guidance will now enter a “piloting phase” where interested stakeholders can provide feedback on implementing the guidance and the assessment list in real projects.  Based on this feedback, the AI HLEG will update the guidance in early 2020.

In the meantime, according to the Communication, the Commission will work towards a set of international AI ethics guidelines that brings the European approach to the global stage.  The Commission intends to cooperate with “like-minded partners” by finding convergence with other countries’ AI ethics guidelines and by building an international group for broader discussion.  It will also continue to “play an active role in international discussions and initiatives,” such as contributing to the G7 and G20 summits on this issue.

Finally, the Commission announced in its Communication the following plans, to be implemented by the third quarter of 2019:

  • To launch networks of AI research excellence centers;
  • To launch networks of digital innovation hubs (focusing on AI in manufacturing and big data);
  • To start discussions with Member States and stakeholders to “develop and implement a model for data sharing and making best use of common data spaces”;
  • To continue work on its draft report identifying the challenges with the use of AI in the product liability space; and
  • For the European High-Performance Computing Joint Undertaking to develop next-generation supercomputers, which the Commission considers “essential for processing data and training AI.”

These plans further build on the Commission’s broader European AI Strategy, aimed at boosting Europe’s competitiveness in the field of AI.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices his practice focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.