On 18 December 2018, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published new draft guidance on “AI Ethics” (the “guidance”).  The AI HLEG is a European Commission-backed working group made up of representatives from industry, academia and NGOs, and was formed as part of the Commission’s ongoing work to develop EU policy responses to the development, challenges and new opportunities posed by AI technologies.  Stakeholders are invited to comment on the draft through the European AI Alliance before it is finalized in March 2019.

The guidance recognizes the potential benefits of AI technologies for Europe, but also stresses that AI must be developed and implemented with a “human-centric approach” that results in “Trustworthy AI”. The guidance then explains in detail the concept of “Trustworthy AI” and the issues stakeholders should navigate in order to achieve it.  A more detailed summary of the guidance is set out below.

This guidance is not binding, but it is likely to influence EU policymakers as they consider whether and how to legislate in the AI space going forward. The AI HLEG also envisages that the final version of the guidance in March 2019 will include a mechanism to allow stakeholders to voluntarily endorse its principles.  The guidance also states that the AI HLEG will consider making legislative recommendations in its separate deliverable on “Policy & Investment Recommendations,” due May 2019.


Summary of the Guidance

Defining “Trustworthy AI”

The draft guidance offers a framework for “Trustworthy AI”, which is made up of two parts:

1. AI must be developed, deployed and used with an “ethical purpose” that respects fundamental rights, societal values and ethical principles

To ensure that the purpose to which AI is put is “ethical,” the development and use of AI technologies should respect the European fundamental rights set out in the EU Treaties and the Charter of Fundamental Rights of the EU.  The AI HLEG enumerates five “ethical principles” that apply specifically to AI:

  • Beneficence. AI technologies should be designed to “do good,” either for individuals or for “collective wellbeing.”  The guidance does not rule out that this can include doing good through “generating prosperity, value creation and wealth maximization,” but also encourages the use of AI to tackle issues like the protection of democratic process and rule of law, the provision of common goods and services at low cost and high quality, and the achievement of the UN Sustainable Development Goals.
  • Non-maleficence.  AI should not harm humans (whether physically, psychologically, financially or socially); nor should AI “threaten the democratic process, freedom of expression, or freedoms of identity.”  Humans should also remain able to refuse “AI services.”  To prevent such harm, data used to train AI algorithms must be collected and used in ways that avoid “discrimination, manipulation, or negative profiling.”  Societies should also be protected from “ideological polarization and algorithmic determinism.”  AI stakeholders should also take greater efforts to protect vulnerable groups (such as children or minorities).
  • Autonomy.  Humans must retain rights of self-determination, including – for consumers and users of AI systems – rights to decide whether or not to be subject to direct or indirect AI decision-making, rights to know whether they are interacting with AI-based systems, and rights to opt out of or withdraw from those systems.  The guidance is also clear that this means individuals should have rights, both individually and collectively, to “decide on how AI systems operate” in an employment context.  Systems must also be in place to ensure that the use of AI is accountable and that humans remain responsible for decisions made by AI.
  • Justice.  AI “must be fair.”  In particular, AI should be developed to prevent bias or discrimination, and ensure that “the positives and negatives resulting from AI” are evenly distributed, without burdening vulnerable groups with concentrated negative outcomes.  In addition, to meet this principle, AI systems must make available “effective redress” if harm occurs, and developers of those systems must be held to “high standards of accountability.”
  • Explicability.  AI technologies must be “transparent,” both in terms of “technological transparency” and also in terms of “business model transparency.” “Technological transparency” means that AI systems must be auditable, comprehensible and intelligible for individuals with varying levels of comprehension and expertise.  “Business model transparency” means that individuals are informed of the intention of those developing or implementing the AI system.

The AI HLEG notes particular challenges in operationalizing these principles in connection with vulnerable groups (such as children, minorities or immigrants), and where significant “asymmetries of power or information” exist between stakeholders (e.g., between an employer and employee).  In addition, the AI HLEG identifies certain AI use cases that raise “critical concerns” in relation to these principles (although the AI HLEG apparently could not reach full agreement on the extent of those concerns).  These use cases are:

  • Using AI to identify humans without their consent, including using biometric identification technologies such as facial recognition;
  • Using “covert AI systems,” where the fact a human is interacting with an AI is not apparent or disclosed;
  • Using AI to “score” citizens for certain normative values (e.g., an assessment of each person’s “moral personality” or “ethical integrity”); and
  • Using AI to create lethal autonomous weapon systems.


2. AI should be technically robust and reliable

In addition to respecting the principles above, “Trustworthy AI” must also be technically robust and reliable.  In essence, this means that AI should be developed through technical methods that ensure that ethically designed AI actually achieves its objectives.  The AI HLEG suggests both “technical methods” and “non-technical methods” for ensuring the implementation of Trustworthy AI.

  • Technical methods include, for example, testing AI systems; instituting methods to trace and audit AI decision-making; and taking steps to ensure that AI decisions can be explained.
  • Non-technical methods include the use of standards such as ISO standards; codes of conduct; appropriate training and education; stakeholder dialogues; and diversity in the teams that develop AI.


Checklist of Issues to Consider when Assessing “Trustworthy AI”

In addition to setting out the concept of “Trustworthy AI,” the AI HLEG also provides a checklist that stakeholders can use to help assess whether their planned or existing use of AI is “Trustworthy.”  The checklist sets out a range of questions covering topics including accountability; data governance; fairness in design; governing AI decision-making and human oversight; non-discrimination; respect for privacy; respect for human autonomy; robustness against attack or manipulation of the AI; ensuring accuracy; fall-back planning in case of system failure; safety; and transparency, among others. The AI HLEG plans to include tailored checklists for assessing AI systems used for healthcare, autonomous driving, insurance premiums, and profiling and law enforcement in the final version of the guidance, and asks stakeholders for their views on how to develop these checklists.