The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.

Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation, which is open until January 24, 2020.  A final version of the guidance is expected to be published later in 2020.

The draft guidance focuses on how organizations that develop, test or deploy AI systems should explain automated decisions about individuals that produce a legal or other significant effect (within the meaning of GDPR Art. 22).  Although this scope is quite specific, the guidance may influence broader discussions on AI transparency and explainability in the UK and at the EU level.

The draft guidance is presented in three separate parts:

Part 1. The basics of explaining AI

This first part notes that the GDPR requires organizations to explain AI-assisted decisions to individuals where such decisions are made without human involvement and produce legal or similarly significant effects on individuals (citing GDPR Articles 22, 13, 14 and 15).

The ICO sets out four key principles — guided by the GDPR — in relation to explaining AI decision-making systems.  For each principle, the ICO identifies different types of explanations that should be provided to individuals, as set out below.

  1. Be transparent: Organizations should make it obvious that AI is being used to make decisions and explain the decisions to individuals in a meaningful way. This means providing:
  • an explanation of the reasons that led to a decision, delivered in an accessible and non-technical way (a rationale explanation); and
  • an explanation of the data that has been used and how it has been used to (i) come to a particular decision and (ii) train and test the AI model (a data explanation).
  2. Be accountable: Organizations should ensure appropriate oversight of AI decision systems, and be answerable to others. This means providing:
  • an explanation of who is involved in and responsible for developing, managing and implementing an AI system within the relevant organization, and who to contact for a human review of a decision (a responsibility explanation).
  3. Consider context: The guidance recognizes that there is no one-size-fits-all approach to explaining AI-assisted decisions. When considering how to explain decisions, organizations should take into account the sector, the particular use case and the impact of the AI system on the individual.
  4. Reflect on impacts: The ICO encourages organizations to ask and answer questions about ethical purposes and objectives at the initial stages of AI projects. Organizations should explain the steps that they take during the design and implementation of an AI system to:
  • mitigate risks of unfair bias and discrimination, and to ensure that individuals are being treated equitably (a fairness explanation); and
  • maximize the accuracy, reliability, security and robustness of its decisions and behaviors (a safety and performance explanation).

Organizations should also explain the impact that the use of an AI system and its decisions has or may have on an individual, and on wider society (an impact explanation).

Part 2. Explaining AI in practice

The second part of the draft guidance sets out practical steps that organizations can take to explain AI-assisted decisions and provide explanations to individuals.  The ICO stresses that different approaches may be appropriate for different applications of AI, depending on the context in which they are used.

To help with this exercise, the ICO provides checklists of questions and technical guidance on specific AI models that organizations should take into account when developing different types of explanations.

The ICO specifically calls out “black box” or opaque AI systems, which it understands to be any AI system whose inner workings and rationale are opaque or inaccessible to human understanding (e.g., neural networks, ensemble methods, and support vector machines). The ICO suggests that these should only be used (i) if organizations have thoroughly considered their potential impacts and risks in advance; and (ii) if supplemental tools to interpret such systems are available to provide affected individuals with meaningful information.
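
The ICO does not prescribe any particular supplemental tool, but one widely used post-hoc interpretability technique is a "surrogate" model: a simple, inherently interpretable model trained to mimic the opaque model's outputs. The sketch below is purely illustrative (the synthetic dataset and feature names are hypothetical, and scikit-learn is used only for convenience):

```python
# A minimal, illustrative sketch (not a method prescribed by the ICO):
# training a shallow "surrogate" decision tree to approximate an opaque
# ensemble model. Dataset and feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box": an ensemble method of the kind the ICO flags as opaque.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not the true
# labels, so its simple rules approximate the ensemble's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules that could feed into a rationale explanation.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

A surrogate only approximates the underlying model, so an organization relying on this kind of tool would also need to verify how faithfully the surrogate tracks the black box before presenting its rules to affected individuals.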

The ICO also provides its own analysis of the possible uses and interpretability of eleven different types of AI algorithms (e.g., linear regression, decision tree, support vector machines, artificial neural net, etc.).
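
The interpretability gap between these algorithm types can be seen in a minimal example: a linear model's learned coefficients map one-to-one onto input features, giving a direct starting point for a rationale explanation, whereas a neural network's weights do not. The sketch below (with hypothetical feature names) illustrates the interpretable end of that spectrum:

```python
# A minimal, illustrative sketch: a linear model's coefficients map directly
# onto input features, which is why such models sit at the interpretable end
# of the spectrum the ICO describes. Feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "tenure_years", "num_defaults", "age"]
X, y = make_classification(n_samples=500, n_features=4, random_state=1)

model = LogisticRegression().fit(X, y)

# Each coefficient is a per-feature contribution to the decision, giving a
# direct starting point for a reason-by-reason account of an outcome.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```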

This part of the guidance provides examples of how organizations can select which types of explanations to prioritize depending on context, using AI-assisted recruitment and AI-assisted medical diagnosis as illustrations.  Annex I to Part 2 of the draft guidance also contains a step-by-step example of building an explanation for an AI-assisted cancer diagnosis tool.

Part 3. What explaining AI means for your organization

In the third and final part of the draft guidance, the ICO explains the roles, policies, procedures and documentation that organizations could put in place to ensure that they are able to provide meaningful explanations to individuals.  The draft guidance notes that anyone involved in the decision-making pipeline has a role to play in providing an explanation of an AI system.  The ICO recommends that organizations create new, or update existing, policies and procedures to codify the roles and responsibilities for explaining AI systems, including in relation to data collection, model selection, explanation extraction/delivery and impact assessment, amongst others.

Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” and having “great insight into the regulators.”

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, Internet-enabled devices, etc.).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.