The National Institute of Standards and Technology (“NIST”) is seeking comments on the first draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), a white paper that seeks to define the principles that capture the fundamental properties of explainable AI systems.  NIST will be accepting comments until October 15, 2020.

In February 2019, the Executive Order on Maintaining American Leadership in Artificial Intelligence directed NIST to develop a plan that would, among other objectives, “ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.”  In response, NIST issued a plan in August 2019 for prioritizing federal agency engagement in the development of AI standards, identifying seven properties that characterize trustworthy AI—accuracy, explainability, resiliency, safety, reliability, objectivity, and security.

NIST’s white paper focuses on explainability and identifies four principles underlying explainable AI.

  • Explanation. AI systems must supply evidence, support, or reasoning for their outputs.  Researchers have developed different models for explaining AI systems, such as self-explainable models, in which the model itself serves as the explanation.
  • Meaningful. The recipient must understand the AI system’s explanation.  This principle is a contextual requirement: different user groups may require different explanations, and a particular user’s prior knowledge, experiences, and mental processes may affect what is meaningful to them.  Tailoring is therefore necessary for effective communication.
  • Explanation Accuracy. The explanation must correctly reflect the AI system’s process for generating its output.  In contrast to decision accuracy, explanation accuracy is not concerned with whether the system’s judgment is correct; it concerns how the system reached its conclusion.  This principle is also contextual, as different groups and users may call for different explanation accuracy metrics.
  • Knowledge Limits. The AI system must identify cases in which it was not designed or approved to operate, or in which its answers may not be reliable.  This ensures that reliance on an AI system’s decision processes occurs only where appropriate (see the illustrative sketch below).
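
The white paper does not prescribe any implementation of these principles, but the Knowledge Limits principle can be illustrated with a minimal sketch: a classifier that abstains when its estimated confidence falls below a threshold rather than silently returning an unreliable answer.  The threshold value and the predict_or_abstain helper below are illustrative assumptions for this post, not part of NIST’s guidance.

```python
# Illustrative sketch of the "Knowledge Limits" principle: a system that
# declines to answer when its confidence falls below a threshold.
# The threshold and the toy probabilities are assumptions for illustration
# only; they are not drawn from NIST's white paper.

CONFIDENCE_THRESHOLD = 0.75  # assumed operating threshold


def predict_or_abstain(probabilities: dict) -> str:
    """Return the top label, or abstain if the model is not confident enough.

    `probabilities` maps each candidate label to the model's estimated
    probability (summing to roughly 1.0).
    """
    label, confidence = max(probabilities.items(), key=lambda item: item[1])
    if confidence < CONFIDENCE_THRESHOLD:
        # Declaring a knowledge limit: the system flags that its answer
        # may not be reliable instead of returning it anyway.
        return "ABSTAIN: input appears outside the system's reliable operating range"
    return label


# Example usage with toy model outputs.
print(predict_or_abstain({"approve": 0.92, "deny": 0.08}))  # -> "approve"
print(predict_or_abstain({"approve": 0.55, "deny": 0.45}))  # -> abstention message
```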

The white paper states that explanations generally can be described along two dimensions: the amount of time the consumer has to respond to the information and the level of detail in the explanation.  Although flexibility in the range and types of explanations will be necessary, NIST provides a non-exhaustive list of explanation categories, drawing from academic literature:

  • User Benefit.  This type of explanation is designed to inform a user about an AI system output, such as telling an applicant why a loan application was approved or denied.
  • Societal Acceptance.  This type of explanation is designed to generate trust and acceptance by society, to provide an increased sense of comfort in the system.
  • Regulatory and Compliance.  This type of explanation assists with audits for compliance with regulations, standards, and legal requirements, such as providing a detailed explanation to a safety regulator so it can evaluate the output of a self-driving car.
  • System Development.  This type of explanation assists with developing, improving, debugging, or maintaining an AI system by technical staff and product managers.
  • Owner Benefit.  This type of explanation benefits the operator of a system, such as a recommendation system that lists movies to watch and explains the selection based on previously viewed items.

After explaining the core concepts of explainable AI systems, NIST explores the explainability of human decision processes.  NIST states that humans demonstrate only a limited ability to meet the four principles described above, which provides a benchmark for evaluating explainable AI systems and informs the development of reasonable metrics.  According to NIST, evaluating explainability in the context of human decision-making also may lead to a better understanding of human-machine collaboration and interfaces.

Although the white paper does not provide detailed guidance for organizations implementing AI systems, it represents an important step by NIST toward developing trustworthy AI tools.  Documents from other jurisdictions on explaining AI provide more detailed guidance aimed at helping organizations operationalize the concept of explainable AI.  The UK Information Commissioner’s Office (“ICO”), for example, issued its final guidance on explaining decisions made with AI on May 20, 2020.  Like the NIST white paper, the ICO’s guidance recognizes that there are different underlying principles to be followed and different models of AI explanation.  The ICO takes these principles one step further, however, and provides more detailed guidance on how to explain AI in practice, depending on the type of AI system used.

Some Legislative Developments Relating to NIST

Efforts to advance the development of AI standards through NIST have been a topic of increasing focus in Congress.  Recent bills include Sen. Cory Gardner’s (R-CO) Advancing Artificial Intelligence Research Act of 2020, which would appropriate $250 million to NIST for each of fiscal years 2021 through 2025 to create a national program to advance AI research, and Rep. Eddie Bernice Johnson’s (D-TX-30) National Artificial Intelligence Initiative Act of 2020, which would appropriate over $50 million to NIST for each of fiscal years 2021 through 2025 for the research and development of voluntary standards for trustworthy AI systems, among other activities.  The House Appropriations Committee also released its draft fiscal year 2021 Commerce, Justice, Science, and Related Agencies funding bill, which includes $789 million for core NIST research activities, an increase of $35 million above the FY 2020 enacted level.

To learn more about AI, please access our AI Toolkit.

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.