The UK’s Information Commissioner’s Office (“ICO”) has issued, and is consulting on, draft guidance on explaining decisions made by AI. The ICO prepared the guidance with The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions, including in relation to different types of AI algorithms, as well as the policies and procedures that organizations should consider putting in place.
The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’. (Previous blog posts that track this issue are available here.) Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems). Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.
Organizations that develop, test, or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation, which is open until January 24, 2020. A final version of the guidance is expected to be published later in 2020.