On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs on specific AI-related issues (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).

The Guidance, which provides advice and recommendations on best practice in applying core GDPR principles to AI, will be of particular relevance to organisations that develop or integrate AI and/or machine learning into their public-facing products and services.  The ICO suggests that organisations should adopt a risk-based approach when evaluating AI systems.  The key takeaway is a familiar one: identifying and mitigating data protection risks at an early stage (i.e., the design stage) is likely to yield the best compliance results.

The Guidance has four parts, each dealing with the application of fundamental data protection principles to AI systems:

Part 1 – Accountability and Governance Implications

This section covers: (i) the use of data protection impact assessments (DPIAs) to identify and control the risks that AI systems may pose, (ii) understanding the relationship and distinction between controllers and processors in the AI context, and (iii) managing competing interests when assessing AI-related risks (e.g., reconciling the use of sufficient AI training data with the principle of data minimisation).

The ICO’s recommendations include (among others):

  • Organisations should carry out DPIAs where processing is likely to result in a high risk to individuals, as the GDPR requires. DPIAs are also a useful tool for documenting compliance with GDPR requirements, particularly those relating to accountability and “data protection by design”.
  • Organisations should ensure that the roles of the different parties in the AI supply chain are clearly mapped at the outset. Existing ICO guidance applies, and may help to identify controller/processor relationships. The Guidance also gives specific examples for stakeholders in the AI ecosystem.
  • If an AI system involves trade-offs between different risks, organisations should clearly document their assessments of competing interests to an auditable standard. Organisations should also document the methodology for identifying and assessing any trade-offs they have made.

Part 2 – Lawfulness, Fairness and Transparency

This section covers: (i) application of the lawfulness, fairness and transparency principles to AI systems, and (ii) how to identify appropriate purposes and legal bases in the AI context.

The ICO’s recommendations include (among others):

  • Organisations should clearly document (i) the source of any input data, (ii) whether the outputs of the AI system are “statistically informed guesses” as opposed to facts, and (iii) any inaccurate input data or statistical flaw in the AI system that might affect the quality of the output from the AI system.
  • Because the purposes and risks of processing associated with each phase often differ, organisations should consider separate legal bases for processing personal data at each stage of the AI development and deployment process. The Guidance also includes detailed recommendations for which legal bases should be used in certain situations.

Part 3 – Assessing Security and Data Minimisation

This section covers: (i) data security issues common to AI, (ii) types of privacy attacks to which AI systems are susceptible, and (iii) compliance with the principle of data minimisation.

The ICO’s recommendations include (among others):

  • Organisations should implement effective risk management practices, including by effectively tracking and managing training data, and ensuring “pipeline” security by separating the AI development environment from the rest of the organisation’s IT system.
  • Organisations should consider applying privacy-enhancing techniques (e.g., perturbation, federated learning, and the use of synthetic data) to training data to minimise the risk of tracing that data back to individuals (see the illustrative sketch below).
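
The Guidance does not prescribe how these techniques should be implemented. As a rough illustration of the first of them, the sketch below perturbs numeric training data by adding Laplace-distributed noise, a mechanism borrowed from differential privacy. The function name, the choice of the Laplace mechanism, and the parameter values are our own illustrative assumptions, not requirements drawn from the Guidance.

```python
import numpy as np

def perturb_features(data: np.ndarray, sensitivity: float, epsilon: float) -> np.ndarray:
    """Illustrative perturbation: add Laplace noise to numeric features.

    The noise scale is sensitivity / epsilon, so a smaller epsilon adds
    more noise: harder to trace a record back to an individual, but
    less accurate as training data.
    """
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=data.shape)
    return data + noise

# Hypothetical use: perturb a column of ages before it enters a training pipeline.
ages = np.array([[34.0], [51.0], [27.0], [62.0]])
noisy_ages = perturb_features(ages, sensitivity=1.0, epsilon=0.5)
print(noisy_ages)
```

The trade-off this exposes is one the ICO flags throughout the Guidance: stronger protection (here, a smaller epsilon and more noise) comes at a cost to model accuracy, and organisations should document how they strike that balance.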

Part 4 – Ensuring Data Subject Rights

This section covers: (i) fulfilling data subject rights in the context of data input and output of AI systems, and (ii) data subject rights in the context of automated decision-making.

The ICO’s recommendations include (among others):

  • Organisations should ensure that systems are in place to effectively respond to and comply with data subject rights requests. Organisations should avoid categorising data subject requests as “manifestly unfounded or excessive” simply because fulfilment of such requests is more challenging in the AI context.
  • Organisations should design AI systems to facilitate effective human review, and provide sufficient training to staff to ensure they can critically assess the outputs of, and understand the limitations of, the AI system.

The ICO will continue to develop the Guidance, along with tools “that promote privacy by design to those developing and using AI”. This would appear to include a forthcoming “toolkit” to “provide further practical support to organisations auditing the compliance of their own AI systems”. The ICO encourages organisations to provide feedback on the Guidance to make sure that it remains relevant and consistent with emerging developments. In the Guidance, the ICO also indicates that it is planning separately to revise its Cloud Computing Guidance in 2021.

The Guidance comes a few weeks after the European Commission’s High-Level Expert Group on AI published its “Assessment List for Trustworthy Artificial Intelligence,” designed to help companies identify the risks of AI systems they develop, deploy or procure, as well as appropriate mitigation measures (the subject of a separate Covington blog).

The team at Covington will continue to monitor developments in this space.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.