On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there isn’t yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it.  The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we’ve discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.

As a preliminary step, the AI HLEG recommends that organizations perform a fundamental rights impact assessment to establish whether the AI system respects the fundamental rights set out in the EU Charter of Fundamental Rights and the European Convention on Human Rights. That assessment could include the following questions:

  1. Does the AI system potentially negatively discriminate against people on any basis?
    1. Have you put in place processes to test, monitor, address, and rectify potential negative discrimination bias?
  2. Does the AI system respect children’s rights?
    1. Have you put in place processes to test, monitor, address, and rectify potential harm to children?
  3. Does the AI system protect personal data relating to individuals in line with the EU’s General Data Protection Regulation (“GDPR”) (for example, requirements relating to data protection impact assessments or measures to safeguard personal data)?
  4. Does the AI system respect the rights to freedom of expression and information and/or freedom of assembly and association?
    1. Have you put in place processes to test, monitor, address, and rectify potential infringement on freedom of expression and information, and/or freedom of assembly and association?
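The "test, monitor, address, and rectify" language above maps to concrete engineering practice. As a purely illustrative sketch (the metric choice and threshold are our own assumptions, not part of the AI HLEG guidance), one simple way to test for potential negative discrimination is to compare an AI system's positive-outcome rates across groups:

```python
# Illustrative only: compares positive-outcome rates across groups
# (the "demographic parity" gap). Metric and threshold are assumptions,
# not prescribed by the Assessment List.

def positive_rate(outcomes):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25.0% positive
}
gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.375
# A gap above an agreed threshold would trigger the "address and
# rectify" steps the assessment questions contemplate.
```

In practice, organizations would run such checks periodically over live decisions ("monitor") rather than once at development time.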

Following the fundamental rights impact assessment, organizations can then carry out the self-assessment for trustworthy AI. The Assessment List proposes a set of questions for each of the seven requirements for trustworthy AI set out in the AI HLEG’s earlier Ethics Guidelines for Trustworthy Artificial Intelligence. A non-exhaustive list of the key questions relating to each of the seven requirements is as follows:

  1. Human Agency and Oversight
  • Is the AI system designed to interact with, guide, or take decisions by human end-users that affect humans or society?
  • Could the AI system generate confusion for some or all end-users or subjects on whether they are interacting with a human or AI system?
  • Could the AI system affect human autonomy by interfering with the end-user’s decision-making process in any other unintended and undesirable way?
  • Is the AI system a self-learning or autonomous system, or is it overseen by a Human-in-the-Loop/Human-on-the-Loop/Human-in-Command?
  • Did you establish any detection and response mechanisms for undesirable adverse effects of the AI system for the end-user or subject?
  2. Technical Robustness and Safety
  • Did you define risks, risk metrics and risk levels of the AI system in each specific use case?
  • Did you develop a mechanism to evaluate when the AI system has been changed in such a way as to merit a new review of its technical robustness and safety?
  • Did you put in place a series of steps to monitor and document the AI system’s accuracy?
  • Did you put in place a proper procedure for handling the cases where the AI system yields results with a low confidence score?
  3. Privacy and Data Governance
  • Did you put in place measures to ensure compliance with the GDPR or a non-European equivalent (e.g., data protection impact assessment, appointment of a Data Protection Officer, data minimization, etc.)?
  • Did you implement the right to withdraw consent, the right to object, and the right to be forgotten into the development of the AI system?
  • Did you consider the privacy and data protection implications of data collected, generated, or processed over the course of the AI system’s life cycle?
  4. Transparency
  • Did you put in place measures that address the traceability of the AI system during its entire lifecycle?
  • Did you explain the decision(s) of the AI system to the users?
  • Did you establish mechanisms to inform users about the purpose, criteria, and limitations of the decision(s) generated by the AI system?
  5. Diversity, Non-discrimination, and Fairness
  • Did you establish a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data as well as for the algorithm design?
  • Did you ensure a mechanism that allows for the flagging of issues related to bias, discrimination or poor performance of the AI system?
  • Did you assess whether the AI system’s user interface is usable by those with special needs or disabilities or those at risk of exclusion?
  6. Societal and Environmental Well-being
  • Where possible, did you establish mechanisms to evaluate the environmental impact of the AI system’s development, deployment and/or use (for example, the amount of energy used and carbon emissions)?
  • Could the AI system create the risk of de-skilling of the workforce? Did you take measures to counteract de-skilling risks?
  • Does the system promote or require new (digital) skills? Did you provide training opportunities and materials for re- and up-skilling?
  • Did you assess the societal impact of the AI system’s use beyond the (end-)user and subject, such as potentially indirectly affected stakeholders or society at large?
  7. Accountability
  • Did you establish mechanisms that facilitate the AI system’s auditability (e.g., traceability of the development process, the sourcing of training data and the logging of the AI system’s processes, outcomes, positive and negative impact)?
  • Did you ensure that the AI system can be audited by independent third parties?
  • Did you establish a process to discuss and continuously monitor and assess the AI system’s adherence to the Assessment List?
  • For applications that can adversely affect individuals, have redress by design mechanisms been put in place?
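Because the Assessment List is, at bottom, a structured set of questions, teams operationalizing it often track their answers in a machine-readable form. The following is a minimal illustrative sketch; the data structure, question phrasings, and status values are our own assumptions, not part of the Assessment List itself:

```python
# Illustrative sketch: tracking self-assessment answers per requirement
# and surfacing requirements with open items. Structure and status
# values are assumptions, not prescribed by the AI HLEG.

REQUIREMENTS = [
    "Human Agency and Oversight",
    "Technical Robustness and Safety",
    "Privacy and Data Governance",
    "Transparency",
    "Diversity, Non-discrimination, and Fairness",
    "Societal and Environmental Well-being",
    "Accountability",
]

def open_items(assessment):
    """Return requirements with at least one answer that is not 'yes'."""
    return sorted(
        req for req, answers in assessment.items()
        if any(status != "yes" for status in answers.values())
    )

assessment = {req: {} for req in REQUIREMENTS}
assessment["Transparency"] = {
    "Traceability measures in place?": "yes",
    "Decisions explained to users?": "in progress",
}
assessment["Accountability"] = {
    "Independent third-party audit possible?": "yes",
}
print(open_items(assessment))  # ['Transparency']
```

A record like this also supports the continuous-monitoring question under Accountability, since answers can be revisited and re-dated as the AI system changes.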

The Assessment List is part of the EU’s strategy on artificial intelligence outlined in the communication released by the European Commission in April 2018. A previous version of the Assessment List was included in the April 2019 Ethics Guidelines for Trustworthy AI issued by the AI HLEG, which we discussed in our prior blog post here. The revised Assessment List reflects learnings from the piloting phase, which ran from 26 June to 1 December 2019 and in which over 350 stakeholders participated.

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices his practice focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Anna Oberschelp de Meneses

Anna Sophia Oberschelp de Meneses is an associate in the Data Privacy and Cybersecurity Practice Group. Anna is a qualified Portuguese lawyer, and is a native speaker of both Portuguese and German. Anna advises companies on European data protection law and helps clients coordinate international data protection law projects. She has obtained a certificate for “corporate data protection officer” from the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also a Certified Information Privacy Professional Europe (CIPP/E) of the International Association of Privacy Professionals (IAPP). Anna also advises companies in the field of EU consumer law and has been closely tracking developments in this area. Her extensive language skills allow her to monitor developments and help clients tackle EU data privacy, cybersecurity, and consumer law issues in various EU and rest-of-world jurisdictions.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.

Kristof Van Quathem

Kristof Van Quathem advises clients on data protection, data security and cybercrime matters in various sectors, and in particular in the pharmaceutical and information technology sectors. Kristof has been specializing in this area for over fifteen years and covers the entire spectrum: advising clients on government affairs strategies concerning lawmaking, providing compliance advice on adopted laws, regulations and guidelines, and representing clients in non-contentious and contentious matters before data protection authorities.