On June 10, 2019, the UK's Government Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”).  The Guidance aims to give public sector organizations practical advice on implementing artificial intelligence (AI) solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems.  Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.

The Guidance consists of three sections, summarized below: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.

Understanding AI

The introductory section of the Guidance on understanding AI defines AI as “the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.”  The Guidance provides that AI systems must comply with applicable laws, calling out in particular the GDPR, and specifically the obligations on automated decision-making. (As discussed in our earlier blog post, the ICO has previously highlighted the relevance of Article 22 of the GDPR on automated decision-making in its Interim Report on Project ExplAIn.)

The Guidance also explains that the UK Government has created three new bodies and two new funds to help integrate AI into the private and public sectors. The three new bodies are the AI Council, the Office for AI, and the Centre for Data Ethics and Innovation; the two funds are the GovTech Catalyst and the Regulators’ Pioneer Fund.

Assessing, Planning and Managing AI

When assessing AI systems, and in particular how to build or buy them, the Guidance recommends that public sector organizations should:

  • Assess which AI technology is suitable for the situation. The Guidance describes, at a high level, several common machine learning techniques and applications of machine learning;
  • Obtain approval from the Government Digital Service by carrying out a discovery phase to show feasibility. Most AI solutions are categorized as ‘novel’, and therefore require further scrutiny;
  • Define their purchasing strategy, in the same way as they would for any other technology;
  • Address ethical concerns and comply with forthcoming guidance from the Office for AI and the World Economic Forum on AI procurement;
  • Allocate responsibility and governance for AI projects with partnering organizations and make sure that the team building and managing the AI project has appropriate skills and resources.

The Guidance also outlines a three-phase plan that organizations typically follow when planning and preparing to implement AI systems:

  1. Discovery. In this phase, organizations must assess whether AI is right for their needs. If it is, they will prepare their data and will build an AI implementation team (normally comprising a data scientist, data engineer, data architect, and ethicist). Data should be kept secure in accordance with guidance from the National Cyber Security Centre (“NCSC”) and with applicable data protection law.
  2. Alpha Phase. Data is divided into a training set, a validation set and a test set. A base model is used as a benchmark and more complex models are created to suit the client’s problem. The best of these models is tested and evaluated economically, ethically and socially.
  3. Beta Phase. The chosen model is integrated and performance tested. The product is continually evaluated, and improved versions are created and deployed; a specialist team is maintained to carry out these improvements.

The Guidance stresses the importance of having appropriate governance in place in order to manage the risks that arise from the implementation of AI systems. The section on managing AI projects outlines a number of factors that organizations should consider when running AI projects, and provides a table of common risks that arise in AI projects along with recommended mitigation measures.

Using AI Ethically and Safely

The section of the Guidance on using AI ethically and safely is addressed to all parties involved in the design, production, and deployment of AI projects, including data scientists, data engineers, domain experts, delivery managers and departmental leads.  The Guidance summarizes the Alan Turing Institute’s detailed guidance, published as part of their public policy programme, and is designed to work within the UK Government’s August 2018 Data Ethics Framework.

The Guidance focuses heavily on the need for a human-centric approach to AI systems.  This aligns with the positions of other forums (such as the Ethics Guidelines for Trustworthy AI published by the European Commission’s High-Level Expert Group on AI; see our blog here). The Guidance stresses the importance of building a culture of responsible innovation, and recommends that the governance architecture of AI systems should consist of: (1) a framework of ethical values; (2) a set of actionable principles; and (3) a process-based governance framework.

The Guidance points to the Alan Turing Institute’s recommended ethical values:

  • Respect the dignity of individuals;
  • Connect with each other sincerely, openly, and inclusively;
  • Care for the wellbeing of all; and
  • Protect the priorities of social values, justice, and the public interest.

Organizations should pursue these ethical values through four “FAST Track principles”, which are:

  • Fairness (being unbiased and using fair data);
  • Accountability (having a clear chain of accountability and system of review);
  • Sustainability (making sure the project is safe and has longevity); and
  • Transparency (decisions should be explained and justified).

Organizations should bring these values and principles together in an integrated process-based governance framework, which should encompass:

  • the relevant team members and roles involved in each governance action;
  • the relevant stages of the workflow in which intervention and targeted consideration are necessary to meet governance goals;
  • explicit timeframes for any evaluations, follow-up actions, re-assessments, and continuous monitoring; and
  • clear and well-defined protocols for logging activity and for implementing mechanisms to support end-to-end auditability.

Governance and ethics of AI systems is currently a hot topic, with a number of different guidelines and approaches emerging in the UK, the EU and other jurisdictions. Organizations developing AI technologies or adopting AI solutions should keep abreast of the evolving landscape in this field, and consider providing input to policymakers.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices his practice focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.