UK Government’s Guide to Using AI in the Public Sector

On June 10, 2019, the UK’s Government Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”).  The Guidance aims to give public sector organizations practical advice on implementing artificial intelligence (AI) solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence the kinds of AI projects those organizations pursue and the processes they follow to implement AI systems.  Because the UK’s National Health Service (NHS) is a public sector organization, the Guidance is also likely to be relevant to digital health service providers seeking to offer AI technologies to NHS organizations.

The Guidance consists of three sections, summarized below: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.

Privacy Shield Ombudsperson Confirmed by the Senate

On June 20, 2019, the U.S. Senate confirmed Keith Krach as the Trump administration’s first permanent Privacy Shield Ombudsperson at the State Department.  The Privacy Shield Ombudsperson serves as an additional redress avenue for EU and Swiss data subjects whose data is transferred to the U.S. under the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks, respectively.

IoT Update: Senators Introduce Legislation to Regulate Privacy and Security of Wearable Health Devices and Genetic Testing Kits

Last week, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the Protecting Personal Health Data Act (S. 1842), which would direct the Department of Health and Human Services (“HHS”) to issue new privacy and security rules for technologies that collect personal health data, such as wearable fitness trackers, social-media sites focused on health data or conditions, and direct-to-consumer genetic testing services, among other technologies.  Specifically, the legislation would direct the HHS Secretary to issue regulations relating to the privacy and security of health-related consumer devices, services, applications, and software. These regulations would also cover a new category of personal health data that is not otherwise protected health information under HIPAA.

IoT Update: Expert Q&A on the EU Cybersecurity Act

An Expert Q&A with Mark Young of Covington & Burling LLP on the EU Cybersecurity Act and its new cybersecurity certification schemes for information and communication technology (ICT) products, services, and processes, especially internet of things (IoT) devices. The Q&A also discusses how the Act supports the EU Directive on the Security of Network and Information Systems (Directive (EU) 2016/1148) (NIS Directive), the expanded role of the EU Agency for Cybersecurity (ENISA), and what companies need to know about timelines and enforcement.

Artificial Intelligence and the Patent Landscape – Views from the USPTO AI: Intellectual Property Policy Considerations Conference

The U.S. Patent and Trademark Office (USPTO) held its Artificial Intelligence: Intellectual Property Policy Considerations conference on January 31, 2019. The conference featured six panels of speakers, including policy makers, academics, and practitioners from Canada, China, Europe, Japan, and the United States. As USPTO Director Iancu stated in his opening remarks, the purpose of the conference was to begin discussions about the implications that artificial intelligence (“AI”) may have for intellectual property law and policy. In this post, we provide an overview of Director Iancu’s opening remarks and of three of the conference panels that addressed several current and forward-looking issues that will affect patent law and society at large.

Opening Remarks by Director Iancu

The Director noted that governments around the world are adopting long-term comprehensive strategies to promote and provide leadership for technological advances of the future, and that America’s national security and economic prosperity depend on the United States’ ability to maintain a leadership role in AI and other emerging technologies.

The USPTO is using AI technology to increase the efficiency of patent examination. For example, the USPTO has developed and is exploring a new cognitive assistant called Unity, which is intended to allow patent examiners to search across patents, publications, non-patent literature, and images with a single click. The Director concluded by stating that one of his top priorities is ensuring that the U.S. continues its leadership in innovation, particularly in emerging technologies such as AI and machine learning.

AI Update: ICO’s Interim Report on Explaining AI

On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems, and in particular on explaining the impact AI decisions may have on individuals. This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.

AI Update: OECD Adopts AI Policy Guidelines

On May 22, 2019, the thirty-six member countries of the Organization for Economic Cooperation and Development (the “OECD”), including the United States, adopted a set of guidelines (“OECD Guidelines”) for the development and use of artificial intelligence (“AI”).  Six non-OECD countries, namely Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, were also signatories to the OECD Guidelines.

The OECD Guidelines were drafted over the past year by more than 50 AI experts from different disciplines and sectors, and set out international guidelines for emerging AI technologies intended to promote trustworthy AI.  The OECD Guidelines provide five general principles for the signatory countries to adhere to: (1) stimulating inclusive growth, sustainable development and well-being through the use of AI; (2) focusing on human-centered values and fairness in the development and use of AI; (3) committing to transparency and explainability of AI; (4) ensuring that AI is robust, secure and safe throughout its lifecycle; and (5) requiring accountability for the proper functioning of AI and the preceding principles from organizations and individuals that deploy or operate AI.

The OECD Guidelines also present five recommendations to be implemented by signatory countries in drafting national policies: (1) engaging in long-term public investment, and encouraging private investment, in AI research and development; (2) fostering the development of a digital ecosystem for trustworthy AI; (3) promoting a policy environment for AI that enables smooth transitions from research and development to deployment and operation for trustworthy AI, including providing policy and regulatory frameworks and assessment mechanisms for AI; (4) building human capacity to effectively use and interact with AI and preparing for changes in the labor market, including ensuring fair transitions for workers displaced or affected by AI; and (5) cooperating internationally with other countries and stakeholders to progress responsible stewardship of trustworthy AI.

Finally, the OECD Guidelines instruct the OECD Committee on Digital Economy Policy (“CDEP”) to further develop a measurement framework for evidence-based AI policies and practical guidance on the implementation of the OECD Guidelines, and to report on its progress to the OECD Council by the end of December 2019.  The CDEP is also tasked with providing a forum for exchanging information on AI policy and activities and with monitoring the implementation of the OECD Guidelines, including by providing regular reports to the OECD Council beginning five years after the adoption of the OECD Guidelines.

IoT Update: The UK Announces Plans for New Connected Device Laws

On May 1, 2019, the UK’s Department for Digital, Culture, Media and Sport (“DCMS”) launched a public consultation (“Consultation”) regarding plans to pursue new laws aimed at securing internet connected devices. The Consultation follows the UK’s publication of its final Code of Practice for Consumer IoT Security (“Code of Practice”) last October (the subject of another Covington blog available here) and is targeted at device manufacturers, IoT service providers, mobile application developers, retailers and those with a direct or indirect interest in the field of consumer IoT security.

AI and IoT Legislative Developments: First Quarter 2019

Federal and state policymakers introduced a range of new measures on artificial intelligence (“AI”) and the Internet of Things (“IoT”) in the first quarter of 2019. In our initial AI & IoT Quarterly Legislative Update, we detail the notable legislative events from this quarter on AI, IoT, cybersecurity as it relates to AI and IoT, and connected and autonomous vehicles (“CAVs”). Unlike in prior years, when federal lawmakers largely called for studies of these new technologies and supported investments in them, policymakers are increasingly introducing substantive proposals, particularly on AI and cybersecurity, and at the state level.

ICO issues draft code of practice on designing online services for children

Earlier this month, the UK’s Information Commissioner’s Office published a draft code of practice (“Code”) on designing online services for children. The Code is open for public consultation until May 31, 2019. The Code sets out 16 standards of “age appropriate design” with which online service providers should comply when designing online services (such as apps, connected toys, social media platforms, online games, educational websites and streaming services) that children under the age of 18 are likely to access. The standards are based on data protection law principles and are legally enforceable under the GDPR and the UK Data Protection Act 2018. The Code also provides further guidance on collecting consent from children and the legal basis for processing children’s personal data (see Annexes A and B of the Code). The Code should be read in conjunction with the ICO’s current guidance on children and the GDPR.

The 16 standards set out in the Code are as follows:

  1. Best interests of the child. The best interests of the child should be the primary consideration when developing and designing online services that children are likely to access. This includes considering children’s online safety, physical and mental well-being, and development.
  2. Age-appropriate application. Online service providers should consider the age-range of users of the online service, including the needs and capabilities of children of different ages. Annex A of the Code provides some helpful guidance on key considerations at different ages, including the types of online services that children may encounter at different ages, their capacity to understand privacy information and ability to make meaningful decisions about their personal data.
  3. Transparency. Privacy information, policies and community standards provided to children must be concise, prominent and written in clear, age-appropriate language. ‘Bite-sized’ explanations of how personal data is used should also be provided at the point that the child starts to use the service, with further age-appropriate prompts to speak with an adult before providing their data, or not to proceed if uncertain.
  4. Detrimental use of data. Online service providers should refrain from using children’s personal data in ways that have been shown to be detrimental to their well-being, or that go against industry codes of practice, other regulatory provisions or Government advice. Relevant examples include the Committee of Advertising Practice (CAP) guidance on online behavioural advertising, which covers children.
  5. Policies and community standards. Online service providers should uphold their published terms, policies and community standards (including, but not limited to, privacy policies, age restriction, behaviour rules and content policies).
  6. Default settings. ‘High privacy’ settings should be provided by default (unless the online service provider can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child), thereby limiting the visibility and accessibility of children’s personal data. (An illustrative sketch of this standard and standards 9 and 11 follows this list.)
  7. Data minimisation. Online service providers should collect and retain only the minimum amount of personal data necessary to provide the elements of the service in which a child is actively and knowingly engaged. Children should be provided with as much choice as possible over which elements of the service they wish to use and how much data they provide. This choice includes whether they wish their personal data to be used for (each) additional purpose or service enhancement.
  8. Data sharing. Children’s personal data should not be shared with, or disclosed to, third parties unless there is a compelling reason to do so, taking account of the best interests of the child. Due diligence checks should be conducted on any third-party recipients of children’s data, and assurances should be obtained that sharing will not be detrimental to the well-being of the child.
  9. Geolocation. Geolocation options should be turned off by default unless there is a compelling reason otherwise, again taking account of the best interests of the child. Online service providers should ensure that the service clearly indicates to child users when location tracking is active. Options which make a child’s location visible to others must default back to “off” at the end of each session.
  10. Parental controls. Where parental controls are provided, age-appropriate information about them should be given to the child. If the service allows a parent or caregiver to monitor their child’s online activity or track their location, such monitoring should be made clear to the child through obvious signs. Audio or video materials should also be provided to children and parents about children’s right to privacy.
  11. Profiling. Profiling options must be turned off by default, unless there is a compelling reason for profiling, taking account of the best interests of the child. Profiling is only allowed if there are appropriate measures in place to protect the child from any harmful effects (in particular, being shown content that is detrimental to their health or well-being).
  12. Nudge techniques. Design features that suggest or encourage children to provide unnecessary personal data, weaken or turn off their privacy protections, or extend their use of the service should not be used. By contrast, pro-privacy nudges are permitted, where appropriate.
  13. Connected toys and devices. The Code applies to connected toys and devices, such as talking teddy bears, fitness bands or ‘home hub’ interactive speakers. Providers should give clear, transparent information at the point of purchase and set-up about who is processing the personal data and what their responsibilities are. Connected toys and devices should avoid passive collection of personal data (e.g., listening for key words that could wake the device while in an inactive “listening mode”).
  14. Online tools. Online service providers should provide prominent, age-appropriate and accessible tools to help children exercise their data protection rights and report concerns. The tools should also include methods for tracking the progress of complaints or requests, with clear information provided on response timescales.
  15. Data protection impact assessments (DPIAs). Online service providers whose services children may access should undertake a DPIA specifically to assess and mitigate risks to children. Annex C of the Code provides a template DPIA that modifies the ICO’s standard template to include a section in which online service providers consider each of the 16 standards in the Code.
  16. Governance and accountability. Online service providers should ensure that they have policies and procedures in place that demonstrate how they comply with data protection obligations and the Code, including data protection training for all staff involved in the design and development of online services likely to be accessed by children.
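
Standards 6, 9 and 11 all describe default-setting behaviour that a service must actively enforce rather than merely document. As a purely illustrative sketch (the Code sets design standards, not an API, and every type and function name below is hypothetical), the TypeScript below shows one way a service might encode high-privacy defaults and the session-end reset of location visibility:

```typescript
// Illustrative only: the Code sets design standards, not an API.
// All names (PrivacySettings, defaultChildSettings, endSession) are hypothetical.

interface PrivacySettings {
  profileVisibility: "private" | "friends" | "public";
  geolocationEnabled: boolean;      // standard 9: off by default
  locationVisibleToOthers: boolean; // standard 9: resets to off each session
  profilingEnabled: boolean;        // standard 11: off by default
}

// Standard 6: 'high privacy' by default, absent a compelling reason
// otherwise that takes account of the best interests of the child.
function defaultChildSettings(): PrivacySettings {
  return {
    profileVisibility: "private",
    geolocationEnabled: false,
    locationVisibleToOthers: false,
    profilingEnabled: false,
  };
}

// Standard 9: options making a child's location visible to others must
// default back to "off" at the end of each session, even if the child
// switched them on during the session.
function endSession(settings: PrivacySettings): PrivacySettings {
  return { ...settings, locationVisibleToOthers: false };
}

// Example: a mid-session opt-in to location visibility does not survive
// the end of the session.
const midSession = { ...defaultChildSettings(), locationVisibleToOthers: true };
const afterSession = endSession(midSession); // locationVisibleToOthers === false
```

The point of the sketch is that standard 9’s session-end reset is a recurring state transition the service must enforce, not a one-time default.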