An Expert Q&A with Mark Young of Covington & Burling LLP on the EU Cybersecurity Act and its new cybersecurity certification schemes for information and communication technology (ICT) products, services, and processes, especially internet of things (IoT) devices. It also discusses how the Act supports the EU Directive on the Security of Network and Information Systems (Directive (EU) 2016/1148) (NIS Directive), the expanded role for the EU Agency for Cybersecurity (ENISA), and what companies need to know about timelines and enforcement.
The U.S. Patent and Trademark Office (USPTO) held its Artificial Intelligence: Intellectual Property Policy Considerations conference on January 31, 2019. The conference featured six panels of speakers, including policy makers, academics, and practitioners from Canada, China, Europe, Japan, and the United States. As USPTO Director Iancu stated in his introductory remarks, the purpose of the conference was to begin discussing the implications that artificial intelligence (“AI”) may have for intellectual property law and policy. In this post, we provide an overview of Director Iancu’s introductory remarks and of three of the conference panels that addressed several current and forward-looking issues that will impact patent law and society at large.
Opening Remarks by Director Iancu
The Director noted that governments around the world are adopting long-term comprehensive strategies to promote and provide leadership for technological advances of the future, and that America’s national security and economic prosperity depend on the United States’ ability to maintain a leadership role in AI and other emerging technologies.
The USPTO is using AI technology to increase the efficiency of patent examination. For example, the USPTO has developed and is exploring a new cognitive assistant called Unity, which is intended to allow patent examiners to search across patents, publications, non-patent literature, and images with a single click. The Director concluded by stating that one of his top priorities is ensuring that the U.S. continues its leadership in innovation, particularly in emerging technologies such as AI and machine learning.
On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems; in particular, on explaining to individuals the impact that AI decisions may have on them. This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.
On May 22, 2019, the thirty-six member countries of the Organization for Economic Cooperation and Development (the “OECD”), including the United States, adopted a set of guidelines (“OECD Guidelines”) for the development and use of artificial intelligence (“AI”). Six countries that are not OECD members, namely Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, also signed the OECD Guidelines.
The OECD Guidelines were drafted by over 50 AI experts from different disciplines and sectors over the past year and present international guidelines for emerging AI technologies to promote trustworthiness of AI. The OECD Guidelines provide five general principles for the signatory countries to adhere to: (1) stimulating inclusive growth, sustainable development and well-being through the use of AI; (2) focusing on human-centered values and fairness in the development and use of AI; (3) committing to transparency and explainability of AI; (4) ensuring that AI is robust, secure and safe throughout its lifecycle; and (5) requiring accountability for the proper functioning of AI and the preceding principles from organizations and individuals that deploy or operate AI.
The OECD Guidelines also present five recommendations to be implemented by signatory countries in drafting national policies: (1) engaging in long-term public investment, and encouraging private investment, in AI research and development; (2) fostering the development of a digital ecosystem for trustworthy AI; (3) promoting a policy environment for AI that enables smooth transitions from research and development to deployment and operation for trustworthy AI, including providing policy and regulatory frameworks and assessment mechanisms for AI; (4) building human capacity to effectively use and interact with AI and preparing for changes in the labor market, including ensuring fair transitions for workers displaced or affected by AI; and (5) cooperating internationally with other countries and stakeholders to progress responsible stewardship of trustworthy AI.
Finally, the OECD Guidelines instruct the OECD Committee on Digital Economic Policy (“CDEP”) to further develop a measurement framework for evidence-based AI policies and practical guidance on the implementation of the OECD Guidelines, and to report on its progress to the OECD Council by the end of December 2019. The CDEP is also tasked with providing a forum for exchanging information on AI policy and activities and with monitoring the implementation of the OECD Guidelines, including by providing regular reports to the OECD Council beginning five years after the adoption of the OECD Guidelines.
On May 1, 2019, the UK’s Department for Digital, Culture, Media and Sport (“DCMS”) launched a public consultation (“Consultation”) regarding plans to pursue new laws aimed at securing internet connected devices. The Consultation follows the UK’s publication of its final Code of Practice for Consumer IoT Security (“Code of Practice”) last October (the subject of another Covington blog available here) and is targeted at device manufacturers, IoT service providers, mobile application developers, retailers and those with a direct or indirect interest in the field of consumer IoT security.
Federal and state policymakers introduced a range of new measures on artificial intelligence (“AI”) and the Internet of Things (“IoT”) in the first quarter of 2019. In our initial AI & IoT Quarterly Legislative Update, we detail the notable legislative events from this quarter on AI, IoT, cybersecurity as it relates to AI and IoT, and connected and autonomous vehicles (“CAVs”). Unlike prior years, in which federal lawmakers largely called for studies of these new technologies and supported investments in them, policymakers are increasingly introducing substantive proposals—particularly on AI and cybersecurity, and at the state level.
Earlier this month, the UK’s Information Commissioner’s Office published a draft code of practice (“Code”) on designing online services for children. The Code is now open for public consultation until May 31, 2019. The Code sets out 16 standards of “age appropriate design” with which online service providers should comply when designing online services (such as apps, connected toys, social media platforms, online games, educational websites and streaming services) that children under the age of 18 are likely to access. The standards are based on data protection law principles, and are legally enforceable under the GDPR and the UK Data Protection Act 2018. The Code also provides further guidance on collecting consent from children and the legal basis for processing children’s personal data (see Annexes A and B of the Code). The Code should be read in conjunction with the ICO’s current guidance on children and the GDPR.
The 16 standards set out in the Code are as follows:
- Best interests of the child. The best interests of the child should be the primary consideration when developing and designing online services that children are likely to access. This includes considering children’s online safety, physical and mental well-being, and development.
- Age-appropriate application. Online service providers should consider the age-range of users of the online service, including the needs and capabilities of children of different ages. Annex A of the Code provides some helpful guidance on key considerations at different ages, including the types of online services that children may encounter at different ages, their capacity to understand privacy information and ability to make meaningful decisions about their personal data.
- Transparency. Privacy information, policies and community standards provided to children must be concise, prominent and use clear language in an age-appropriate manner. ‘Bite-sized’ explanations should also be provided about how the personal data is used at the point that the child starts to use the service, with further age-appropriate prompts encouraging children to speak with an adult before providing their data, or not to proceed if they are uncertain.
- Detrimental use of data. Online service providers should refrain from using children’s personal data in ways that have been shown to be detrimental to their well-being, or that go against industry codes of practice, other regulatory provisions or Government advice. Relevant examples include the Committee of Advertising Practice (CAP) guidance on online behavioural advertising, which covers children.
- Policies and community standards. Online service providers should uphold their published terms, policies and community standards (including, but not limited to, privacy policies, age restriction, behaviour rules and content policies).
- Default settings. ‘High privacy’ settings should be provided by default (unless the online service provider can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child), thereby limiting visibility and accessibility of children’s personal data.
- Data minimisation. Online service providers should collect and retain only the minimum amount of personal data necessary to provide the elements of the service in which a child is actively and knowingly engaged. Children should be provided with as much choice as possible over which elements of the service they wish to use and how much data they provide. This choice includes whether they wish their personal data to be used for (each) additional purpose or service enhancement.
- Data sharing. Children’s personal data should not be shared with, or disclosed to, third parties unless there is a compelling reason to do so, taking account of the best interests of the child. Due diligence checks should be conducted on any third party recipients of children’s data, and assurances should be obtained to ensure that sharing will not be detrimental to the well-being of the child.
- Geolocation. Geolocation options should be turned off by default unless there is a compelling reason otherwise, again taking account of the best interests of the child. Online service providers should ensure that the service clearly indicates to child users when location tracking is active. Options which make a child’s location visible to others must default back to “off” at the end of each session.
- Parental controls. Where parental controls are offered, age-appropriate information about them should be provided to the child. If the service allows a parent or caregiver to monitor their child’s online activity or track their location, such monitoring should be made clear to the child through the use of obvious signs. Audio or video materials should also be provided to children and parents about children’s rights to privacy.
- Profiling. Profiling options must be turned off by default, unless there is a compelling reason for profiling, taking account of the best interests of the child. Profiling is only allowed if there are appropriate measures in place to protect the child from any harmful effects (in particular, being shown content that is detrimental to their health or well-being).
- Nudge techniques. Design features that nudge or encourage children to provide unnecessary personal data, to weaken or turn off their privacy protections, or to extend their use of the service should not be used. By contrast, pro-privacy nudges are permitted, where appropriate.
- Connected toys and devices. The Code applies to connected toys and devices, such as talking teddy bears, fitness bands or ‘home hub’ interactive speakers. Providers should give clear, transparent information about who is processing the personal data, and what their responsibilities are, at the point of purchase and set-up. Connected toys and devices should avoid passive collection of personal data (e.g., listening for key words that could wake the device while in an inactive ‘listening mode’).
- Online tools. Online service providers should provide prominent, age-appropriate and accessible tools to help children exercise their data protection rights and report concerns. The tools should also include methods for tracking the progress of complaints or requests, with clear information provided on response timescales.
- Data protection impact assessments (DPIAs). Online service providers that provide services that children may access should undertake a DPIA specifically to assess and mitigate risks to children. Annex C of the Code provides a template DPIA that modifies the ICO’s standard template DPIA to include a section for online service providers to consider each of the 16 standards in the Code.
- Governance and accountability. Online service providers should ensure that they have policies and procedures in place that demonstrate how they comply with data protection obligations and the Code, including data protection training for all staff involved in the design and development of online services likely to be accessed by children.
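As a rough illustration only (nothing in the Code prescribes any particular implementation, and every name below is hypothetical), the default-settings, geolocation and profiling standards above amount to a “privacy off by default, revert at session end” pattern that a service might encode roughly as follows:

```python
from dataclasses import dataclass


@dataclass
class ChildPrivacySettings:
    """Hypothetical per-user settings object for a child-accessible service.

    'High privacy' by default: geolocation, location visibility, profiling
    and third-party sharing all start switched off, reflecting the Code's
    default-settings, geolocation, profiling and data-sharing standards.
    """

    geolocation_enabled: bool = False
    location_visible_to_others: bool = False
    profiling_enabled: bool = False
    share_with_third_parties: bool = False

    def end_session(self) -> None:
        # The Code requires options that make a child's location visible
        # to others to default back to "off" at the end of each session.
        self.location_visible_to_others = False


settings = ChildPrivacySettings()
settings.location_visible_to_others = True  # child opts in for this session
settings.end_session()
assert settings.location_visible_to_others is False
```

The point of the sketch is that privacy-protective values are the zero-configuration state, and any opt-in that exposes the child’s location is scoped to a single session rather than persisted.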
From the Federal Communications Commission (“FCC”) to Congress to the White House, the federal government has continued to push the importance of investment and innovation in fifth-generation (“5G”) wireless technology. This push bodes well for the many industries that rely on the Internet of Things (“IoT”), such as transportation, healthcare, and manufacturing—to name a few. As we have previously discussed, 5G deployment is critical for IoT because the IoT ecosystem will rely heavily on the increased speeds and capacity, as well as the reduced latency, that 5G technology will enable. Below we discuss the most recent pushes for 5G developments from federal leadership before surveying key industries in the IoT ecosystem that we expect to benefit from these efforts.
The European Commission (“Commission”) has published a Recommendation on cybersecurity in the energy sector (“Recommendation”). The Recommendation builds on recent EU legislation in this area, including the NIS Directive and EU Cybersecurity Act (see our posts here and here). It sets out guidance to achieve a higher level of cybersecurity taking into account specific characteristics of the energy sector, including the use of legacy technology and interdependent systems across borders.
On 9 April 2019, the European Data Protection Board (“EDPB”) adopted new guidelines “on the processing of personal data under Article 6(1)(b) GDPR in the context of the provision of online services to data subjects.”
In general, the GDPR requires that processing of personal data be justified under a legal basis in Article 6 GDPR. One such legal basis is Article 6(1)(b), which covers data processing that is “necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract.” The new EDPB guidelines consider the meaning of this basis, and in particular whether it can be used as the basis for data processing by online services for purposes such as service improvement, fraud prevention, targeted advertising, and service personalization.