AI and IoT Legislative Developments: First Quarter 2019

Federal and state policymakers introduced a range of new measures on artificial intelligence (“AI”) and the Internet of Things (“IoT”) in the first quarter of 2019. In our initial AI & IoT Quarterly Legislative Update, we detail the notable legislative events from this quarter on AI, IoT, cybersecurity as it relates to AI and IoT, and connected and autonomous vehicles (“CAVs”). Unlike prior years, in which federal lawmakers largely called for studies of these new technologies and supported investments in them, policymakers are increasingly introducing substantive proposals, particularly on AI and cybersecurity and at the state level.

ICO issues draft code of practice on designing online services for children

Earlier this month, the UK’s Information Commissioner’s Office (“ICO”) published a draft code of practice (“Code”) on designing online services for children. The Code is now open for public consultation until May 31, 2019. The Code sets out 16 standards of “age appropriate design” with which online service providers should comply when designing online services (such as apps, connected toys, social media platforms, online games, educational websites and streaming services) that children under the age of 18 are likely to access. The standards are based on data protection law principles and are legally enforceable under the GDPR and the UK Data Protection Act 2018. The Code also provides further guidance on collecting consent from children and the legal bases for processing children’s personal data (see Annexes A and B of the Code). The Code should be read in conjunction with the ICO’s current guidance on children and the GDPR.

The 16 standards set out in the Code are as follows:

  1. Best interests of the child. The best interests of the child should be the primary consideration when developing and designing online services that children are likely to access. This includes considering children’s online safety, their physical and mental well-being, and their development.
  2. Age-appropriate application. Online service providers should consider the age range of users of the online service, including the needs and capabilities of children of different ages. Annex A of the Code provides helpful guidance on key considerations at different ages, including the types of online services that children may encounter, their capacity to understand privacy information, and their ability to make meaningful decisions about their personal data.
  3. Transparency. Privacy information, policies and community standards provided to children must be concise, prominent and written in clear, age-appropriate language. ‘Bite-sized’ explanations of how personal data is used should also be provided at the point that the child starts to use the service, with further age-appropriate prompts encouraging children to speak with an adult before providing their data, or not to proceed if uncertain.
  4. Detrimental use of data. Online service providers should refrain from using children’s personal data in ways that have been shown to be detrimental to their well-being, or that go against industry codes of practice, other regulatory provisions or Government advice. For example, the Committee of Advertising Practice (“CAP”) publishes guidance on online behavioural advertising that covers children.
  5. Policies and community standards. Online service providers should uphold their published terms, policies and community standards (including, but not limited to, privacy policies, age restriction, behaviour rules and content policies).
  6. Default settings. ‘High privacy’ settings should be provided by default (unless the online service provider can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child), thereby limiting the visibility and accessibility of children’s personal data. A hypothetical sketch of such defaults appears after this list.
  7. Data minimisation. Online service providers should collect and retain only the minimum amount of personal data necessary to provide the elements of the service in which a child is actively and knowingly engaged. Children should be provided with as much choice as possible over which elements of the service they wish to use and how much data they provide. This choice includes whether they wish their personal data to be used for (each) additional purpose or service enhancement.
  8. Data sharing. Children’s personal data should not be shared or disclosed with third parties unless there is a compelling reason to do so, taking account of the best interests of the child. Due diligence checks should be conducted on any third party recipients of children’s data, and assurances should be obtained to ensure that sharing will not be detrimental to the well-being of the child.
  9. Geolocation. Geolocation options should be turned off by default unless there is a compelling reason otherwise, again taking account of the best interests of the child. Online service providers should ensure that the service clearly indicates to child users when location tracking is active. Options which make a child’s location visible to others must default back to “off” at the end of each session.
  10. Parental controls. Age-appropriate information should be provided to the child about parental controls, where provided. If the service allows a parent or caregiver to monitor their child’s online activity or track their location, such monitoring should be made clear to the child through the use of obvious signs. Audio or video materials should also be provided to children and parents about children’s rights to privacy.
  11. Profiling. Profiling options must be turned off by default, unless there is a compelling reason for profiling, taking account of the best interests of the child. Profiling is permitted only if appropriate measures are in place to protect the child from any harmful effects (in particular, being shown content that is detrimental to their health or well-being).
  12. Nudge techniques. Design features that suggest or encourage children to provide unnecessary personal data, weaken or turn off their privacy protections, or extend their use of the service should not be used. By contrast, pro-privacy nudges are permitted, where appropriate.
  13. Connected toys and devices. The Code applies to connected toys and devices, such as talking teddy bears, fitness bands or ‘home hub’ interactive speakers. Providers should give clear, transparent information about who is processing the personal data and what their responsibilities are, at the point of purchase and set-up. Connected toys and devices should avoid passive collection of personal data (e.g., while in an inactive ‘listening’ mode awaiting key words that could wake the device).
  14. Online tools. Online service providers should provide prominent, age-appropriate and accessible tools to help children exercise their data protection rights and report concerns. The tools should also include methods for tracking the progress of complaints or requests, with clear information provided on response timescales.
  15. Data protection impact assessments (DPIAs). Online service providers whose services children may access should undertake a DPIA specifically to assess and mitigate risks to children. Annex C of the Code provides a template DPIA that modifies the ICO’s standard template to include a section addressing each of the 16 standards in the Code.
  16. Governance and accountability. Online service providers should ensure that they have policies and procedures in place that demonstrate how they comply with their data protection obligations and the Code, including data protection training for all staff involved in the design and development of online services likely to be accessed by children.
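
To make the default-setting standards more concrete, below is a minimal sketch of how an online service might encode them. It is a hypothetical illustration only: the interface, field names and function are our own assumptions, not drawn from the Code, and loosely reflect standards 6, 9 and 11 (high-privacy defaults, geolocation and profiling off by default, and location-sharing options that revert to “off” at the end of each session).

```typescript
// Hypothetical sketch of "high privacy by default" settings for a
// child-accessible online service. All names are illustrative and
// are not taken from the Code itself.

interface PrivacySettings {
  profileVisibility: "private" | "contacts" | "public";
  geolocationEnabled: boolean;       // standard 9: off by default
  locationVisibleToOthers: boolean;  // standard 9: reverts to off each session
  profilingEnabled: boolean;         // standard 11: off by default
  personalisedAdvertising: boolean;  // off by default, per standards 4 and 11
}

// Standard 6: 'high privacy' defaults applied to every new child account.
const childDefaults: PrivacySettings = {
  profileVisibility: "private",
  geolocationEnabled: false,
  locationVisibleToOthers: false,
  profilingEnabled: false,
  personalisedAdvertising: false,
};

// Standard 9: any option that makes a child's location visible to others
// must default back to "off" at the end of each session.
function onSessionEnd(current: PrivacySettings): PrivacySettings {
  return { ...current, locationVisibleToOthers: false };
}
```

Under the Code, any departure from defaults along these lines would require a compelling reason, assessed against the best interests of the child.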

IoT Update: Flurry of Federal 5G Activity Indicates Important Growth Opportunities for the IoT Ecosystem

From the Federal Communications Commission (“FCC”) to Congress to the White House, the federal government has continued to stress the importance of investment and innovation in fifth-generation (“5G”) wireless technology. This push bodes well for the many industries that rely on the Internet of Things (“IoT”), such as transportation, healthcare, and manufacturing, to name a few. As we have previously discussed, 5G deployment is critical for IoT because the IoT ecosystem will rely heavily on the increased speeds and capacity, as well as the reduced latency, that 5G technology will enable. Below we discuss the most recent pushes for 5G development from federal leadership before surveying key industries in the IoT ecosystem that we expect to benefit from these efforts.

IoT Update: EU Commission Issues Recommendation on Cybersecurity in the Energy Sector

The European Commission (“Commission”) has published a Recommendation on cybersecurity in the energy sector (“Recommendation”). The Recommendation builds on recent EU legislation in this area, including the NIS Directive and the EU Cybersecurity Act (see our posts here and here). It sets out guidance for achieving a higher level of cybersecurity that takes into account the specific characteristics of the energy sector, including its use of legacy technology and its interdependent systems across borders.

EDPB Begins Consultation on New Guidelines on Use of the “Performance of a Contract” GDPR Legal Basis by Online Services

On 9 April 2019, the European Data Protection Board (“EDPB”) adopted new guidelines “on the processing of personal data under Article 6(1)(b) GDPR in the context of the provision of online services to data subjects.”

In general, the GDPR requires that processing of personal data be justified under a legal basis in Article 6 GDPR.  One such legal basis is Article 6(1)(b), which covers data processing that is “necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract.”  The new EDPB guidelines consider the meaning of this legal basis, and in particular whether online services can rely on it for data processing for purposes such as service improvement, fraud prevention, targeted advertising, and service personalization.

FCC Warns Marketers of Video Streaming Devices to Comply with Device Authorization Rules

Earlier this month, the Federal Communications Commission (“FCC”) issued an Enforcement Advisory reminding manufacturers, importers, and retailers of TV set-top boxes that stream Internet-based content to comply with FCC device authorization rules.  As the FCC noted in its Advisory, violations of these rules can lead to monetary penalties of more than $19,000 per day of violation, up to more than $147,000 for an ongoing violation.  To avoid these penalties, the Advisory noted, entities that manufacture, import, market or operate these set-top boxes should ensure that they comply with the device authorization rules.

AI Update: EU High-Level Working Group Publishes Ethics Guidelines for Trustworthy AI

On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as the guidance’s interaction with existing legislation and how to reconcile tensions between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a focus in particular on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

ICO opens beta phase of privacy “regulatory sandbox”

On 29 March 2019, the ICO opened the beta phase of the “regulatory sandbox” scheme (the “Sandbox”), which is a new service designed to support organizations that are developing innovative and beneficial projects that use personal data.  The application process for participating in the Sandbox is now open, and applications must be submitted to the ICO by noon on Friday 24 May 2019. The ICO has published on its website a Guide to the Sandbox, which explains the scheme in detail.

The purpose of the Sandbox is to support organizations that are developing innovative products and services using personal data, and to develop a shared understanding of what compliance looks like in particular innovative areas.  Participating organizations are likely to benefit from the opportunity to liaise directly with the regulator on innovative projects raising complex data protection issues.  The Sandbox will also be an opportunity for market leaders in innovative technologies to influence the ICO’s approach to use cases that present challenging compliance questions, or where there is uncertainty about what compliance requires.

IoT Update: How Smart Cities and Connected Cars May Benefit from Each Other

Innovative leaders worldwide are investing in technologies to transform their cities into smart cities: environments in which data collection and analysis are used to manage assets and resources efficiently.  Smart city technologies can improve safety, manage traffic and transportation systems, and save energy, as we discussed in a previous post.  One important aspect of a successful smart city will be ensuring that infrastructure is in place to support new technologies.  Federal investment in infrastructure may accordingly benefit both smart cities and smart transportation, as explained in another post on connected and autonomous vehicles (“CAVs”).

Given the growing presence of CAVs in the U.S., and the legislative efforts surrounding them, CAVs are likely to play an important role in the future of smart cities.  This post explores how cities are already using smart transportation technologies and how CAV technologies fit into this landscape.  It also addresses the legal issues and practical challenges involved in developing smart transportation systems.  As CAVs and smart cities continue to develop, each technology can leverage the other’s advances and encourage the other’s deployment.

AI Update: What Happens When a Computer Denies Your Insurance Coverage Claim?

Artificial intelligence is your new insurance claims agent. For years, insurance companies have used “InsurTech” AI to underwrite risk. But until recently, the use of AI in claims handling was only theoretical. No longer. The advent of AI claims handling creates new risks for policyholders, but it also creates new opportunities for resourceful policyholders to uncover bad faith and encourage insurers to live up to their side of the insurance contract.

Most readers are familiar with Lemonade, the InsurTech start-up that boasts a three-second AI claims review process. However, as noted in a Law360 article last year, Lemonade has deferred any potential claim denials for human review, so the prospect of AI bad faith remains untested.  But it is likely only a matter of time before insurers face pressure to use the available technology to deny claims as well.

So what happens when a claim is denied?
