IoT Update: NTIA Requests Comments Regarding International Internet Policy

Earlier this week, the National Telecommunications and Information Administration (NTIA), the executive branch agency responsible for telecommunications and information policy, released a Notice of Inquiry requesting that any interested party—including the private sector, technical experts, academics, and civil society—help the agency determine its international internet policy priorities. In particular, NTIA is seeking comments and recommendations regarding four topics: (1) the free flow of information and jurisdiction, (2) the multistakeholder approach to Internet governance, (3) privacy and security, and (4) emerging technologies and trends.


IoT Update: Federal Appeals Courts Split on Forensic Searches of Devices Seized at Border

Two federal appellate courts are taking sharply different views on whether—and why—government agents must have some amount of suspicion to conduct forensic searches of electronic devices seized at the border.

The Fourth Circuit on May 9, 2018, held that government agents must have reasonable suspicion to conduct forensic searches of cell phones seized at the border.  It said that decision was based on the Supreme Court’s recognition in Riley v. California that phones contain information with a “uniquely sensitive nature.”  The Fourth Circuit and Ninth Circuit are the only two federal appellate courts to require reasonable suspicion for forensic border searches.

In contrast, the Eleventh Circuit on May 23, 2018, rejected that position—and held that no suspicion is required for forensic border searches of electronic devices.  According to the Eleventh Circuit, even after Riley, “it does not make sense to say that electronic devices should receive special treatment because so many people now own them or because they can store vast quantities of records or effects.”

The decisions evince a split in how far courts are willing to apply Riley, including whether that decision has any bearing on border searches, which are a narrow exception to the Fourth Amendment’s warrant requirement.

Fourth Circuit: Riley Applies to Border Searches

In United States v. Kolsuz, the Fourth Circuit analyzed the reasonableness of a forensic search of the cell phone of a Turkish national traveling out of Dulles International Airport who was detained after agents located unlicensed firearms in his luggage.

Kolsuz’s phone was seized at the airport and driven to an off-site facility, where agents used an extraction program that took “a full month, and yielded an 896-page report” about the phone’s contents, according to the court.  That report included Kolsuz’s personal contact lists, emails, messenger conversations, photographs, videos, calendar, web browsing history, and call logs, along with a history of Kolsuz’s physical location down to precise GPS coordinates, the court said.  Notably, the phone remained in airplane mode during the extraction, so that the forensic program obtained only data stored on the phone itself and not data stored remotely in the cloud.

The Fourth Circuit held this was a “border” search, even though it was conducted several miles from the airport after Kolsuz was in custody.  Because the government invoked the border exception in investigating the “transnational offense” of firearms trafficking, the court held there was a “direct link” to the border search rationale, unlike cases in which the government seeks to invoke the border exception “on behalf of its generalized interest in law enforcement and combatting crime.”

The court next addressed the level of suspicion required to conduct a forensic search of an electronic device seized at the border.  It held that “[a]fter Riley, . . . a forensic search of a digital phone must be treated as a nonroutine border search, requiring some form of individualized suspicion.”  According to the Fourth Circuit, the “key to Riley’s reasoning is its express refusal to treat such phones as just another form of container, like the wallets, bags, address books, and diaries covered by the search incident [to arrest] exception.”  Given that refusal, the court held that “cell phones are fundamentally different . . . from other objects subject to government searches.”

Eleventh Circuit:  Riley Does Not Apply to Border Searches

In United States v. Touset, the Eleventh Circuit rejected this reasoning.  Touset involved the forensic search of two laptops, two hard drives, and two tablets seized at the border after a U.S. citizen arrived at Atlanta’s Hartsfield-Jackson International Airport.  The forensic searches revealed child pornography on the two laptops and the two hard drives—although the court does not explain how those forensic searches were conducted.

According to the Eleventh Circuit, “the Fourth Amendment does not require any suspicion for forensic searches of electronic devices at the border.”  That is because the Supreme Court has afforded greater protection to persons than to property and does not distinguish between searches of “different types of property,” the court said.  It held there was “no reason why the Fourth Amendment would require suspicion for a forensic search of an electronic device when it imposes no such requirement for a search of other personal property.”

To reach that conclusion, the Eleventh Circuit relied on its March 2018 decision in United States v. Vergara, which held that Riley does not apply to border searches because that decision was limited to the search-incident-to-arrest doctrine.  (Vergara did not address the issue of what level of suspicion was required, because the defendant in that case only argued a warrant was needed—and the court held it was not.)  It also distinguished Riley by finding that the rationales supporting the border exception still had force when applied to digital information—unlike the rationales supporting the search-incident-to-arrest exception.

Indeed, the Eleventh Circuit suggested that “if we were to require reasonable suspicion for searches of electronic devices, we would create special protection for the property most often used to store and disseminate child pornography.”  It found “no reason” to “create a special rule that will benefit offenders who now conceal contraband in a new type of property.”

Effect Unclear Given CBP Guidance

The practical implications of these cases are not yet clear—particularly because U.S. Customs and Border Protection in January issued guidance requiring reasonable suspicion for forensic searches of electronic devices seized at the border.  Given that guidance (summarized in our prior post), it is possible that agents may conduct fewer forensic searches without reasonable suspicion, reducing the frequency with which this issue is litigated.  Still, because the guidance contains an exception allowing for suspicionless forensic searches in cases of “national security concern,” the issue may arise more frequently in that particular context.

IoT Update: Congress Hears Testimony on IoT Legislation

The House Energy and Commerce Committee’s Subcommittee on Digital Commerce and Consumer Protection held a hearing this week to discuss the State of Modern Application, Research, and Trends of IoT Act (SMART IoT Act). This proposed legislation would direct the Secretary of Commerce to conduct a comprehensive study of the IoT industry and Federal agencies with jurisdiction over the IoT industry, as well as all IoT regulations and policies implemented by those agencies. The SMART IoT Act would also require the Secretary of Commerce to produce a report to Congress within one year of the bill’s enactment, detailing the results of the study and recommendations for enabling the secure growth of IoT.  Although this legislation has not yet been formally introduced, the Subcommittee on Digital Commerce and Consumer Protection has published the bill’s full text as well as a summary.

Three witnesses testified:

  • Tim Day, Senior Vice President, Chamber Technology Engagement Center, U.S. Chamber of Commerce;
  • Michelle Richardson, Deputy Director, Freedom, Security, and Technology Project, Center for Democracy and Technology; and
  • Dipti Vachani, Vice President, Internet of Things Group, General Manager, Platform Management and Customer Engineering, Intel Corporation.

At the hearing, the SMART IoT Act drew broad support from all of the witnesses as well as from members on both sides of the aisle. However, there were differing opinions regarding the focus of the study called for by the SMART IoT Act, as well as the next steps that Congress should take with respect to IoT.

Covington CleanEquity Conversations: AI and IoT – Benefits, Risks, and the Role of Regulation

On March 8-9, 2018, a bespoke group of approximately 200 leading entrepreneurs, investors and advisors focused on deploying and commercializing cutting edge technologies gathered from across the globe in Monte Carlo for the 11th annual CleanEquity® Monaco Conference.  Complementing other plenary sessions and emerging company presentations, the conference introduced a new feature — Covington CleanEquity Conversations — intended to capture and memorialise the unique thought leadership opportunity presented by the gathering in Monaco. On the first day, conference participants separated into three breakout groups for Chatham House Rule discussions, curated by partners from the international law firm Covington & Burling LLP, of three critical issues confronting cleantech deployment and commercialisation:

  • AI and IoT – Benefits, Risks, and the Role of Regulation
  • Sustainability – What goals should businesses prioritise and what are the right metrics?
  • Will market driven innovation alone save us from climate change?

On the second day, the Covington team reported key takeaways from the three breakout group discussions during the conference’s final plenary session.  Covington and CleanEquity’s organizer, the specialist investment bank Innovator Capital, are pleased to share brief summaries of the thought leadership developed by conference participants on each of the three topics.

________________________________________________________

AI and IoT – Benefits, Risks, and the Role of Regulation

  • Rapid evolution and proliferation of artificial intelligence and the Internet of Things holds tremendous promise for dramatic, transformational efficiency gains in nearly every industry.
  • At the same time, these technologies present risks of massive employment disruption, loss of privacy, and the ceding of human free will to decisions made by algorithms and machines.

In a session led by Covington’s Corporate Partner Simon Amies, conference participants examined these propositions and then considered two questions:  Where should regulation step in?  Can regulation be effective to manage the risks without diminishing the benefits?

The Benefits

There was universal agreement that evolution and proliferation in AI and the Internet of Things have the potential to bring transformational efficiency gains across virtually all sectors of industry.  AI has already transformed business models in the technology sector through the deployment of sophisticated algorithms to process vast quantities of data, and machine learning and automation are already being utilized on a large scale in other areas of industry, revolutionizing processes and delivering significant efficiency gains.

A number of the presenting companies noted that artificial intelligence already plays a pivotal role in their businesses, with some utilizing the technology at the heart of their business model — one company uses its machine learning system to manage and optimize grid operations — and others using AI as a tool to enhance research and refine product development.  One participant flagged the fundamental change to supply chain dynamics and manufacturing processes with the emergence of the smart factory in the Industry 4.0 model, leading to increased efficiency, reduced costs and maximization of resources.  Mass-customisation of lower-cost goods manufactured to order in close proximity to the market brings reduced shipping costs and lead times.

The Risks

But as often happens with the adoption of disruptive technology, new and often unforeseen risks and challenges emerge.

One participant noted concerns surrounding access to, and control and ownership of, personal data in the field of healthcare, given the focus on the development of personalized and precision medicines.  Another flagged how personal data could be used by employers to make hiring decisions or by insurers to price auto or life policies, all without explicit consent from the individuals.

One participant pointed to the safety and security concerns of having automated intelligent systems replace humans at the controls of cars and other machines and equipment.  In the case of the autonomous vehicle, who is responsible in the event of an accident where the system makes a conscious decision which turns out to be the wrong one, causing a fatality?  Another participant identified the risks of malicious attackers disrupting or asserting control of systems run by AI and IoT, whether on an industrial scale or on a micro level seeking to take advantage of one individual.

The threat of AI and IoT to jobs was also highlighted.  Many jobs that have kept the workforce occupied for generations could become redundant almost overnight as businesses look to adopt technologies that bring gains in efficiency and productivity while reducing labour costs.  The labour market is predicted to encounter massive change on a scale not seen since the Industrial Revolution, which will have consequent effects on wealth inequality and, potentially, global stability.  While governments and policy makers are likely to take steps to protect jobs, there will be increasing demand for skilled technicians capable of supporting digital capabilities.

The Role of Regulation

The discussion then focused on the two key questions of (a) where should regulation step in and (b) can regulation be effective to manage the risks without diminishing the benefits?

The first observation was that, against the backdrop of recent high profile data breaches and the imminent deadline for implementation of the EU’s General Data Protection Regulation, regulation is appropriate and has an important role in managing the risks presented by AI and IoT.  Data privacy legislation has continually evolved since the emergence of the internet, adapting and reacting to the challenges associated with mass collection, use and storage of personal data to ensure privacy, security and transparency.  Privacy laws already apply to AI systems that process personal data, which means new systems need to be designed to adhere to these standards where applicable.

One participant commented that it should not be left to the law-makers to ensure risks are adequately legislated against. There is also a role for participants in the market, particularly large corporations, to ensure responsible and fair practices are followed through the adoption of codes of best practice reflecting key ethical principles.  It was noted that Microsoft had established six ethical principles to guide the development and use of artificial intelligence — AI systems should be fair, reliable and safe, private and secure, inclusive, transparent, and accountable.[1]

In discussing the risks of autonomous vehicles, one participant noted that current product liability laws would apply, meaning that claims may exist where loss is caused by a vehicle that is found to be defective or unsafe.  It is likely that these laws will evolve to clarify where responsibility lies, and manufacturers and insurers will look to law-makers to set down standards on how autonomous systems that control driverless vehicles should operate in specific situations rather than make these decisions for themselves.

It was noted that the adoption of standards and regulations for AI and IoT would need to be consistent and coordinated on a global level.  International policy-makers such as the Organisation for Economic Co-operation and Development will need to develop standards that will be accepted universally.  With an increasingly fierce arms race developing among developed nations to be the economic leader in AI and IoT, this will be challenging.

The final point tackled by the group was the need for employment laws to evolve to recognize the changes in employment practices that are likely to flow from the move to automated systems.  Current employment laws are based around the model of employers employing workers at specific worksites, whereas people are increasingly engaged through remote, part-time or project-based work.  As jobs are displaced through adoption of AI and IoT, new skilled roles will be created to develop, monitor and manage the new systems.  Governments will have an important role in ensuring that the education curriculum adapts so that students acquire the skills required to support digital capabilities.

[1] Microsoft. 2018. The Future Computed – Artificial Intelligence and its role in society, available at https://blogs.microsoft.com/uploads/2018/02/The-Future-Computed_2.8.18.pdf.

IoT Update: China Releases National Autonomous Vehicle Road Testing Rules

In April 2018, China released its nationwide autonomous vehicle road testing rules, the Intelligent Internet-connected Vehicles Road Test Administrative Rules (for Trial Implementation) (the “National Rules”), which took effect on May 1, 2018. “Intelligent Internet-connected vehicles,” as defined under the National Rules, are commonly referred to as “intelligent vehicles” or “autonomous vehicles,” and involve a system of advanced sensors, controllers, actuators, and other components that may ultimately become a substitute for human drivers. The National Rules govern three categories of autonomous vehicles depending on the level of automation and human interaction required, i.e., conditional automation, high-level automation and full automation.

Prior to the release of the National Rules, selected Chinese cities including Beijing, Shanghai, Baoding and Chongqing had already implemented their own respective local road test rules for autonomous vehicles, and Shenzhen’s local proposals were at the public consultation phase. The National Rules are largely consistent with the various existing local rules, and provide an example for additional local governments to formulate their own detailed implementation rules.

IoT Update: Will California’s New Autonomous Vehicles Regulations Provide a Roadmap for a National Regulatory Framework on Driverless Cars?

On April 6th, the California Public Utilities Commission (CPUC) issued a Proposed Decision authorizing pilot testing for autonomous vehicles (AVs) in California. This action follows up on the California DMV’s permitting rules for AVs in California, which would have allowed driverless testing and deployment permits to issue as early as April 2 of this year. The DMV’s action was big news when it broke at the end of February; it meant that AVs could be deployed without any human in the vehicle. Now, the CPUC has proposed a pilot to allow the use of driverless test vehicles with passengers inside as soon as this summer.

While shared and electric mobility has already been deployed at scale, the road ahead for autonomy is still evolving. California is working to tackle this third pillar, and prior to the CPUC’s Proposed Decision, companies like Uber and GM Cruise had urged the Commission to move forward to enable the use of AVs for passenger transportation under existing regulatory frameworks. Lyft encouraged the Commission to address AVs in a rulemaking, noting that it “ma[de] little sense” to wait for Congress to act, or to “scramble” to regulate after AVs are already deployed en masse.

But now that the Proposed Decision has been published, stakeholders need to make sense of it.


Covington IoT Update: U.S. Legislative Roundup on IoT

As policymakers weigh the many policy implications associated with the Internet of Things (“IoT”), U.S. lawmakers have put forward a variety of proposals for studying—and regulating—IoT devices. Although the likelihood of current proposals becoming law this term remains uncertain at best, existing legislative proposals provide important context and insight into the ways that lawmakers view IoT and the government’s role in fostering and regulating the technology.

Below, we summarize five bills in the U.S. that approach IoT from different perspectives—including seeking to develop IoT technologies, imposing contractual requirements on companies that provide IoT devices to the government, regulating specific security standards, and creating new resources for consumers to better understand the security and reliability of their IoT devices.

Developing Innovation and Growing the Internet of Things (“DIGIT”) Act

The DIGIT Act was introduced in the Senate (S. 88) and the House (H.R. 686) in January 2017 to foster the development of IoT technologies. The Act was passed by the Senate in August 2017 on a voice vote, but has stalled in the House. The measure would direct the Secretary of Commerce to convene a “working group of Federal stakeholders” to create recommendations and a report to Congress on IoT. The working group would:

  • Identify any federal regulations, statutes, grant practices, budgetary or jurisdictional challenges, and other sector-specific policies that are inhibiting or could inhibit the development of IoT;
  • Consider policies or programs to improve federal agency coordination on IoT;
  • Consider any findings or recommendations made by a new steering committee (described below) and act to implement those recommendations where appropriate; and
  • Examine how federal agencies can benefit from, currently use, and are prepared to adopt IoT, including any additional security measures that may be needed for IoT adoption by the federal government.

The Act would also create a new steering committee of non-federal-government representatives, tasked with advising the working group about issues including the availability of adequate spectrum, international proceedings relating to IoT, and policies and programs affecting individual privacy and critical infrastructure protection.

The DIGIT Act also would require the Federal Communications Commission (“FCC”), in consultation with the National Telecommunications and Information Administration (“NTIA”), to issue a notice of inquiry seeking public comment on current and future spectrum needs relating to the IoT, including regulatory barriers to necessary spectrum, the role of licensed and unlicensed spectrum in the IoT, and whether adequate spectrum is currently available.

Internet of Things Cybersecurity Improvement Act of 2017

This bill focuses on IoT devices purchased by the U.S. Government—and mandates specific contractual provisions agencies are to include in any contract for such devices. It was introduced in the Senate (S. 1691) in August 2017.

The measure requires the Director of the Office of Management and Budget (“OMB”) to issue guidelines with specific contractual clauses for each executive agency to require in contracts for the acquisition of internet-connected devices. These contractual provisions would require:

  • Written certification by the contractor that the device:
    • does not contain any known security vulnerability or defect;
    • relies on software capable of being updated by the vendor;
    • uses only non-deprecated industry standard protocols for communication, encryption, and internet connection; and
    • does not contain fixed or hard-coded credentials used for remote administration.
  • Notification by the contractor to the purchasing agency of any known vulnerabilities or defects subsequently disclosed or discovered;
  • The device to be updated or replaced to allow for patches or repair;
  • The provision of repair or a replacement device in a timely manner with respect to any new vulnerability discovered (if it cannot be patched or remediated); and
  • The provision of information about how the device receives security updates, the timeline for ending security support, formal notice when security support has ceased, and other information recommended by the NTIA.

The bill provides exceptions for devices with limited data processing and functionality where security would be “unfeasible” or “economically impractical.” In certain cases, it also allows agencies to rely on compliance with existing third-party or agency security standards in lieu of these requirements, when the other standards provide an equivalent level of security.

Securing the IoT Act of 2017

This measure, introduced in the House in March 2017 (H.R. 1324), is a targeted bill that would require the FCC to establish cybersecurity standards that radio frequency equipment must meet throughout its lifecycle (design, installation, and retirement) in order to be certified under the FCC’s technical standards for equipment authorization.

Cyber Shield Act of 2017

This consumer-focused bill, introduced in the House (H.R. 4163) and Senate (S. 2020) in October 2017, would create a voluntary labeling and “grading” system for IoT devices. Specifically, it directs the Secretary of Commerce to establish a voluntary program to “identify and certify covered products with superior cybersecurity and data security through voluntary certification and labeling.” Under this program, products may be given grades that “display the extent to which a product meets the industry-leading cybersecurity and data security benchmarks.”

As part of the program, the Secretary of Commerce is also directed to establish and maintain cybersecurity and data security benchmarks, by convening and consulting interested parties and federal agencies.

The IoT Consumer Tips to Improve Personal Security Act of 2017

This consumer-focused measure, introduced in the Senate in December 2017 (S. 2234), would require the Federal Trade Commission to develop cybersecurity resources for consumer education and awareness regarding the purchase and use of IoT devices. These resources are to be technology-neutral and are to include guidance, best practices, and advice for consumers to protect against, mitigate, and recover from cybersecurity threats or security vulnerabilities.

Covington Artificial Intelligence Update: European Commission Publishes Communication on Artificial Intelligence for Europe

On April 25, 2018, the European Commission (EC) published its “Artificial Intelligence for Europe” communication (the Communication), in which it sets out a roadmap for its AI initiatives. Having acknowledged the crucial need to boost AI in the EU, the EC commits to supporting investment, (re)considering legislation and soft law initiatives, and coordinating Member States’ efforts. This blog post highlights some of the EC’s initiatives.

Covington IoT Update: Mobile Phone Manufacturer Settles with FTC Over Allegations that Its Vendor Collected Personal Data without Consent

Mobile phone manufacturer BLU Products, Inc. entered into a settlement agreement with the FTC last week to resolve allegations that one of BLU’s China-based vendors collected personal information about its consumers without proper consent.

The settlement agreement, which took the form of a consent order, applies not only to BLU but also to its CEO and any other companies he owns and controls.  It requires that the company clarify its disclosures regarding customer data collection practices.

U.S. Patent and Trademark Office Releases Memorandum on Recent Subject Matter Eligibility Decisions

On April 2, 2018, the U.S. Patent and Trademark Office released a memorandum to the Patent Examining Corps regarding recent subject matter eligibility decisions issued by the Federal Circuit. The memorandum discusses two recent decisions that found that claims that improve computer technology are directed to patent-eligible subject matter rather than to an ineligible abstract idea. The memorandum and decisions are instructive for practitioners who draft patent applications, confront subject matter eligibility challenges, or respond to USPTO rejections under 35 U.S.C. § 101.

In Finjan, Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299 (Fed. Cir. 2018), the Court (Dyk, Linn, Hughes) found no error in the district court’s subject matter eligibility determination and unanimously held that the claims were patent-eligible under § 101 because they improved computer technology by protecting users against previously unknown viruses and enabling more flexible virus filtering. The invention recited specific steps to accomplish the desired result, and was a non-abstract improvement over traditional computer functionality and virus scanning techniques, which only recognized the presence of previously identified viruses.

Relying on recent Federal Circuit precedent, the Court stated that in cases involving software inventions, the inquiry into whether the claims are directed to an abstract idea often turns on whether the claims focus on a specific asserted improvement in computer capabilities. The claims at issue in Finjan are directed to a method of providing computer security by scanning a downloadable program for suspicious code such as viruses, and attaching the results of the scan to the downloadable program in the form of a security profile. The Court adopted a district court claim construction in finding that the behavior-based virus scan approach improved computer functionality because it determines whether the program performs hostile or potentially hostile operations.

