IoT Update: Covington Hosts First Webinar on Connected and Automated Vehicles

On February 27, 2019, Covington hosted its first webinar in a series on connected and automated vehicles (“CAVs”).  During the webinar, which is available here, Covington’s regulatory and public policy experts covered the current state of play in U.S. law and regulations relating to CAVs.  In particular, Covington’s experts focused on relevant developments in: (1) federal public policy; (2) federal regulatory agencies; (3) state public policy; (4) autonomous aviation; and (5) national security.

Highlights from each of these areas are presented below.


AI Update: U.S. House Resolution on AI Ethical Development Introduced

On February 27th, Reps. Brenda Lawrence (D-Mich.) and Ro Khanna (D-Calif.) introduced a resolution emphasizing the need to ethically develop artificial intelligence (“AI”). H. RES. 153, titled “Supporting the development of guidelines for ethical development of artificial intelligence,” calls on the government to work with stakeholders to ensure that AI is developed in a “safe, responsible, and democratic” fashion. The resolution has nine Democratic sponsors and was referred to the House Committee on Science, Space, and Technology.


IoT Update: Covington to Host Webinar on Connected and Automated Vehicles

One week from today, Covington will host its first webinar in a series on connected and automated vehicles (“CAVs”).  The webinar will take place on February 27 from 12 to 1 p.m. Eastern Time. During the webinar, Covington’s regulatory and legislative experts will cover developments in U.S. law and regulations relating to CAVs. Those topics include:

  • Federal regulation affecting CAVs, with a focus on the National Highway Traffic Safety Administration (“NHTSA”), the Federal Aviation Administration (“FAA”), the Federal Communications Commission (“FCC”), and review by the Committee on Foreign Investment in the United States (“CFIUS”).
  • Where Congress stands on CAV legislation, including the AV START Act, the SELF DRIVE Act, and infrastructure legislation.
  • State-level legislative, regulatory, and policy developments, including a closer look at California’s regulations.
  • Updates and trends specific to the autonomous aviation industry.
  • Foreign investment and export controls impacting CAVs.


Defense Department Releases Artificial Intelligence Strategy

(This article was originally published in Global Policy Watch.)

On February 12, 2019, the Department of Defense released a summary and supplementary fact sheet of its artificial intelligence strategy (“AI Strategy”). The AI Strategy has been a couple of years in the making, as the Trump administration has scrutinized the relative investments and advancements in artificial intelligence by the United States, its allies and partners, and potential strategic competitors such as China and Russia. The animating concern was articulated in the Trump administration’s National Defense Strategy (“NDS”): strategic competitors such as China and Russia have made investments in technological modernization, including artificial intelligence, and in conventional military capabilities that are eroding the U.S. military advantage and changing how we think about conventional deterrence. As the NDS states, given “[t]he reemergence of long-term strategic competition” and the “rapid dispersion of technologies” such as “advanced computing, ‘big data’ analytics, [and] artificial intelligence,” investment in these areas will be necessary to “ensure we will be able to fight and win the wars of the future.”

The AI Strategy offers that “[t]he United States, together with its allies and partners, must adopt AI to maintain its strategic position, prevail on future battlefields, and safeguard [a free and open international] order. We will also seek to develop and use AI technologies in ways that advance security, peace, and stability in the long run. We will lead in the responsible use and development of AI by articulating our vision and guiding principles for using AI in a lawful and ethical manner.”

DoD will implement the AI Strategy through five main lines of effort:

  • Delivering AI-enabled capabilities that address key missions
  • Scaling AI’s impact across DOD through a common foundation that enables decentralized development and experimentation
  • Cultivating a leading AI workforce
  • Engaging with commercial, academic, and international allies and partners
  • Leading in military ethics and AI safety

The AI Strategy emphasizes that “[f]ailure to adopt AI will result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms.”

The Joint Artificial Intelligence Center (“JAIC”), which was established in June 2018, is led by Lt. Gen. Jack Shanahan and reports to DoD Chief Information Officer Dana Deasy.  It is designated as the principal implementer and integrator of the AI Strategy. Specifically, the JAIC will coordinate activities that align with DoD’s strategic approach, such as: (1) rapidly delivering AI-enabled capabilities; (2) establishing a common foundation for scaling AI’s impact across DoD; (3) facilitating AI planning, policy, governance, ethics, safety, cybersecurity, and multilateral coordination; and (4) attracting and cultivating world-class personnel.

The AI Strategy makes clear that DoD recognizes that “[t]he present moment is pivotal: we must act to protect our security and advance our competitiveness, seizing the initiative to lead the world in the development and adoption of transformative defense AI solutions that are safe, ethical, and secure. JAIC will spearhead this effort, engaging with the best minds in government, the private sector, academia, and international community. The speed and scale of the change required are daunting, but we must embrace change if we are to reap the benefits of continued security and prosperity for the future.” Accordingly, Lt. Gen. Shanahan and Dana Deasy, speaking to a group of reporters, highlighted that DoD has recently invested $90 million in AI-related research and technology development, and that DoD will request additional resources for the JAIC in its fiscal year 2020 budget request in order to support its execution of the AI Strategy.

The DoD strategy comes on the heels of President Trump’s Executive Order (“EO”), “Maintaining American Leadership in Artificial Intelligence,” that launches a coordinated federal government strategy for artificial intelligence. The EO directs federal departments and agencies to invest the resources necessary to drive technological breakthroughs in AI (and outpace China’s developments in this area), lead the development of global technical standards, address workforce issues as industries adopt AI, foster trust in AI technologies, and promote U.S. research and innovation with allies and partners.

AI Update: President Trump Signs Executive Order on Artificial Intelligence

Today, President Trump signed an Executive Order (“EO”), “Maintaining American Leadership in Artificial Intelligence,” that launches a coordinated federal government strategy for Artificial Intelligence (the “AI Initiative”).  Among other things, the AI Initiative aims to solidify American leadership in AI by empowering federal agencies to drive breakthroughs in AI research and development (“R&D”) (including by making data and computing resources available to the AI research community), to establish technological standards to support reliable and trustworthy systems that use AI, to provide guidance with respect to regulatory approaches, and to address issues related to the AI workforce.  The Administration’s EO follows national AI strategies announced by at least 18 other countries, and signals that investment in artificial intelligence will continue to escalate in the near future—as will deliberations with respect to how AI-based technologies should be governed.


IoT Update: Building Out the “Cutting Edge” for an Infrastructure Package

On Tuesday, President Donald Trump used his State of the Union address to reinforce the need for legislation to update the nation’s infrastructure. In the speech, he urged both parties to “unite for a great rebuilding of America’s crumbling infrastructure” and said that he is “eager to work” with Congress on the issue. Significantly, he said that any such measure should “deliver new and important infrastructure investment, including investments in the cutting-edge industries of the future.” He emphasized: “This is not an option. This is a necessity.”

President Trump’s push on infrastructure is particularly noteworthy because infrastructure remains popular with both parties, and the new House leadership has echoed the call for an infrastructure package.

While the State of the Union provided few details about the kinds of “cutting-edge industries” that could be the focus of a bipartisan infrastructure package, three key technologies are likely candidates: 5G wireless, connected and automated vehicles (“CAV”), and smart city technologies. A fact sheet on infrastructure released by the White House after the speech reiterated the call to “invest in visionary products” and emphasized the importance of “[m]astering new technologies” including 5G wireless. Such investments may not only improve “crumbling” infrastructure, but also spur the development of these technologies—and Congress is already holding a series of hearings devoted to identifying infrastructure needs.


NIST Seeks Comment on Security for IoT Sensor Networks

The National Institute of Standards and Technology (“NIST”) is seeking comments on its draft project on securing sensor networks for the Internet of Things (“IoT”). Organizations and individuals concerned with the security of IoT sensor networks are invited to comment on the draft through March 18, 2019.

Sensor networks are integral parts of many modern industries and critical infrastructure, including the electric grid, healthcare system, environmental protection, and manufacturing. These networks of small devices can detect, analyze, and transmit data, such as by monitoring and reacting to the physical characteristics around them—including temperature, pollution, humidity, and electrical usage. In the electric grid, for example, sensor networks may monitor and control the power generation of distributed resources, such as solar cells owned by consumers. Connected and automated vehicles are increasingly reliant on sensors deployed inside vehicles and in road infrastructure, which detect and communicate environmental features and hazards to the vehicle. Sensor networks are also increasingly used in medical devices, which can be programmed to monitor an individual’s health condition. They may also monitor properties of critical water supplies, including to determine the presence of minerals or toxins. The accuracy, integrity, and availability of the data being reported and monitored by a sensor network can be critical.

While the NIST project focuses on sensor networks used for building management—for example, systems designed to open and close vents based on temperatures or to stop pulling air into a facility at a certain humidity threshold—NIST expects its work to be “applicable to multiple industry sectors.” According to NIST, the wireless sensor network market was valued at $573 million in 2016 and is projected to increase to at least $1.2 billion by 2023.
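
To make the building-management example concrete, the minimal Python sketch below illustrates the kind of threshold-based logic such a system might apply: opening or closing vents based on temperature, and halting outside-air intake once humidity crosses a set point. It is a hypothetical illustration only; the sensor fields, thresholds, and simulated readings are invented for this post and are not drawn from the NIST draft project.

```python
# Hypothetical sketch only: simplified threshold logic of the sort a building
# management system (BMS) sensor network might apply. The sensor fields,
# thresholds, and simulated readings are invented for illustration and are
# not taken from the NIST draft project.

from dataclasses import dataclass
from typing import List
import random


@dataclass
class SensorReading:
    sensor_id: str
    temperature_c: float  # ambient temperature reported by the sensor
    humidity_pct: float   # relative humidity reported by the sensor


def decide_actions(reading: SensorReading,
                   vent_open_above_c: float = 24.0,
                   intake_stop_above_pct: float = 70.0) -> List[str]:
    """Return the control actions implied by a single sensor reading."""
    actions = []
    # Open vents when the space is warmer than the configured setpoint.
    if reading.temperature_c > vent_open_above_c:
        actions.append(f"{reading.sensor_id}: open vents")
    else:
        actions.append(f"{reading.sensor_id}: close vents")
    # Stop pulling in outside air once humidity exceeds the threshold.
    if reading.humidity_pct > intake_stop_above_pct:
        actions.append(f"{reading.sensor_id}: stop outside-air intake")
    return actions


if __name__ == "__main__":
    # Simulate a few readings in place of a real sensor network feed.
    for i in range(3):
        reading = SensorReading(
            sensor_id=f"room-{i}",
            temperature_c=random.uniform(18.0, 30.0),
            humidity_pct=random.uniform(40.0, 90.0),
        )
        for action in decide_actions(reading):
            print(action)
```

The NIST project's focus is not on control logic of this kind, but on securing the components and communications channels that carry such readings to the back-end building control systems.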

The 29-page project on which NIST seeks public comment focuses on the requirements to ensure sensor networks function securely. It identifies threats relevant to each component and technologies that can be used to help improve security. The document also maps the characteristics of such commercial technologies to the NIST Cybersecurity Framework.

NIST has identified four goals for the project:

  • Serve as a building block for sensor networks in general, future IoT projects, or specific sensor network use cases
  • Establish a security architecture to protect a building management system sensor network by using standards and best practices, including the communications channel/network used to transmit sensor data to the back-end building control systems (hosts) for processing
  • Explore the cybersecurity controls to promote the reliability, integrity, and availability of building management system sensor networks
  • Exercise/test the cybersecurity controls of the building management system sensor network to verify that they mitigate the identified cybersecurity concerns/risks, and understand the performance implications of adding these controls to the building management system sensor network

Comments are sought through March 18, 2019. Organizations and individuals are invited to submit their comments online or via email.

UK Consumer Protection Regulator (“CMA”) Extracts Undertakings from Social Media Influencers to Increase Transparency in Sponsored Posts

On January 23, 2019, the UK’s Competition and Markets Authority (“CMA”) announced that it had secured undertakings from 16 social media influencers, including well-known names such as Ellie Goulding, Rosie Huntington-Whiteley and Rita Ora, that commit each influencer to increased transparency when they promote or endorse brands or services on social media on behalf of businesses.

The CMA stressed that applicable UK consumer law requires that it be made clear when posts are sponsored (i.e., paid or incentivized).  The CMA also disclosed that it has sent warning letters to other (unidentified) influencers and celebrities, and indicated it will continue to consider the role of social media platforms in this issue.

This enforcement action, together with the CMA’s recent success in court against secondary ticketing website Viagogo, and its more recent threat to take Viagogo to court again, is evidence that consumer protection enforcement remains high on the CMA’s agenda.

Below, we summarise key elements of the undertakings in more detail, and also refer to further available UK regulatory guidance on how to advertise on social media.


AI Update: Jumping to Exclusions: New Law Provides Government-Wide Exclusion Authorities to Address Supply Chain Risks

On the eve of the recent government shutdown over border security, Congress and the President were in agreement on a different issue of national security: mitigating supply chain risk. On December 21, 2018, the President signed into law the Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure Technology Act (the “SECURE Technology Act”) (P.L. 115-390). The Act includes a trio of bills that were designed to strengthen the Department of Homeland Security’s (“DHS”) cyber defenses and mitigate supply chain risks in the procurement of information technology. The last of these three bills, the Federal Acquisition Supply Chain Security Act, should be of particular interest to contractors that procure information technology-related items in connection with the performance of a U.S. government contract. Among other things, the bill establishes a Federal Acquisition Security Council, which is charged with several functions, including assessing supply chain risk. One function of the Council is to identify, as appropriate, executive agencies to provide common contract solutions to support supply chain risk management activities, such as subscription services or machine-learning-enhanced analysis applications to support informed decision making. The bill also gives the Secretary of DHS, the Secretary of the Department of Defense (“DoD”) and the Director of National Intelligence authority to issue exclusion and removal orders as to sources and/or covered articles based on the Council’s recommendation. Finally, the bill allows federal agencies to exclude sources and/or covered articles deemed to pose a supply chain risk from certain procurements.

IoT Update: Are Wearables Medical Devices Requiring a CE-Mark in the EU?

Wearable watches that help consumers obtain a better understanding of their eating patterns; wearable clothes that send signals to treating physicians; smart watches: these are but a few examples of the increasingly available and increasingly sophisticated “wearables” on the EU market. These technologies are an integral part of many people’s lives, and in some cases allow healthcare professionals to follow up on the condition or habits of their patients, often in real time. How do manufacturers determine which wearables qualify as medical devices? How do they assess whether their devices need a CE-mark? Must they differentiate between the actual “wearable” and the hardware or software that accompanies it? In this short contribution, we briefly analyze some of these questions. The article first examines what “wearables” are, and when they qualify as medical devices under current and future EU rules. It then addresses the relevance of the applicability of EU medical devices rules to these products. The application of these rules is often complex and highly fact-specific.
