AI Update: EU High-Level Expert Group Publishes Ethics Guidelines for Trustworthy AI

On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as the interaction with existing legislation and how to reconcile tensions between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a particular focus on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

ICO opens beta phase of privacy “regulatory sandbox”

On 29 March 2019, the ICO opened the beta phase of the “regulatory sandbox” scheme (the “Sandbox”), which is a new service designed to support organizations that are developing innovative and beneficial projects that use personal data.  The application process for participating in the Sandbox is now open, and applications must be submitted to the ICO by noon on Friday 24 May 2019. The ICO has published on its website a Guide to the Sandbox, which explains the scheme in detail.

The purpose of the Sandbox is to support organizations that are developing innovative products and services using personal data, and to develop a shared understanding of what compliance looks like in particular innovative areas.  Organizations participating in the Sandbox are likely to benefit from the opportunity to liaise directly with the regulator on innovative projects that raise complex data protection issues.  The Sandbox will also be an opportunity for market leaders in innovative technologies to influence the ICO’s approach to use cases that present challenging aspects of data protection compliance, or where compliance expectations are uncertain.

IoT Update: How Smart Cities and Connected Cars May Benefit from Each Other

Innovative leaders worldwide are investing in technologies to transform their cities into smart cities—environments in which data collection and analysis are used to manage assets and resources efficiently.  Smart city technologies can improve safety, manage traffic and transportation systems, and save energy, as we discussed in a previous post.  One important aspect of a successful smart city will be ensuring that infrastructure is in place to support new technologies.  Federal investment in infrastructure may accordingly benefit both smart cities and smart transportation, as explained in another post on connected and autonomous vehicles (“CAVs”).

Given the growing presence of CAVs in the U.S., and the legislative efforts surrounding them, CAVs are likely to play an important role in the future of smart cities.  This post explores how cities are already using smart transportation technologies and how CAV technologies fit into this landscape.  It also addresses the legal issues and practical challenges involved in developing smart transportation systems.  As CAVs and smart cities continue to develop, each technology can leverage the other’s advances and encourage the other’s deployment.

AI Update: What Happens When a Computer Denies Your Insurance Coverage Claim?

Artificial intelligence is your new insurance claims agent. For years, insurance companies have used “InsurTech” AI to underwrite risk. But until recently, the use of AI in claims handling was only theoretical. No longer. The advent of AI claims handling creates new risks for policyholders, but it also creates new opportunities for resourceful policyholders to uncover bad faith and encourage insurers to live up to their side of the insurance contract.

Most readers are familiar with Lemonade, the InsurTech start-up that boasts a three-second AI claims review process. However, as noted in a Law360 article last year, Lemonade deferred any potential claim denials for human review, so the prospect of AI bad faith is still untested.  Now it is only a matter of time before insurers face pressure to use the available technology to deny claims as well.

So what happens when a claim is denied?

Senate Reintroduces IoT Cybersecurity Improvement Act

On March 11, 2019, a bipartisan group of lawmakers including Sen. Mark Warner and Sen. Cory Gardner introduced the Internet of Things (IoT) Cybersecurity Improvement Act of 2019. The Act seeks “[t]o leverage Federal Government procurement power to encourage increased cybersecurity for Internet of Things devices.” In other words, the bill aims to shore up cybersecurity requirements for IoT devices purchased and used by the federal government, with the goal of improving IoT cybersecurity more broadly.

Net Neutrality Update: House Hearing and Proposed Legislation

Since the Federal Communications Commission (“FCC”) repealed the 2015 net neutrality rules last year, federal and state lawmakers have debated how to address the issue of net neutrality going forward.  We have previously discussed some of the state net neutrality laws that were enacted, including California’s law, which is currently on hold pending the resolution of Mozilla Corp. v. FCC, the lawsuit challenging the FCC’s order that repealed the net neutrality rules.  Oral argument in the case was held in the U.S. Court of Appeals for the D.C. Circuit on February 1, 2019.

IoT Update: Covington Hosts First Webinar on Connected and Automated Vehicles

On February 27, 2019, Covington hosted its first webinar in a series on connected and automated vehicles (“CAVs”).  During the webinar, which is available here, Covington’s regulatory and public policy experts covered the current state of play in U.S. law and regulations relating to CAVs.  In particular, Covington’s experts focused on relevant developments in: (1) federal public policy; (2) federal regulatory agencies; (3) state public policy; (4) autonomous aviation; and (5) national security.

Highlights from each of these areas are presented below.

AI Update: U.S. House Resolution on AI Ethical Development Introduced

On February 27, 2019, Reps. Brenda Lawrence (D-Mich.) and Ro Khanna (D-Calif.) introduced a resolution emphasizing the need to ethically develop artificial intelligence (“AI”). H. Res. 153, titled “Supporting the development of guidelines for ethical development of artificial intelligence,” calls on the government to work with stakeholders to ensure that AI is developed in a “safe, responsible, and democratic” fashion. The resolution has nine Democratic sponsors and was referred to the House Committee on Science, Space, and Technology.

IoT Update: Covington to Host Webinar on Connected and Automated Vehicles

One week from today, Covington will host its first webinar in a series on connected and automated vehicles (“CAVs”).  The webinar will take place on February 27 from 12 to 1 p.m. Eastern Time. During the webinar, Covington’s regulatory and legislative experts will cover developments in U.S. law and regulations relating to CAVs. Those topics include:

  • Federal regulation affecting CAVs, with a focus on the National Highway Traffic Safety Administration (“NHTSA”), the Federal Aviation Administration (“FAA”), the Federal Communications Commission (“FCC”), and review by the Committee on Foreign Investment in the United States (“CFIUS”).
  • Where Congress stands on CAV legislation, including the AV START Act, the SELF DRIVE Act, and infrastructure legislation.
  • State-level legislative, regulatory, and policy developments, including a closer look at California’s regulations.
  • Updates and trends specific to the autonomous aviation industry.
  • Foreign investment and export controls impacting CAVs.

Defense Department Releases Artificial Intelligence Strategy

(This article was originally published in Global Policy Watch.)

On February 12, 2019, the Department of Defense released a summary and supplementary fact sheet of its artificial intelligence strategy (“AI Strategy”). The AI Strategy has been a couple of years in the making, as the Trump administration has scrutinized the relative investments and advancements in artificial intelligence by the United States, its allies and partners, and potential strategic competitors such as China and Russia. The animating concern was articulated in the Trump administration’s National Defense Strategy (“NDS”): strategic competitors such as China and Russia have made investments in technological modernization, including artificial intelligence, and in conventional military capabilities, and those investments are eroding U.S. military advantage and changing how we think about conventional deterrence. As the NDS states, given “[t]he reemergence of long-term strategic competition” and the “rapid dispersion of technologies” such as “advanced computing, ‘big data’ analytics, [and] artificial intelligence,” continued investment in these areas will be necessary to “ensure we will be able to fight and win the wars of the future.”

The AI Strategy offers that “[t]he United States, together with its allies and partners, must adopt AI to maintain its strategic position, prevail on future battlefields, and safeguard [a free and open international] order. We will also seek to develop and use AI technologies in ways that advance security, peace, and stability in the long run. We will lead in the responsible use and development of AI by articulating our vision and guiding principles for using AI in a lawful and ethical manner.”

DoD will implement the AI Strategy through five main lines of effort:

  • Delivering AI-enabled capabilities that address key missions
  • Scaling AI’s impact across DoD through a common foundation that enables decentralized development and experimentation
  • Cultivating a leading AI workforce
  • Engaging with commercial, academic, and international allies and partners
  • Leading in military ethics and AI safety

The AI Strategy emphasizes that “[f]ailure to adopt AI will result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms.”

The Joint Artificial Intelligence Center (“JAIC”), established in June 2018, is led by Lt. Gen. Jack Shanahan and reports to DoD Chief Information Officer Dana Deasy.  It is designated as the principal implementer and integrator of the AI Strategy. Specifically, the JAIC will coordinate activities that align with DoD’s strategic approach, such as: (1) rapidly delivering AI-enabled capabilities; (2) establishing a common foundation for scaling AI’s impact across DoD; (3) facilitating AI planning, policy, governance, ethics, safety, cybersecurity, and multilateral coordination; and (4) attracting and cultivating world-class personnel.

The AI Strategy makes clear that DoD recognizes that “[t]he present moment is pivotal: we must act to protect our security and advance our competitiveness, seizing the initiative to lead the world in the development and adoption of transformative defense AI solutions that are safe, ethical, and secure. JAIC will spearhead this effort, engaging with the best minds in government, the private sector, academia, and [the] international community. The speed and scale of the change required are daunting, but we must embrace change if we are to reap the benefits of continued security and prosperity for the future.” Accordingly, Lt. Gen. Shanahan and Mr. Deasy, speaking to a group of reporters, highlighted that DoD has recently invested $90 million in AI-related research and technology development, and that DoD will request additional resources for the JAIC in its fiscal year 2020 budget request to support its execution of the AI Strategy.

The DoD strategy comes on the heels of President Trump’s Executive Order (“EO”), “Maintaining American Leadership in Artificial Intelligence,” which launches a coordinated federal government strategy for artificial intelligence. The EO directs federal departments and agencies to invest the resources necessary to drive technological breakthroughs in AI (and outpace China’s developments in this area), lead the development of global technical standards, address workforce issues as industries adopt AI, foster trust in AI technologies, and promote U.S. research and innovation with allies and partners.
