AI Update: What Happens When a Computer Denies Your Insurance Coverage Claim?

Artificial intelligence is your new insurance claims agent. For years, insurance companies have used “InsurTech” AI to underwrite risk. But until recently, the use of AI in claims handling was only theoretical. No longer. The advent of AI claims handling creates new risks for policyholders, but it also creates new opportunities for resourceful policyholders to uncover bad faith and hold insurers to their side of the insurance contract.

Most readers are familiar with Lemonade, the InsurTech start-up that boasts a three-second AI claims review process. However, as noted in a Law360 article last year, Lemonade deferred any potential claim denials for human review, so the prospect of AI bad faith remains untested. But it is only a matter of time before insurers face pressure to use the available technology to deny claims as well.

So what happens when a claim is denied?

Ordinarily, a policyholder must prove that the claimed loss is covered and, on top of that, may assert bad faith. Unlike a routine breach of contract claim, a bad faith claim against an insurer is a tort claim based on the insurer’s alleged breach of the duty of good faith and fair dealing. A policyholder that prevails on a bad faith claim may be entitled to attorneys’ fees and punitive damages. Bad faith claims provide a counterweight to insurance companies’ information advantages, and can dramatically increase potential damages.

Discovery for Digital Decisionmakers

To prove bad faith, the policyholder usually collects documents and testimony from the responsible claims reviewer. Though the standard for reasonable AI claims handling is unsettled, policyholders will likely need to follow an equivalent process when the reviewer is a machine. InsurTech claims handling ranges in complexity, so policyholders will face varied challenges in marshaling evidence of bad faith.

A basic example is Strawn v. Farmers Insurance Company of Oregon (2013). In Strawn, the Oregon Supreme Court upheld a jury award that included $9 million in punitive damages to a class of policyholders challenging Farmers’ “cost containment software program.” The policyholders demonstrated that the program automatically rejected medical claims for costs above the 80th percentile, rather than reasonably assessing each claim. In cases like these, a policyholder can simply show that the computer faithfully applies what is, in essence, a systemic “bad faith” claims rejection rule.
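
To make the pattern concrete, here is a minimal sketch of the kind of mechanical cutoff rule at issue. Everything in it, from the function name to the percentile arithmetic and the sample figures, is an illustrative assumption; the actual Farmers program was never made public.

```python
# Hypothetical sketch of a percentile-cutoff denial rule of the kind
# described in Strawn. All names and numbers are illustrative assumptions,
# not the actual Farmers software.

def exceeds_cutoff(claimed_cost: float, peer_costs: list[float]) -> bool:
    """Deny any claim whose cost exceeds the 80th percentile of comparable
    claims, regardless of the claim's individual merits."""
    ranked = sorted(peer_costs)
    cutoff = ranked[int(0.8 * (len(ranked) - 1))]  # rough 80th percentile
    return claimed_cost > cutoff

# Every claim above the cutoff is rejected mechanically, with no
# individualized assessment: the hallmark of a systemic rule.
peer_costs = [200, 350, 400, 500, 800, 1200, 1500, 2000, 3000, 5000]
print(exceeds_cutoff(2500, peer_costs))  # True: automatic denial
```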

Discovery Challenges for Sophisticated AI

Strawn leaves many questions unanswered. The future role of AI is not applying simple formulas, but rather using neural networks to “learn” and reason in ways that their human creators may not fully understand. The challenge, then, becomes documenting and reconstructing the AI’s human-like reasoning process.

Policyholders should start by seeking the source code, software specification documents, and experts who can explain how the software was designed to work. For example, in the 2014 case Audatex North America, Inc. v. Mitchell International, Inc., the Southern District of California granted a plaintiff’s request to obtain source code, along with related discovery to help understand the code.

Creative policyholders will then need to devise ways to replicate the AI’s “learned” decision-making process. This might include seeking data on the outcomes of claims processed before the denial at issue, or testing hypothetical claims through the AI system, as sketched below. Depending on how sophisticated the user interface is, discovery may even involve posing inquiries to the AI about the insurer’s goals.
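
For instance, an expert might probe the system with synthetic claims that vary one attribute at a time, looking for sharp, formulaic cutoffs. The sketch below is a hedged illustration: the model interface, the claim fields, and the $10,000 toy threshold are all assumptions, not any real insurer’s system.

```python
# Hypothetical sketch of black-box probing during discovery: submit
# synthetic claims that differ in one attribute at a time and compare
# outcomes. The model callable and claim fields are assumptions.

def probe(model, base_claim: dict, attribute: str, values: list) -> dict:
    """Vary one attribute while holding the rest of the claim fixed,
    revealing whether that attribute alone drives approvals or denials."""
    results = {}
    for value in values:
        claim = {**base_claim, attribute: value}
        results[value] = model(claim)  # True = approved, False = denied
    return results

# Toy stand-in model that (improperly) keys only on claim amount.
toy_model = lambda claim: claim["amount"] < 10_000
base = {"amount": 5_000, "injury_type": "whiplash", "zip_code": "97201"}
print(probe(toy_model, base, "amount", [5_000, 9_999, 10_000, 50_000]))
# {5000: True, 9999: True, 10000: False, 50000: False}
# A sharp cutoff at $10,000 suggests a formulaic rule, not reasoned review.
```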

Opportunities for Policyholders

The flip side of that complexity is that bad faith discovery may encourage early cooperation from the insurer. With their technology on the line, insurers may have a heightened incentive to pay what is due or otherwise settle before discovery for several reasons:

  1. Proprietary Code: As AI claims processes grow more sophisticated, technology companies must guard their proprietary designs. An insurance company that gives up the underlying code for one claim exposes itself to potential liability to those technology companies.
  2. Confidentiality: AI technology is only as sophisticated as its data inputs, and the best way to “train” it is to feed it data from the insurer’s other claims. That creates a dilemma when the substance of those claims is confidential: litigating a single claim may require the insurer to reveal confidential information from many others.
  3. Systemic Bad Faith: As in Strawn, if the acquired code reveals systemic bad faith, an insurer risks dramatically increased liability, such as class action litigation. That exposure, along with the costly rollback of claims-processing infrastructure, would likely outweigh the cost of covering the single claim.

Because of this triple threat to the insurer’s bottom line, the prospect of discovery on a bad faith claim may help policyholders better protect themselves from insurer bad faith going forward. Policyholders should pay careful attention to their insurers and ask questions during underwriting about the claims handling process, with an eye to whether and how AI is used. And if a claim becomes likely, policyholders should carefully assess whether a possible bad faith claim and discovery into InsurTech reasoning provide opportunities to reach a good outcome.

Senate Reintroduces IoT Cybersecurity Improvement Act

On March 11, 2019, a bipartisan group of lawmakers including Sen. Mark Warner and Sen. Cory Gardner introduced the Internet of Things (IoT) Cybersecurity Improvement Act of 2019. The Act seeks “[t]o leverage Federal Government procurement power to encourage increased cybersecurity for Internet of Things devices.” In other words, this bill aims to shore up cybersecurity requirements for IoT devices purchased and used by the federal government, with the aim of improving the cybersecurity of IoT devices more broadly.

To accomplish this goal, the Act puts forth several action items for the Director of the National Institute of Standards and Technology (“NIST”) and the Office of Management and Budget (“OMB”). Details of these action items and their deadlines are discussed below.

  • NIST is directed to complete, by September 30, 2019, all ongoing efforts related to managing IoT cybersecurity, particularly its work in identifying cybersecurity capabilities for IoT devices. Under the bill, those NIST efforts are to address at least: (i) secure development, (ii) identity management, (iii) patching, and (iv) configuration management for IoT devices.
  • NIST is directed to develop, by March 31, 2020, recommendations on “the appropriate use and management” of IoT devices “owned or controlled by the Federal Government.” These recommendations are expected to include “minimum information security requirements” that address the cybersecurity risks of IoT devices owned or controlled by the federal government. Once these recommendations are issued, OMB will have 180 days to issue guidance to each agency, consistent with NIST’s recommendations.

Additionally, the bill would require NIST to do the following within 180 days of its enactment:

  • Publish a draft report addressing considerations for managing cybersecurity risks associated with the “increasing convergence of traditional Information Technology devices, networks, and systems with Internet of Things devices, networks, and systems and Operational Technology devices, networks and systems.”
  • Consult with cybersecurity researchers and private-industry experts to publish guidance relating to the reporting and resolution of security vulnerabilities discovered in federal government IoT devices.

OMB will then have 180 days to issue guidelines for each government agency, based on NIST’s recommendations. Those guidelines must be consistent with the information security requirements imposed on federal information systems under Title 44. OMB’s guidelines must also prohibit the acquisition or use of IoT devices from a contractor or vendor that fails to comply with NIST’s security vulnerability guidance.

Once OMB issues its guidance to agencies, these requirements will need to be incorporated into a revision of the Federal Acquisition Regulation (FAR), which governs all federal procurement of goods and services using appropriated funds. The current draft of the bill sets no deadline for promulgating these regulations.

Notably, the Act also recognizes the debate about what constitutes an “IoT device.” It would apply to a “covered device,” defined as a “physical object” that: (1) is capable of connecting to, and is in regular connection with, the internet; (2) has computer processing capabilities that can collect, send, or receive data; and (3) is not a general-purpose computing device, a category that includes personal computing systems, smart mobile communications devices, programmable logic controls, and mainframe computing systems. At the same time, the Act directs OMB to establish a process for interested parties to petition for a decision that a device is not covered by this definition, potentially giving device makers clarity about whether the measure reaches them.
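
For illustration, the definition can be read as a simple three-prong test, sketched below. The field names are assumptions made for readability; the statutory text, not this sketch, controls.

```python
# Rough encoding of the bill's three-prong "covered device" test.
# Field names are illustrative assumptions; the statutory text controls.
from dataclasses import dataclass

@dataclass
class Device:
    regularly_connects_to_internet: bool   # prong (1)
    collects_sends_or_receives_data: bool  # prong (2)
    is_general_purpose_computer: bool      # prong (3) exclusion, e.g. PCs

def is_covered_device(d: Device) -> bool:
    return (d.regularly_connects_to_internet
            and d.collects_sends_or_receives_data
            and not d.is_general_purpose_computer)

print(is_covered_device(Device(True, True, False)))  # smart sensor: True
print(is_covered_device(Device(True, True, True)))   # laptop: False
```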

This bill follows two failed bills from the last congressional term: the Internet of Things (IoT) Cybersecurity Improvement Act of 2017 and the Internet of Things (IoT) Federal Cybersecurity Improvement Act of 2018. The 2017 and 2018 Acts both focused on “provid[ing] minimal cybersecurity operational standards for Internet-connected devices purchased by Federal agencies.” The prior bills contained only limited guidance to NIST and instead focused on OMB. For example, the 2017 bill required OMB to provide guidelines on specific, enumerated contractual terms in vendor contracts for IoT devices. The 2018 bill directed OMB to consider “voluntary consensus standards” in its promulgation of guidelines on contractual terms.

The current bill also follows increasing efforts by NIST to focus on IoT cybersecurity. Its efforts include development of a “baseline” set of cybersecurity capabilities for IoT devices. NIST announced earlier this month that it is seeking feedback on its proposal, especially insights into identifying those cybersecurity capabilities that could be achieved across the widest set of IoT devices.

Net Neutrality Update: House Hearing and Proposed Legislation

Since the Federal Communications Commission (“FCC”) repealed the 2015 net neutrality rules last year, federal and state lawmakers have debated how to address net neutrality going forward. We previously discussed some of the state net neutrality laws that have been enacted, including California’s law, which is currently on hold pending the resolution of Mozilla Corp. v. FCC, the lawsuit challenging the FCC’s order repealing the net neutrality rules. Oral argument in that case was held before the U.S. Court of Appeals for the D.C. Circuit on February 1, 2019.


IoT Update: Covington Hosts First Webinar on Connected and Automated Vehicles

On February 27, 2019, Covington hosted its first webinar in a series on connected and automated vehicles (“CAVs”).  During the webinar, which is available here, Covington’s regulatory and public policy experts covered the current state of play in U.S. law and regulations relating to CAVs.  In particular, Covington’s experts focused on relevant developments in: (1) federal public policy; (2) federal regulatory agencies; (3) state public policy; (4) autonomous aviation; and (5) national security.

Highlights from each of these areas are presented below.


AI Update: U.S. House Resolution on AI Ethical Development Introduced

On February 27th, Reps. Brenda Lawrence (D-Mich.) and Ro Khanna (D-Calif.) introduced a resolution emphasizing the need to ethically develop artificial intelligence (“AI”). H. RES. 153, titled “Supporting the development of guidelines for ethical development of artificial intelligence,” calls on the government to work with stakeholders to ensure that AI is developed in a “safe, responsible, and democratic” fashion. The resolution has nine Democratic sponsors and was referred to the House Committee on Science, Space, and Technology.


IoT Update: Covington to Host Webinar on Connected and Automated Vehicles

One week from today, Covington will host its first webinar in a series on connected and automated vehicles (“CAVs”).  The webinar will take place on February 27 from 12 to 1 p.m. Eastern Time. During the webinar, Covington’s regulatory and legislative experts will cover developments in U.S. law and regulations relating to CAVs. Those topics include:

  • Federal regulation affecting CAVs, with a focus on the National Highway Traffic Safety Administration (“NHTSA”), the Federal Aviation Administration (“FAA”), the Federal Communications Commission (“FCC”), and the Committee on Foreign Investment in the United States (“CFIUS”) review.
  • Where Congress stands on CAV legislation, including the AV START Act, the SELF DRIVE Act, and infrastructure legislation.
  • State-level legislative, regulatory, and policy developments, including a closer look at California’s regulations.
  • Updates and trends specific to the autonomous aviation industry.
  • Foreign investment and export controls impacting CAVs.

Our speakers are:

  • Holly Fechner (Legislative/Public Policy, Former Senate Policy Director)
  • Brian Smith (Public Policy/Aviation, Former White House Counsel’s Office, Former Special Assistant, Department of Labor)
  • Sarah Wilson (Product Liability/Consumer Safety, Former Federal Claims Judge, Former White House Senior Counsel)
  • Jake Levine (State Regulation/Public Policy, Former Senior Counsel to CA State Senator Fran Pavley, Former White House Policy Advisor)
  • Jonathan Wakely (CFIUS/International Trade, Former CIA Political Analyst)

You can register for the webinar here.

Please check back here for details on our next webinar in this series: Leveraging AV Data in a Connected World.

This blog post is part of Covington’s CAV series, which covers developments across the globe.

Defense Department Releases Artificial Intelligence Strategy

(This article was originally published in Global Policy Watch.)

On February 12, 2019, the Department of Defense released a summary and supplementary fact sheet of its artificial intelligence strategy (“AI Strategy”). The AI Strategy has been a couple of years in the making, as the Trump administration has scrutinized the relative investments and advancements in artificial intelligence by the United States, its allies and partners, and potential strategic competitors such as China and Russia. The animating concern was articulated in the administration’s National Defense Strategy (“NDS”): strategic competitors such as China and Russia have made investments in technological modernization, including artificial intelligence, and in conventional military capability, eroding the U.S. military advantage and changing how we think about conventional deterrence. As the NDS observes, given “[t]he reemergence of long-term strategic competition” and the “rapid dispersion of technologies” such as “advanced computing, ‘big data’ analytics, [and] artificial intelligence,” mastering these technologies will be necessary to “ensure we will be able to fight and win the wars of the future.”

The AI Strategy offers that “[t]he United States, together with its allies and partners, must adopt AI to maintain its strategic position, prevail on future battlefields, and safeguard [a free and open international] order. We will also seek to develop and use AI technologies in ways that advance security, peace, and stability in the long run. We will lead in the responsible use and development of AI by articulating our vision and guiding principles for using AI in a lawful and ethical manner.”

DoD will implement the AI Strategy through five main lines of effort:

  • Delivering AI-enabled capabilities that address key missions
  • Scaling AI’s impact across DoD through a common foundation that enables decentralized development and experimentation
  • Cultivating a leading AI workforce
  • Engaging with commercial, academic, and international allies and partners
  • Leading in military ethics and AI safety

The AI Strategy emphasizes that “[f]ailure to adopt AI will result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms.”

The Joint Artificial Intelligence Center (“JAIC”), established in June 2018, is led by Lt. Gen. Jack Shanahan and reports to DoD Chief Information Officer Dana Deasy. The JAIC is designated as the principal implementer and integrator of the AI Strategy. Specifically, the JAIC will coordinate activities that align with DoD’s strategic approach, such as: (1) rapidly delivering AI-enabled capabilities; (2) establishing a common foundation for scaling AI’s impact across DoD; (3) facilitating AI planning, policy, governance, ethics, safety, cybersecurity, and multilateral coordination; and (4) attracting and cultivating world-class personnel.

The AI Strategy makes clear that DoD recognizes that “[t]he present moment is pivotal: we must act to protect our security and advance our competitiveness, seizing the initiative to lead the world in the development and adoption of transformative defense AI solutions that are safe, ethical, and secure. JAIC will spearhead this effort, engaging with the best minds in government, the private sector, academia, and international community. The speed and scale of the change required are daunting, but we must embrace change if we are to reap the benefits of continued security and prosperity for the future.” Accordingly, Lt. Gen. Shanahan and Mr. Deasy, speaking to a group of reporters, highlighted that DoD has recently invested $90 million in AI-related research and technology development, and that DoD will request additional resources for the JAIC in its fiscal year 2020 budget request to support its execution of the AI Strategy.

The DoD strategy comes on the heels of President Trump’s Executive Order (“EO”), “Maintaining American Leadership in Artificial Intelligence,” which launches a coordinated federal government strategy for artificial intelligence. The EO directs federal departments and agencies to invest the resources necessary to drive technological breakthroughs in AI (and outpace China’s developments in this area), lead the development of global technical standards, address workforce issues as industries adopt AI, foster trust in AI technologies, and promote U.S. research and innovation with allies and partners.

AI Update: President Trump Signs Executive Order on Artificial Intelligence

Today, President Trump signed an Executive Order (“EO”), “Maintaining American Leadership in Artificial Intelligence,” that launches a coordinated federal government strategy for Artificial Intelligence (the “AI Initiative”). Among other things, the AI Initiative aims to solidify American leadership in AI by empowering federal agencies to drive breakthroughs in AI research and development (“R&D”) (including by making data and computing resources available to the AI research community), to establish technological standards to support reliable and trustworthy systems that use AI, to provide guidance on regulatory approaches, and to address issues related to the AI workforce. The EO follows national AI strategies announced by at least 18 other countries, and signals that investment in artificial intelligence will continue to escalate in the near future, as will deliberations over how AI-based technologies should be governed.


IoT Update: Building Out the “Cutting Edge” for an Infrastructure Package

On Tuesday, President Donald Trump used his State of the Union address to reinforce the need for legislation to update the nation’s infrastructure. In the speech, he urged both parties to “unite for a great rebuilding of America’s crumbling infrastructure” and said that he is “eager to work” with Congress on the issue. Significantly, he said that any such measure should “deliver new and important infrastructure investment, including investments in the cutting-edge industries of the future.” He emphasized: “This is not an option. This is a necessity.”

President Trump’s push on infrastructure is particularly noteworthy because infrastructure remains popular with both parties, and the new House leadership has echoed the call for an infrastructure package.

While the State of the Union provided few details about the kinds of “cutting-edge industries” that could be the focus of a bipartisan infrastructure package, three key technologies are likely candidates: 5G wireless, connected and automated vehicles (“CAV”), and smart city technologies. A fact sheet on infrastructure released by the White House after the speech reiterated the call to “invest in visionary products” and emphasized the importance of “[m]astering new technologies” including 5G wireless. Such investments may not only improve “crumbling” infrastructure, but also spur the development of these technologies—and Congress is already holding a series of hearings devoted to identifying infrastructure needs.


NIST Seeks Comment on Security for IoT Sensor Networks

The National Institute of Standards and Technology (“NIST”) is seeking comments on its draft project on securing sensor networks for the Internet of Things (“IoT”). Organizations and individuals concerned with the security of IoT sensor networks are invited to comment on the draft through March 18, 2019.

Sensor networks are integral parts of many modern industries and critical infrastructure, including the electric grid, healthcare system, environmental protection, and manufacturing. These networks of small devices can detect, analyze, and transmit data, such as by monitoring and reacting to the physical characteristics around them—including temperature, pollution, humidity, and electrical usage. In the electric grid, for example, sensor networks may monitor and control the power generation of distributed resources, such as solar cells owned by consumers. Connected and automated vehicles are increasingly reliant on sensors deployed inside vehicles and in road infrastructure, which detect and communicate environmental features and hazards to the vehicle. Sensor networks are also increasingly used in medical devices, which can be programmed to monitor an individual’s health condition. They may also monitor properties of critical water supplies, including by detecting the presence of minerals or toxins. The accuracy, integrity, and availability of the data being reported and monitored by a sensor network can be critical.

While the NIST project focuses on sensor networks used for building management—for example, systems designed to open and close vents based on temperatures or to stop pulling air into a facility at a certain humidity threshold—NIST expects its work to be “applicable to multiple industry sectors.” According to NIST, the wireless sensor network market was valued at $573 million in 2016 and is projected to increase to at least $1.2 billion by 2023.

The 29-page project on which NIST seeks public comment focuses on the requirements to ensure sensor networks function securely. It identifies threats relevant to each component and technologies that can be used to help improve security. The document also maps the characteristics of such commercial technologies to the NIST Cybersecurity Framework.

NIST has identified four goals for the project:

  • Serve as a building block for sensor networks in general, future IoT projects, or specific sensor network use cases
  • Establish a security architecture to protect a building management system sensor network by using standards and best practices, including for the communications channel/network used to transmit sensor data to the back-end building control systems (hosts) for processing
  • Explore the cybersecurity controls to promote the reliability, integrity, and availability of building management system sensor networks
  • Exercise/test the cybersecurity controls of the building management system sensor network to verify that they mitigate the identified cybersecurity concerns/risks, and understand the performance implications of adding these controls to the building management system sensor network

Comments are sought through March 18, 2019. Organizations and individuals are invited to submit their comments online or via email.
