Artificial intelligence is your new insurance claims agent. For years, insurance companies have used “InsurTech” AI to underwrite risk. But until recently, the use of AI in claims handling was only theoretical. No longer. The advent of AI claims handling creates new risks for policyholders, but also new opportunities for resourceful ones to uncover bad faith and encourage insurers to live up to their side of the insurance contract.

Most readers are familiar with Lemonade, the InsurTech start-up that boasts a three-second AI claims review process. However, as noted in a Law360 article last year, Lemonade deferred any potential claim denials for human review, so the prospect of AI bad faith remains untested. But it is only a matter of time before insurers face pressure to use the available technology to deny claims as well.

So what happens when a claim is denied?

Ordinarily, a policyholder may, on top of proving that the claimed loss is covered, assert bad faith. Unlike routine breach of contract claims, a bad faith claim against an insurer is a tort claim based on the insurer’s alleged breach of the duty of good faith and fair dealing. A policyholder who prevails on a bad faith claim may be entitled to attorneys’ fees and punitive damages. Bad faith claims provide a counterweight to insurance companies’ information advantages, and can dramatically increase potential damages.

Discovery for Digital Decisionmakers

To prove bad faith, the policyholder usually collects documents and testimony from the responsible claims reviewer. Though the standard for reasonable AI claims handling is unsettled, policyholders will likely need to follow an equivalent process when the reviewer is a machine. InsurTech claims handling ranges in complexity, so policyholders will face varied challenges in marshaling evidence of bad faith.

A basic example is Strawn v. Farmers Insurance Company of Oregon (2013). In Strawn, the Oregon Supreme Court greenlit a jury award that included $9 million in punitive damages to a class of policyholders challenging Farmers’ “cost containment software program.” Policyholders demonstrated that the program automatically rejected medical claims for costs above the 80th percentile, rather than reasonably assessing each claim. In cases like these, a policyholder can simply show that the computer faithfully applies what is, in essence, a systemic “bad faith” claims rejection rule.
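By way of illustration only, a rule of the kind described in Strawn might look something like the following sketch (the function, figures, and cutoff below are hypothetical stand-ins, not Farmers’ actual software):

    import numpy as np

    def auto_review(claim_amount, historical_costs, percentile=80):
        """Hypothetical percentile-based rejection rule (illustration only).

        Denies any claim whose billed cost exceeds the chosen percentile
        of historical costs, regardless of the claim's individual merits.
        """
        cutoff = np.percentile(historical_costs, percentile)
        return "approve" if claim_amount <= cutoff else "deny"

    # Hypothetical historical billing data for a given procedure
    past_bills = [900, 1100, 1200, 1250, 1300, 1400, 1500, 1800, 2500, 4000]
    print(auto_review(1350, past_bills))  # approve: below the 80th percentile
    print(auto_review(2600, past_bills))  # deny: merits never assessed

Because such a rule is fully deterministic, once the code is produced in discovery, a policyholder can show precisely why every claim above the cutoff was denied without any individualized assessment.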

Discovery Challenges for Sophisticated AI

Strawn leaves many questions unanswered. The future role of AI is not to apply simple formulas, but to use neural networks that “learn” and reason in ways their human creators may not fully understand. The challenge, then, becomes assembling the equivalent documentation of the AI’s human-like reasoning process.

Policyholders should start by seeking the source code, software specification documents, and experts who can explain how the software was designed to work. For example, in the 2014 case Audatex North America Inc. v. Mitchell Intern., Inc., the Southern District of California granted a plaintiff’s request to obtain source code, along with related discovery to help understand that code.

Creative policyholders will then need to devise ways to replicate the AI’s “learned” decision-making process. This might include seeking data on the outcomes of claims processed before the denial at issue, or testing hypothetical claims through the AI system. Depending on how sophisticated the user interface is, discovery may even involve posing inquiries to the AI about the insurer’s goals.
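One such approach, assuming discovery yields some interface for querying the system, is black-box probing: submitting synthetic claims that vary one attribute at a time and recording the outcomes. The sketch below is purely illustrative; score_claim is a hypothetical stand-in for whatever access the insurer provides, and the claim fields are invented for the example:

    def probe_decision_boundary(score_claim, base_claim, field, values):
        """Vary one claim attribute at a time, holding the others fixed,
        and record the outcome the system returns for each variant."""
        results = []
        for value in values:
            claim = dict(base_claim, **{field: value})
            results.append((value, score_claim(claim)))
        return results

    # Stand-in for the insurer's system, used only to make the sketch
    # runnable; a real probe would query the actual AI instead.
    def score_claim(claim):
        return "deny" if claim["billed_amount"] > 1940 else "approve"

    base = {"billed_amount": 1000, "procedure": "MRI", "provider_zip": "97201"}
    for amount, outcome in probe_decision_boundary(
            score_claim, base, "billed_amount", range(500, 3001, 500)):
        print(f"${amount}: {outcome}")

If the recorded outcomes flip at a fixed dollar amount no matter how the other attributes vary, the probe has surfaced circumstantial evidence of a Strawn-style categorical rule hidden inside the “learned” model.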

Opportunities for Policyholders

The flip side of that complexity is that bad faith discovery may encourage early cooperation from the insurer. With their technology on the line, insurers may have a heightened incentive to pay what is due or otherwise settle before discovery for several reasons:

  1. Proprietary Code: As AI processes gain sophistication, technology companies must guard their proprietary designs. Insurance companies that give up the underlying code for one claim open themselves to threats of liability to those technology companies.
  2. Confidentiality: AI technology is only as sophisticated as its data inputs, and the best way to “train” it is to provide data inputs from the insurer’s other claims. This creates a conundrum when the substance of those claims is confidential: the insurer risks exposing that information in the course of litigating a single claim.
  3. Systemic Bad Faith: As in Strawn, if the acquired code reveals systemic bad faith, an insurer risks dramatically increased liability, such as class action litigation. That exposure would come on top of a costly rollback of claims-processing infrastructure and would likely outweigh the cost of covering the single claim.

Because of this triple threat to the insurer’s bottom line, the prospect of discovery on a bad faith claim may help policyholders better protect themselves from insurer bad faith going forward. Policyholders should pay careful attention to their insurers and ask questions during underwriting about the claims handling process, with an eye to whether and how AI is used. And if a claim becomes likely, policyholders should carefully assess whether a possible bad faith claim and discovery into InsurTech reasoning provide opportunities to reach a good outcome.