AI in Claims Processing: What It Means for Bad Faith Litigation

Christopher Jones | Sands Anderson

Many insurance companies are moving quickly to incorporate advanced forms of AI into their processes.  Last year, 90% of insurers reported that they were evaluating whether and how to use generative AI, and more than half said they had already begun implementing it.  Claims handling is a primary focus of these efforts.

There is a large potential upside to involving AI in claims handling.  Foremost is the possibility of a dramatic reduction in claims processing times, which can lead to improvements in customer satisfaction.  This potential for increased productivity is paired with the potential for large-scale cost reductions.  A further possible efficiency is a heightened ability to detect fraud.

But do these improvements bring new risk?  Insurers have critical — if not always well-defined — responsibilities to their insureds, including the duty not to act in bad faith.  Insurance is one of the most heavily scrutinized, thoroughly regulated, and frequently litigated of all industries, and bad faith claims account for a large portion of this activity.  What does the incorporation of AI into the claims handling process mean for the future of bad faith claims?

1. AI in Claims Handling Does Not Lower an Insurer’s Bad Faith Obligations

One of the first and most important things to understand about using AI for claims handling is that it does not change the applicable bad faith standard.  In most states, bad faith claims turn on whether the insurer acted reasonably.  This question is answered by examining issues such as whether an insurer properly investigated a claim, provided clear communications, or avoided making arbitrary decisions.  An insurer can use AI as a tool for assisting with all these tasks, but the inquiry will remain the same: was the use of AI part of a reasonable claims handling process?

2. How AI-Driven Claims Processes Can Lead to Bad Faith Litigation

One of the most common allegations in a bad faith claim is that the insurer failed to conduct a reasonable investigation.  An overreliance on AI as a shortcut can make insurers more vulnerable to these claims.  For instance, automated triage or scoring can lead to early denials or low valuations based on patterns rather than the facts of individual claims.  Many adjusters may be overly deferential to such flawed AI outputs, even where the evidence and their own experience point to a different conclusion.
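
To make this risk concrete, below is a minimal sketch, in Python, of a triage guardrail that avoids the pattern described above.  The Claim record, the ai_score field, and the routing threshold are illustrative assumptions rather than a description of any actual system; the point is that a model score routes work to humans instead of producing denials on its own.

```python
# Minimal sketch of a triage guardrail: an AI score informs routing but
# never denies a claim on its own.  All names here (Claim, route_claim,
# the 0.9 threshold) are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Claim:
    claim_id: str
    ai_score: float      # model's estimate that the claim is payable (0 to 1)
    adjuster_notes: str  # claim-specific facts the model may not have seen


def route_claim(claim: Claim) -> str:
    """Route a claim using its AI triage score.

    A high score only accelerates handling, still subject to human
    sign-off; a low score sends the file to an adjuster for a full
    investigation rather than triggering an automated denial, so
    pattern-based output is always tested against individual facts.
    """
    if claim.ai_score >= 0.9:
        return "fast-track for payment, subject to adjuster sign-off"
    # A low score is a prompt for investigation, never a denial.
    return "assign to adjuster for claim-specific investigation"
```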

The scalability of AI processes also presents enormous risk to insurers.  The potential liability created through the flawed claims handling process of a single adjuster is limited to the files worked by that adjuster.  Errors inherent in an AI process rolled out across a company could create far greater exposure.

Bad faith litigation also frequently involves claims that an insurer failed to adequately explain its claim decision.  AI can increase this risk by producing decisions that are difficult to translate into human terms.  This creates two problems.  First, it leads to opaque claim denials that are likely to frustrate an insured and invite litigation.  Second, it makes it more difficult for the insurer to prevail in that litigation: if an adjuster cannot explain why a claim was denied or reconstruct how the decision was made, a court is unlikely to find that the insurer reached its decision reasonably.
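
One way to guard against this problem is to record, at the moment of decision, everything needed to reconstruct that decision later.  The sketch below shows one hypothetical shape such a record could take; the log_claim_decision helper and its field names are illustrative assumptions, not a reference to any actual claims platform.

```python
# Sketch of a decision log for AI-assisted claims handling.  The goal is
# reconstructability: each AI recommendation is stored with its inputs,
# its plain-language factors, and the human decision that followed, so an
# adjuster can later explain how a denial was reached.
import json
from datetime import datetime, timezone


def log_claim_decision(claim_id: str, model_version: str, inputs: dict,
                       factors: list[str], ai_recommendation: str,
                       human_decision: str, human_rationale: str) -> str:
    """Return a JSON audit record tying an AI recommendation to a human decision."""
    record = {
        "claim_id": claim_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,      # which model produced the output
        "inputs": inputs,                    # what the model actually saw
        "factors": factors,                  # plain-language reasons it surfaced
        "ai_recommendation": ai_recommendation,
        "human_decision": human_decision,    # the adjuster remains the decider
        "human_rationale": human_rationale,  # the explanation in human terms
    }
    return json.dumps(record, indent=2)


# Example: the kind of record an insurer could later point to in order to
# explain, in human terms, the basis for a denial.
print(log_claim_decision(
    claim_id="CLM-1042",
    model_version="triage-model-v3 (hypothetical)",
    inputs={"policy_type": "homeowners", "loss_date": "2024-05-01"},
    factors=["loss reported outside policy period"],
    ai_recommendation="deny",
    human_decision="deny",
    human_rationale="Policy lapsed before the date of loss; confirmed with underwriting.",
))
```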

3. Practical Steps Insurers Can Take to Use AI Without Increasing Bad Faith Risk

Insurers can mitigate these risks in the following ways:

  • Use good products and processes: implement only AI products that have been vetted and tested, and involve senior adjusters in that vetting to ensure the product is a good fit and to minimize problems with its adoption.
  • Keep humans in the loop: make sure that AI is a tool leveraged by humans, and not the other way around.  Humans should be the final decision makers on all claims and should review every communication denying a claim.
  • Perform audits: have seasoned adjusters audit all AI processes for compliance with claims handling policies and procedures.  These audits should be especially aggressive whenever a new AI product is integrated into a process; a sketch of one such sampling approach follows this list.
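
As a rough illustration of the audit step, the sketch below samples AI-assisted claim files for review by seasoned adjusters and raises the sampling rate during the window after a new AI product is rolled out.  The select_files_for_audit function, its rates, and the 90-day window are assumptions chosen for illustration, not recommended values.

```python
# Sketch of audit sampling for AI-assisted claim files.  Audits become
# more aggressive (a higher sampling rate) whenever a new AI product has
# recently been integrated into the claims process.
import random


def select_files_for_audit(file_ids: list[str],
                           days_since_new_ai_rollout: int,
                           base_rate: float = 0.05,
                           rollout_rate: float = 0.25,
                           rollout_window_days: int = 90) -> list[str]:
    """Pick a random sample of claim files for human compliance review."""
    recent_rollout = days_since_new_ai_rollout <= rollout_window_days
    rate = rollout_rate if recent_rollout else base_rate
    sample_size = max(1, round(len(file_ids) * rate))
    return random.sample(file_ids, k=min(sample_size, len(file_ids)))


# Example: shortly after a rollout, 25% of files are pulled for review.
files = [f"CLM-{n}" for n in range(1, 201)]
audit_batch = select_files_for_audit(files, days_since_new_ai_rollout=30)
print(f"{len(audit_batch)} of {len(files)} files selected for audit")
```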

Takeaway

AI has the potential to produce better outcomes for insurers and their insureds by increasing the speed, consistency, and accuracy of the claims handling process.  But these improvements do not come without risk.  Any automation — including from AI — can increase the likelihood and scale of mistakes, raising the chances that an insured will make a bad faith claim and that a court will decide that claim against the insurer.  Insurers can mitigate these risks and still leverage the benefits of AI by making sure that their claims handling process utilizes effective human oversight.

