Katie Dwyer | Risk & Insurance
The use of artificial intelligence in insurance continues to grow. Here’s how the industry is using AI to optimize and augment its processes while keeping people at the center of the equation.
“Artificial intelligence” can sometimes summon images of robots run amok, or computers-turned-sentient-beings manipulating their users. It can stir up fears of everyday people losing jobs to technology. Or hopes of those jobs instead becoming easier.
AI is a powerful tool with the potential to streamline and speed up many functions, but it can also misinterpret information or mislead users if left unchecked. Today, AI is reshaping the way insurers do business – mostly for the better – but it comes with its own set of risks that the industry is carefully considering as it adopts and implements new tools.
Here’s how AI is actually being used in the insurance industry today, where it may go, and how insurers are managing the associated risks.
Data Collection, Aggregation and Summarization
Insurers handle massive amounts of data. AI has had the biggest impact as a tool to facilitate gathering, analyzing and summarizing that data to expedite both claims and underwriting processes.
“Carriers are using AI for intelligent document processing, which is rapidly replacing old ‘scan and index’ systems, with LLMs now handling OCR, indexing, entity extraction, and document summarization for both claims and underwriting,” said John Kain, head of worldwide financial services market development at Amazon Web Services.
“These LLMs capture documents and understand what’s in them – whether for claims processing or underwriting decisions. Today, multiple carriers use IDP for attending physician statements and claims document ingestion, which is cutting manual processing times from days to minutes and delivering massive ROI efficiency gains.”
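For illustration, the extraction step of such a pipeline can be reduced to a short sketch. Everything here is hypothetical: `call_llm` stands in for whatever hosted model endpoint a carrier actually uses, and the field names are invented for the example.

```python
import json

# Hypothetical stand-in for a hosted LLM endpoint; a real pipeline would
# call the provider's API here instead.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

EXTRACTION_PROMPT = (
    "Extract the following fields from the claims document below and return "
    "them as a JSON object with exactly these keys: claimant_name, "
    "date_of_loss, diagnosis, policy_number.\n\nDocument:\n{document}"
)

def extract_claim_fields(document_text: str) -> dict:
    raw = call_llm(EXTRACTION_PROMPT.format(document=document_text))
    fields = json.loads(raw)  # fail loudly if the model returns non-JSON
    # Keep only the requested keys so any hallucinated extras are dropped.
    expected = ("claimant_name", "date_of_loss", "diagnosis", "policy_number")
    return {key: fields.get(key) for key in expected}
```

Constraining the model to a fixed set of fields, and rejecting anything that isn't valid JSON, is a small example of the checks and balances discussed later in this piece.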
Ray Ash, executive vice president and head of financial lines with Westfield Specialty, described underwriters spending hours poring over boxes of submission documents in order to assess a new risk. Today, that process can be cut down from hours to minutes with the help of data scraping tools. Ash also described the potential for AI to help capture data from face-to-face roundtable conversations about risk.
“We would like for AI to help accelerate our process by summarizing our conversations about risks, using language recognition software to capture the salient points. But it may capture factually incorrect statements within those conversations, so there are limitations on the technology,” he said.
AI platforms can also pull data from third-party sources, providing underwriters with a more comprehensive view of a risk in a fraction of the time.
“With publicly traded companies, there is a lot of financial information that is publicly available. We would like to use AI to summarize and neatly package that information as part of the submission process,” Ash said.
Risk Stratification and Claim Triage
Going a step further, AI platforms can take the information they have summarized and make suggestions or recommendations. Underwriters, for example, could use AI to help categorize risks according to their company’s internal criteria and unique risk tolerance.
“Can we go a step further and teach an AI tool the metrics that we use to evaluate risk so that it can provide informed suggestions on a new submission? When you get bloodwork done for your doctor, the results show an acceptable range for each value. Can we teach acceptable ranges to an AI platform so it can help classify risks? If we could enable AI to reliably and consistently analyze such risks it would help underwriters improve the speed and accuracy of their decision making process,” Ash said.
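To make the bloodwork analogy concrete, here is a deliberately simple sketch of range-based classification. The metrics and their “acceptable ranges” are invented for the example; a real carrier would substitute its own underwriting criteria and risk appetite.

```python
# Invented metrics and ranges, analogous to the reference ranges on a
# bloodwork report; not any carrier's actual criteria.
ACCEPTABLE_RANGES = {
    "debt_to_equity": (0.0, 2.0),
    "five_year_loss_ratio": (0.0, 0.65),
    "years_in_business": (3, 200),
}

def classify_submission(metrics: dict) -> tuple[str, list[str]]:
    """Return a coarse risk class plus any metrics that fell out of range."""
    out_of_range = [
        name for name, (low, high) in ACCEPTABLE_RANGES.items()
        if name not in metrics or not low <= metrics[name] <= high
    ]
    if not out_of_range:
        return "within appetite", out_of_range
    if len(out_of_range) == 1:
        return "refer to underwriter", out_of_range
    return "outside appetite", out_of_range

# One metric out of range routes the submission to a human, not to a decline.
label, flags = classify_submission(
    {"debt_to_equity": 2.8, "five_year_loss_ratio": 0.5, "years_in_business": 12}
)
print(label, flags)  # refer to underwriter ['debt_to_equity']
```

Note that the borderline case is routed to an underwriter rather than decided automatically, consistent with the augment-not-replace theme running through the industry’s adoption of these tools.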
The same classification function can also streamline claims processing by identifying those claims that may require more attention from the claims professional. In the world of workers’ comp, for example, predictive models can flag claims with factors that make them more likely to go off the rails – such as injuries to certain body parts or the presence of comorbid conditions – which triggers more involvement from the claim handler or deploys additional resources to help keep the claim on track.
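A toy version of such a triage flag might look like the sketch below. The factors and weights are invented for illustration; real predictive models are trained on historical claims data.

```python
# Invented risk factors and weights, loosely modeled on the workers' comp
# example above; not any carrier's actual model.
HIGH_RISK_BODY_PARTS = {"back", "shoulder", "knee"}

def triage_score(claim: dict) -> float:
    score = 0.0
    if claim.get("body_part") in HIGH_RISK_BODY_PARTS:
        score += 0.4
    score += 0.2 * len(claim.get("comorbidities", []))
    if claim.get("days_to_report", 0) > 7:  # late reporting often predicts cost
        score += 0.3
    return score

def needs_escalation(claim: dict, threshold: float = 0.5) -> bool:
    # Over the threshold, the claim is routed to a senior adjuster or nurse
    # case manager; the decision itself stays with the claims professional.
    return triage_score(claim) >= threshold

print(needs_escalation({"body_part": "back", "comorbidities": ["diabetes"]}))  # True
```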
Importantly, these tools are meant to assist or augment underwriting and claims decisions, not make those decisions independently.
“We’re trying to use AI to augment and accelerate our underwriting process, but you can’t completely rely on the ‘answers’ that AI generates. It’s not Encyclopedia Britannica. It can help inform our learning but it’s not a true source,” Ash said.
Potential for Straight Through Processing
While there may be potential for an AI platform to handle end-to-end processing of very simple, clear-cut claims, that is more likely in personal lines. Straight-through processing is very unlikely to take hold in the world of complex commercial risk.
“It’s important to note that AI isn’t making claims and underwriting decisions – rather, it enables automation of the processes that lead to those decisions, which ultimately empowers insurers to more effectively speed help to those that need it most. For regulatory reasons, fully autonomous agents making final underwriting or claims decisions are not likely anytime soon, but reducing the time and effort required from underwriters and claims professionals in many scenarios is a realistic expectation,” Kain said.
The Risk of Bias and Discrimination
Leaning on AI to inform underwriting and claims decisions demands a certain level of trust that the data being fed into the platform is accurate and complete, and that the platform’s interpretation of that data is untinged by bias. But AI platforms are fallible. The risk of discrimination is one of the industry’s top concerns.
“Some of the issues being raised in this context include the use of data and if it is being used in a biased or discriminatory way. Who’s actually making a claim decision and for what purpose? What’s the role of technology vs human judgment? How is AI being applied? What factors could lead to an incomplete, inaccurate or unfair decision?” said Carolyn Rosenberg, partner, Reed Smith LLP.
“Large language models are only as good as the people who build them and the data they analyze,” Ash said. “One of our biggest concerns is keeping bias and hallucinations out. You need a robust set of checks and balances, because even one error is a big problem. For one thing, because we are such a heavily regulated industry, the financial penalties and reputational damage that could result if AI goes rogue are too great; it’s not worth the risk. And that would destroy the trust within a client relationship that likely took years to build.”
If inaccurate data is used or bias does creep into the algorithm, it can generate recommendations that lead to suboptimal or incorrect decisions by insurance professionals. Some policyholders have already brought lawsuits alleging unfair practices or bad faith decisions that ultimately were influenced by AI.
“The top liability for use of AI is lawsuits,” said Robert Tomilson, member, Clark Hill Law. “We have already seen dozens of lawsuits over the use of AI in claims handling. A review of those lawsuits shows that plaintiffs neither understand AI nor were in any way injured. But the lawsuits persist. Soon this will make its way to underwriting, pricing and denials of coverage. It is a huge waste of resources and will slow but not stop implementation.”
Ensuring that AI-generated recommendations remain unbiased calls for close monitoring of AI platforms and regular audits. There is also a growing focus on transparency. In most states, insurers are not obligated to disclose exactly how they are using AI, but disclosing the extent of its influence to clients who may be concerned about fairness in insurance decisions will be a best practice going forward.
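One concrete check such an audit might run is a simple outcome-disparity test, along the lines of the “four-fifths” rule of thumb used in adverse-impact analysis. The sketch below is illustrative only; a real fairness audit examines many metrics across the full decision pipeline.

```python
# Compare approval rates across groups; a min/max ratio below 0.8 is a
# common rule-of-thumb trigger for a closer look. The data here is invented.
def approval_rate(decisions: list[dict], group: str) -> float:
    members = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in members) / len(members) if members else 0.0

def adverse_impact_ratio(decisions: list[dict]) -> float:
    rates = [approval_rate(decisions, g) for g in {d["group"] for d in decisions}]
    return min(rates) / max(rates) if rates and max(rates) > 0 else 0.0

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
ratio = adverse_impact_ratio(decisions)
print(f"{ratio:.2f}", "review for bias" if ratio < 0.8 else "ok")  # 0.50 review for bias
```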
“The biggest focus is on transparency. Right now, any laws requiring the disclosure of AI use are state-specific, but I have to assume there will be more demands for transparency as AI continues to evolve. It’s the new paradigm. As the technology evolves, the legal and regulatory landscape will evolve with it,” said Anthony Crawford, partner, Reed Smith LLP.
To that end, the NAIC released its model bulletin on the use of artificial intelligence by insurers in December 2023, providing a template for states wishing to apply more regulatory oversight of the technology. Indeed, many states are looking to place some boundaries on the use of AI in insurance. As of August 5, 2025, 23 states and Washington, D.C., have adopted the model bulletin, while four states have implemented specific insurance regulations around AI.
In July 2024, the New York State Department of Financial Services issued Circular Letter No. 7, which sets standards for the use of AI in underwriting and pricing with the aim of ensuring fairness and transparency. It calls for insurers to establish internal governance frameworks for the continual oversight of AI systems and to develop clear and simple explanations of how AI factors into decision making.
“The framework established by the NY Circular is a very good starting point. It requires that insurance companies closely track their use, implementation, and results of AI. Keeping a good record, a daily journal even, is best practice,” Tomilson said.
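The “daily journal” Tomilson describes can be as simple as an append-only decision log. A minimal sketch follows, with invented file and field names:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(model_version: str, inputs: dict, output: str,
                    reviewer: str, path: str = "ai_decision_log.jsonl") -> None:
    """Append one JSON line per AI-assisted decision so auditors can later
    reconstruct what the model saw, what it suggested, and who signed off."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the suggestion
        "inputs": inputs,                # what the model was shown
        "ai_output": output,             # what the model recommended
        "human_reviewer": reviewer,      # who made the final call
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("triage-v3", {"claim_id": "C-1042"}, "escalate", "adjuster_17")
```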
Risk Mitigation Strategies
The input provided by AI should not necessarily change an insurer’s exposure to bad faith claims or the way it manages that risk.
Humans are susceptible to mistakes and bias just as any AI platform could be, and errors will occur regardless of whether a person or a computer is responsible. The system of checks and balances that any insurer or claims organization would already have in place should apply across the board, regardless of how certain recommendations or decisions were made.
“It’s very important to have the right team monitoring the use of AI for various reasons, such as to ensure there’s no introduction of bias, and that the tool is performing the function it was meant to address. Things can go sideways without close monitoring, and you could very quickly lose credibility and the trust of your customer base, which is very difficult to rebuild,” said Nora Benanti, SVP, head of claims analytics at Arch Capital Group.
Ash described how, before any quote is delivered to a potential client, an underwriter must explain their reasoning and get approval from a more senior manager. Because large commercial risks are often complex, final decisions are collaborative and must be vetted. That element would not go away even as AI platforms ingest more data and get “smarter.”
“Insurers have managed AI risks for many years and have carefully established robust governance and risk frameworks to do so. As with all new technologies and innovations, good governance is key: creating robust governance frameworks, ensuring deployment complies with relevant regulation, adopting in alignment with each organization’s vendor risk appetite, and, for regulators, identifying and mitigating any systemic risks. In the same way as they created governance and risk frameworks for traditional AI, insurers are extending this to address unique generative AI risks,” Kain said.
Human-Centricity and the Element of Trust
Where will AI go in the future? Given its risks and limitations, how will the technology continue to evolve in the insurance industry? Most hope it will simply become better and more reliable at its current functions: collecting and analyzing data.
The better AI gets at handling huge amounts of data, and the more accurate its summaries and resulting recommendations become, the more manual work moves off the desks of claims professionals and underwriters. That added speed and accuracy frees those professionals to spend more time on higher-value activities, like engaging directly with clients, building relationships and handling more cases than they could in the past.
While increased efficiency and productivity are the primary advantages offered by AI, they can also lead to better customer service and, ultimately, higher rates of client retention.
Benanti pointed out that the experience created for clients, who never interact directly with the insurer’s AI platform, is one of the most important factors to consider when implementing AI tools. Mistakes or misuse of the technology damage credibility and erode client trust. Relationships remain at the core of the insurance business.
“Trust is critical to our business. The type of insurance we write is very sophisticated and directly impacts the executive team and the board of directors. That pool of individuals doesn’t want to trust an online solution. They want to work with expert brokers and underwriters who have demonstrated a deep understanding of their business. In our world of complex insurance risks, AI cannot eliminate humans in the underwriting process,” Ash said.
The same is true in claims. The claims experience is where insurers essentially deliver on the promise of the policy, and it is the experience by which clients will judge their carrier and determine whether they should stick with that carrier or go elsewhere. Streamlining the claim professional’s workflow can not only reduce the risk of error, but also allow for more interaction with clients. Faster and more frequent communication engenders trust, even if the claim result is not what the client hoped for.
“AI has the potential to augment a claims professional’s processes through the lifecycle of a claim,” Benanti said.
“It can allow the claims professional to spend time on the more nuanced or complex parts of the claim process. I see AI being used to support or accelerate human decision-making, not replace it, especially with more complex claims. I think future applications of AI will remain human-centric and there will always be a need for oversight.”
While AI may become a better assistant to insurance professionals, it can’t replace the nuanced knowledge that comes with years of experience in the business, and it can’t replicate the soft skills that are necessary to build and maintain relationships.
However, the ability to offload decision-making onto an AI platform will be tempting. Excitement over new tools can outstrip their actual capabilities.
“As AI evolves, the potential for humans to become overly dependent on AI recommendations must be carefully managed, with guardrails in place to mitigate over-reliance and automation bias, where humans fail to catch errors or use AI in place of their own critical thinking skills,” Kain said.