2 Sources
[1]
Insurers retreat from AI cover as risk of multibillion-dollar claims mounts
Major insurers are seeking to exclude artificial intelligence risks from corporate policies, as companies face multibillion-dollar claims that could emerge from the fast-developing technology. AIG, Great American and WR Berkley are among the groups that have recently sought permission from US regulators to offer policies excluding liabilities tied to businesses deploying AI tools, including chatbots and agents.

The insurance industry's reticence to provide comprehensive cover comes as companies have rushed to adopt the cutting-edge technology. This has already led to embarrassing and costly mistakes when models "hallucinate", or make things up. One exclusion WR Berkley proposed would bar claims involving "any actual or alleged use" of AI, including any product or service sold by a company "incorporating" the technology.

In response to a request from the Illinois insurance regulator about the exclusions, AIG said in a filing that generative AI was a "wide-ranging technology" and the possibility of events leading to future claims will "likely increase over time". AIG told the Financial Times that, although it had filed generative AI exclusions, it "has no plans to implement them at this time". Having approval for the exclusions would give the company the option to implement them later. WR Berkley and Great American declined to comment.

Insurers increasingly view AI models' outputs as too unpredictable and opaque to insure, said Dennis Bertram, head of cyber insurance for Europe at Mosaic. "It's too much of a black box." Even Mosaic, a speciality insurer in the Lloyd's of London marketplace which offers cover for some AI-enhanced software, has declined to underwrite risks from large language models such as ChatGPT.

"Nobody knows who's liable if things go wrong," said Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, an AI insurance and auditing start-up. These moves come amid a growing number of high-profile AI-led mistakes.
Wolf River Electric, a solar company, sued Google for defamation and sought at least $110mn in damages, after claiming its AI Overview feature falsely stated the company was being sued by Minnesota's attorney-general. Meanwhile, a tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up. Last year, UK engineering group Arup lost HK$200mn (US$25mn) after fraudsters used a digitally cloned version of a senior manager to order financial transfers during a video conference.

Aon's head of cyber, Kevin Kalinich, said the insurance industry can afford to pay a $400mn or $500mn loss to one company that deployed agentic AI that delivered incorrect pricing or medical diagnoses. "What they can't afford is if an AI provider makes a mistake that ends up as a 1,000 or 10,000 losses -- a systemic, correlated, aggregated risk," he added.

AI hallucinations typically fall outside standard cyber cover, which is triggered by security or privacy breaches. So-called tech "errors and omissions" policies are more likely to cover AI mistakes, but new carve-outs could narrow the scope of the coverage offered.

Ericson Chan, chief information officer of Zurich Insurance, said when insurers evaluated other tech-driven errors, they could "easily identify the responsibility". By contrast, AI risk potentially involves many different parties, including developers, model builders and end users. As a result, the potential market impact of AI-driven risks "could be exponential", he said.

Some insurers have moved to clarify legal uncertainty with so-called "endorsements" -- amendments to a policy -- covering AI-related risk. But brokers warn these require close scrutiny because in certain cases they have resulted in less cover. One endorsement by insurer QBE extended some cover for fines and other penalties under the EU's AI Act, considered the world's strictest regime regulating the development of the technology.
But the endorsement, which other insurers have since mirrored, limited the payout for fines stemming from the use of AI to 2.5 per cent of the total policy limit, according to a large broker. QBE told the Financial Times it was "addressing the potential gap [in AI-related risk] that may not be covered by other insurance policies".

In broker negotiations, Zurich-based Chubb has agreed to terms that would cover some AI risks, but has excluded "widespread" AI incidents, such as a problem with a model that would affect many clients at once. Chubb declined to comment. Meanwhile, others have introduced add-ons covering narrowly defined AI risks -- for instance, a chatbot going haywire.

Insurance brokers and lawyers said they feared insurers would start fighting claims in court when AI-driven losses significantly increase. Aaron Le Marquer, head of the insurance disputes team at law firm Stewarts, said: "It will probably take a big systemic event for insurers to say, hang on, we never meant to cover this type of event."
[2]
Insurers Uneasy About Covering Corporate AI Risks | PYMNTS.com
Companies such as AIG, Great American and WR Berkley have recently asked U.S. regulators for leave to offer policies that exclude liabilities related to companies using AI tools like agents and chatbots, the Financial Times (FT) reported Sunday (Nov. 23). This is happening amid a rush by businesses to adopt AI, leading to some costly errors related to "hallucinations" by the technology.

According to the report, WR Berkley wants to block claims involving "any actual or alleged use" of AI, including products or services sold by companies "incorporating" the technology. And in a filing with the Illinois insurance regulator, AIG said generative AI was a "wide-ranging technology" and the possibility of events triggering future claims will "likely increase over time." The company told the FT that, although it had filed generative AI exclusions, it "has no plans to implement them at this time."

"It's too much of a black box," said Dennis Bertram, head of cyber insurance for Europe at specialty insurer Mosaic, with the report noting that Mosaic covers some AI-enhanced software but has declined to underwrite risks from large language models (LLMs) like OpenAI's ChatGPT. "Nobody knows who's liable if things go wrong," said Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, an AI insurance and auditing startup.

As PYMNTS has written, the consequences of a company following through on hallucinated information can be severe, leading to flawed decisions, financial losses and reputational harm. There are also tough questions related to accountability when AI systems are concerned. "If you remove a human from a process or if the human places its responsibility on the AI, who is going to be accountable or liable for the mistakes?"
Kelwin Fernandes, CEO of NILG.AI, a company specializing in AI solutions, asked in an interview with PYMNTS earlier this year. In many instances, it's the business using the chatbot that takes the blame. For example, Virgin Money had to issue an apology earlier this year when its chatbot chastised a customer for using the word "virgin." And Air Canada found itself in court last year when its chatbot fabricated a discount in a conversation with a prospective passenger.
Leading insurance companies including AIG, Great American, and WR Berkley are seeking regulatory approval to exclude AI-related liabilities from corporate policies, citing unpredictable risks from AI hallucinations and the potential for systemic losses reaching billions of dollars.
Major insurance companies are increasingly seeking to exclude artificial intelligence risks from their corporate policies, as the rapid adoption of AI technology creates potential exposure to multibillion-dollar claims. AIG, Great American, and WR Berkley are among the prominent insurers that have recently requested permission from US regulators to offer policies that exclude liabilities tied to businesses deploying AI tools, including chatbots and automated agents [1].
Source: Financial Times News
The insurance industry's reluctance to provide comprehensive AI coverage comes as companies have rushed to adopt cutting-edge technology, leading to embarrassing and costly mistakes when AI models "hallucinate" or generate false information. WR Berkley has proposed an exclusion that would bar claims involving "any actual or alleged use" of AI, including any product or service sold by a company "incorporating" the technology [1].

In response to regulatory inquiries, AIG described generative AI as a "wide-ranging technology" and warned that the possibility of events leading to future claims will "likely increase over time." However, the company clarified that while it has filed for generative AI exclusions, it "has no plans to implement them at this time," suggesting the filings are precautionary measures to maintain future flexibility [1].

Insurers increasingly view AI models' outputs as too unpredictable and opaque to insure effectively. Dennis Bertram, head of cyber insurance for Europe at Mosaic, characterized the challenge succinctly: "It's too much of a black box." Even specialty insurers like Mosaic, which operates in the Lloyd's of London marketplace and offers coverage for some AI-enhanced software, have declined to underwrite risks from large language models such as ChatGPT [1].

Several costly AI-related incidents have highlighted the potential financial exposure facing insurers. Wolf River Electric, a solar company, sued Google for defamation, seeking at least $110 million in damages after claiming Google's AI Overview feature falsely stated the company was being sued by Minnesota's attorney-general. In another case, a tribunal ordered Air Canada to honor a discount that its customer service chatbot had fabricated during a customer interaction [1].

Perhaps most dramatically, UK engineering group Arup lost HK$200 million (US$25 million) after fraudsters used a digitally cloned version of a senior manager to authorize financial transfers during a video conference, demonstrating the sophisticated nature of AI-enabled fraud [1].
Kevin Kalinich, Aon's head of cyber insurance, explained that while the industry could absorb individual losses of $400-500 million from a single company's AI deployment errors, the real concern lies in systemic risks. "What they can't afford is if an AI provider makes a mistake that ends up as a 1,000 or 10,000 losses -- a systemic, correlated, aggregated risk," he noted [1].

The complexity of AI liability chains compounds these concerns. Unlike traditional technology errors, where responsibility can be easily identified, AI risks potentially involve multiple parties, including developers, model builders, and end users. Ericson Chan, chief information officer of Zurich Insurance, warned that the potential market impact of AI-driven risks "could be exponential" [1].

Some insurers have attempted to address the coverage gap through policy endorsements and specialized add-ons. QBE extended some coverage for fines under the EU's AI Act, though with significant limitations, capping AI-related fine coverage at just 2.5 percent of the total policy limit. Other insurers have since adopted similarly restrictive approaches [1].

Zurich-based Chubb has agreed to cover some AI risks in broker negotiations but has excluded "widespread" AI incidents that could impact multiple clients simultaneously. Meanwhile, other insurers have introduced narrow add-ons covering specific scenarios, such as chatbot malfunctions [1].
Summarized by Navi