4 Sources
[1]
Major insurers move to ring-fence AI liability as multi-billion dollar risks emerge -- Recent public incidents have led to costly repercussions
Major insurers seek permission to exclude AI-related claims from corporate policies. Major insurers are moving to ring-fence their exposure to artificial intelligence failures, after a run of costly and highly public incidents pushed concerns about systemic, correlated losses to the top of the industry's risk models. According to the Financial Times, AIG, WR Berkley, and Great American have each sought regulatory clearance for new policy exclusions that would allow them to deny claims tied to the use or integration of AI systems, including chatbots and agents. The requests arrive at a time when companies across virtually all sectors have accelerated adoption of generative tools.

That shift has already produced expensive errors. Google is facing a $110 million defamation suit after its AI Overview feature incorrectly claimed a solar company was being sued by a state attorney-general. Meanwhile, Air Canada was ordered to honor a discount invented by its customer-service chatbot, and UK engineering firm Arup lost £20 million after staff were duped by a digitally cloned executive during a video-call scam.

Those incidents have made it harder for insurers to quantify where liability begins and ends. Mosaic Insurance told the FT that outputs from large language models remain too unpredictable for traditional underwriting, describing them as "a black box." Even Mosaic, which markets specialist cover for AI-enhanced software, has declined to underwrite risks from LLMs like ChatGPT. Against that backdrop, a potential WR Berkley exclusion would bar claims tied to "any actual or alleged use" of AI, even if the technology forms only a minor part of a product or workflow. AIG told regulators it had "no plans to implement" its proposed exclusions immediately, but wants the option available as the frequency and scale of claims increase.
At issue is not only the severity of individual losses but the threat of widespread, simultaneous damage triggered by a single underlying model or vendor. Kevin Kalinich, Aon's head of cyber, told the paper that the industry could absorb a $400 million or $500 million hit from a misfiring agent used by one company. What it cannot absorb, he says, is an upstream failure that produces a thousand losses at once, which he described as a "systemic, correlated, aggregated risk." Some carriers have moved toward partial clarity through policy endorsements. QBE introduced one extending limited coverage for fines under the EU AI Act, capped at 2.5% of the insured limit. Chubb has agreed to cover certain AI-related incidents while excluding "widespread" events capable of hitting many clients simultaneously. Brokers say these endorsements must be read closely, as some reduce protection while appearing to offer new guarantees. As regulators and insurers reshape their positions, businesses may find that the risk of deploying AI now sits more heavily on their own balance sheets than they expected.
[2]
Insurers retreat from AI cover as risk of multibillion-dollar claims mounts
Major insurers are seeking to exclude artificial intelligence risks from corporate policies, as companies face multibillion-dollar claims that could emerge from the fast-developing technology. AIG, Great American and WR Berkley are among the groups that have recently sought permission from US regulators to offer policies excluding liabilities tied to businesses deploying AI tools including chatbots and agents. The insurance industry's reticence to provide comprehensive cover comes as companies have rushed to adopt the cutting-edge technology. This has already led to embarrassing and costly mistakes when models "hallucinate" or make things up. One exclusion WR Berkley proposed would bar claims involving "any actual or alleged use" of AI, including any product or service sold by a company "incorporating" the technology. In response to a request from the Illinois insurance regulator about the exclusions, AIG said in a filing that generative AI was a "wide-ranging technology" and the possibility of events leading to future claims will "likely increase over time". AIG told the Financial Times that, although it had filed generative AI exclusions, it "has no plans to implement them at this time". Having approval for the exclusions would give the company the option to implement them later. WR Berkley and Great American declined to comment. Insurers increasingly view AI models' outputs as too unpredictable and opaque to insure, said Dennis Bertram, head of cyber insurance for Europe at Mosaic. "It's too much of a black box." Even Mosaic, a speciality insurer in the Lloyd's of London marketplace which offers cover for some AI-enhanced software, has declined to underwrite risks from large language models such as ChatGPT. "Nobody knows who's liable if things go wrong," said Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, an AI insurance and auditing start-up. These moves come amid a growing number of high-profile AI-led mistakes.
Wolf River Electric, a solar company, sued Google for defamation and sought at least $110mn in damages, after claiming its AI Overview feature falsely stated the company was being sued by Minnesota's attorney-general. Meanwhile, a tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up. Last year, UK engineering group Arup lost HK$200mn (US$25mn) after fraudsters used a digitally cloned version of a senior manager to order financial transfers during a video conference. Aon's head of cyber Kevin Kalinich said the insurance industry can afford to pay a $400mn or $500mn loss to one company that deployed agentic AI that delivered incorrect pricing or medical diagnoses. "What they can't afford is if an AI provider makes a mistake that ends up as a 1,000 or 10,000 losses -- a systemic, correlated, aggregated risk," he added. AI hallucinations typically fall outside standard cyber cover, which is triggered by security or privacy breaches. So-called tech "errors and omissions" policies are more likely to cover AI mistakes, but new carve-outs could narrow the scope of the coverage offered. Ericson Chan, chief information officer of Zurich Insurance, said when insurers evaluated other tech-driven errors, they could "easily identify the responsibility". By contrast, AI risk potentially involves many different parties, including developers, model builders and end users. As a result, the potential market impact of AI-driven risks "could be exponential", he said. Some insurers have moved to clarify legal uncertainty with so-called "endorsements" -- an amendment to a policy -- of AI-related risk. But brokers warn these require close scrutiny because in certain cases this has resulted in less cover. One endorsement by insurer QBE extended some cover for fines and other penalties under the EU's AI Act, considered the world's strictest regime regulating the development of the technology. 
But the endorsement, which other insurers have since mirrored, limited the payout for fines stemming from the use of AI to 2.5 per cent of the total policy limit, according to a large broker. QBE told the Financial Times it was "addressing the potential gap [in AI-related risk] that may not be covered by other insurance policies". In broker negotiations, Zurich-based Chubb has agreed to terms that would cover some AI risks, but has excluded "widespread" AI incidents, such as a problem with a model that would impact many clients at once. Chubb declined to comment. Meanwhile, others have introduced add-ons covering narrowly defined AI risks -- for instance, a chatbot going haywire. Insurance brokers and lawyers said they feared insurers would start fighting claims in court when AI-driven losses significantly increase. Aaron Le Marquer, head of the insurance disputes team at law firm Stewarts, said: "It will probably take a big systemic event for insurers to say, hang on, we never meant to cover this type of event."
[3]
Insurance Companies Are Terrified to Cover AI, Which Should Probably Tell You Something
As major corporations go, insurance companies are about the closest thing we have to rational actors. With the job of underwriting a huge range of financial risks in a volatile market economy, the buck stops with them -- meaning that by and large, they're not going to insure any new product that's particularly risky or untested. In other words, insurance companies have to be practical to stay operational, even when the stock market wants to be anything but. The industry is a huge believer in climate change, for example, for the simple reason that hurricanes, wildfires, and droughts can all have a major impact on their bottom line. That raises an interesting question: how does the insurance industry feel about AI, a highly experimental technology with almost no track record of financial success? New reporting by the Financial Times reveals some unease: top insurance firms like AIG, American Financial Group's Great American, and WR Berkley are begging US regulators to let them exclude AI liability from their policies. Basically, the companies are concerned about fielding multibillion-dollar claims, reflecting greater anxiety about AI's potential to cause costly and unpredictable damage to corporate revenue. WR Berkley, for example, is asking for permission not to cover claims stemming from "any actual or alleged use" of AI, including any product or service sold by a company "incorporating" the software. In AIG's case, the exclusion requests seem to be a precaution, with a spokesperson telling the FT that the company has "no plans to implement them at this time." There have been other signs that the insurance industry has been growing increasingly anxious about AI risks in recent months. Cybersecurity policies, which act like a financial safety net in the event of costly cyberattacks such as ransomware or data breaches, are a particular area of concern for the industry.
That's thanks to AI itself, which is helping black hat criminals breed new kinds of malware, while also introducing new vulnerabilities to companies that deploy AI tools. "It's too much of a black box," Dennis Bertram, Europe head of cyber insurance for Mosaic, told the FT. Representing a major specialty insurer with a key focus on cyber risk, Bertram's fears are noteworthy. Though Mosaic covers some risks related to specific software where AI is embedded, the firm chooses not to cover risks from large language models (LLMs) like ChatGPT and Claude, the FT notes. With the US economy banking almost entirely on AI for growth at this point, insurance companies going out of their way not to insure AI isn't exactly a comforting sign.
[4]
Insurers Uneasy About Covering Corporate AI Risks | PYMNTS.com
Companies such as AIG, Great American and WR Berkley have recently asked U.S. regulators for leave to offer policies that exclude liabilities related to companies using AI tools like agents and chatbots, the Financial Times (FT) reported Sunday (Nov. 23). This is happening amid a rush by businesses to adopt AI, leading to some costly errors related to "hallucinations" by the technology. According to the report, WR Berkley wants to block claims involving "any actual or alleged use" of AI, including products or services sold by companies "incorporating" the technology. And in a filing with the Illinois insurance regulator, AIG said generative AI was a "wide-ranging technology" and the possibility of events triggering future claims will "likely increase over time." The company told the FT that, although it had filed generative AI exclusions, it "has no plans to implement them at this time." "It's too much of a black box," said Dennis Bertram, Mosaic's head of cyber insurance for Europe, with the report noting that Mosaic covers some AI-enhanced software but has declined to underwrite risks from large language models (LLMs) like OpenAI's ChatGPT. "Nobody knows who's liable if things go wrong," said Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, an AI insurance and auditing startup. As PYMNTS has written, the consequences of a company following through on hallucinated information can be severe, leading to flawed decisions, financial losses, and reputational harm. There are also tough questions related to accountability when AI systems are concerned. "If you remove a human from a process or if the human places its responsibility on the AI, who is going to be accountable or liable for the mistakes?"
Kelwin Fernandes, CEO of NILG.AI, a company specializing in AI solutions, asked in an interview with PYMNTS earlier this year. In many instances, it's the business using the chatbot that takes the blame. For example, Virgin Money had to issue an apology earlier this year when its chatbot chastised a customer for using the word "virgin." And Air Canada found itself in court last year when its chatbot fabricated a discount in a conversation with a prospective passenger.
Leading insurance companies including AIG, WR Berkley, and Great American are seeking regulatory approval to exclude AI-related liabilities from corporate policies, citing unpredictable risks and potential for systemic losses from AI failures.
Major insurance companies like AIG, WR Berkley, and Great American are seeking regulatory permission in the U.S. to exclude AI-related liabilities from corporate policies. This move stems from the escalating and unpredictable financial risks posed by AI technologies, which traditional insurance models struggle to quantify [1][2]. WR Berkley's proposed exclusion, for instance, broadly covers any product or service "incorporating" AI, indicating deep concern over widespread liability.
Source: FT
This industry shift follows several high-cost AI blunders. Google is facing a $110 million defamation lawsuit after its AI Overview falsely accused a company of being sued. Air Canada was compelled by a tribunal to honor a discount invented by its chatbot. Additionally, a UK engineering firm, Arup, lost HK$200 million (approximately $25 million) to fraudsters who used AI-cloned voices and video of a senior manager to authorize transfers [1][2]. These incidents highlight the tangible and substantial financial risks associated with current AI applications.
Source: Futurism
Insurance experts find AI particularly challenging to underwrite due to its opaque nature. Dennis Bertram from Mosaic Insurance described it as "too much of a black box," making it difficult to assess risk or assign liability [2][3]. Rajiv Dattani of the Artificial Intelligence Underwriting Company noted, "Nobody knows who's liable if things go wrong" [2][4].
Beyond individual claims, insurers are deeply worried about systemic, correlated losses. Kevin Kalinich of Aon warned that while a single company's AI failure causing a $400-500 million loss is manageable, the industry cannot absorb scenarios where a single upstream AI mistake triggers thousands of simultaneous losses across many clients [1][2]. Ericson Chan of Zurich Insurance highlighted that unlike traditional tech errors where responsibility is clear, AI risk involves multiple parties and could have an "exponential" market impact [2].
Source: PYMNTS
Despite the widespread exclusions, some insurers are offering niche, limited AI coverage. QBE, for example, has introduced coverage for fines under the EU AI Act, though with significant caps. Chubb provides coverage for certain AI incidents but explicitly excludes widespread, simultaneous events. Insurance brokers caution businesses to carefully review these endorsements, as they may offer less protection than they seem, potentially shifting more risk onto companies' own balance sheets [1][2].
Summarized by Navi