Curated by THEOUTPOST
On Mon, 12 May, 12:04 AM UTC
3 Sources
[1]
Insurers launch cover for losses caused by AI chatbot errors
Insurers at Lloyd's of London have launched a product to cover companies for losses caused by malfunctioning artificial intelligence tools, as the sector aims to profit from concerns about the risk of costly hallucinations and errors by chatbots. The policies, developed by Armilla, a start-up backed by Y Combinator, will cover the cost of court claims against a company if it is sued by a customer or another third party who has suffered harm because of an underperforming AI tool. The insurance will be underwritten by several Lloyd's insurers and will cover costs such as damages payouts and legal fees.

Companies have rushed to adopt AI to boost efficiency, but some tools, including customer service bots, have made embarrassing and costly mistakes. Such mistakes can occur, for example, because of flaws that cause AI language models to "hallucinate", or make things up. Virgin Money apologised in January after its AI-powered chatbot reprimanded a customer for using the word "virgin", while courier group DPD last year disabled part of its customer service bot after it swore at customers and called its owner the "worst delivery service company in the world". A tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up. Armilla said that the loss from selling the tickets at the lower price would have been covered by its insurance policy if Air Canada's chatbot had been found to have performed worse than expected.

Karthik Ramakrishnan, Armilla chief executive, said the new product could encourage more companies to adopt AI, since many are currently deterred by fears that tools such as chatbots will break down. Some insurers already include AI-related losses within general technology errors and omissions policies, but these generally impose low limits on payouts. A general policy that covers up to $5mn in losses might stipulate a $25,000 sublimit for AI-related liabilities, said Preet Gill, a broker at Lockton, which offers Armilla's products to its clients.

AI language models are dynamic, meaning they "learn" over time, but losses from errors caused by this process of adaptation would not normally be covered by typical technology errors and omissions policies, said Logan Payne, a broker at Lockton.

A mistake by an AI tool would not on its own be enough to trigger a payout under Armilla's policy. Instead, the cover would kick in if the insurer judged that the AI had performed below initial expectations. For example, Armilla's insurance could pay out if a chatbot gave clients or employees correct information only 85 per cent of the time, after initially doing so in 95 per cent of cases, the company said. "We assess the AI model, get comfortable with its probability of degradation, and then compensate if the models degrade," said Ramakrishnan.

Tom Graham, head of partnership at Chaucer, an insurer at Lloyd's that is underwriting the policies sold by Armilla, said his group would not sign policies covering AI systems it judges to be excessively prone to breakdown. "We will be selective, like any other insurance company," he said.
[2]
Artificial Intelligence Insurance? This Startup Will Cover the Costs of AI Mistakes
Lloyd's of London, acting through a Toronto-based startup called Armilla, has begun to offer a new type of insurance cover for the artificial intelligence era: a policy that helps cover losses caused by AI. While Lloyd's and its partner are simply capitalizing on the AI trend -- insuring against a new phenomenon to drive their own revenue, as insurers have always done -- the move is a reminder that AI is both powerful and still a potential business risk. And if you thought adopting AI tools would push down the cost of operating your business, the advent of this policy is also a reminder to check whether AI use might actually push some of your costs, such as insurance, up.
[3]
Insurers Begin Covering AI Mishap-Related Losses | PYMNTS.com
Lloyd's of London has debuted an insurance product for companies dealing with artificial intelligence (AI)-related malfunctions. As the Financial Times (FT) reported Sunday (May 11), the launch comes as the insurance industry tries to capitalize on concerns about the risk of losses from AI chatbot errors or hallucinations. The policies are offered through a startup called Armilla and will cover the cost of court claims against a business if it is sued by a customer or other third party harmed by an underperforming AI product, the report said.

As the FT noted, while companies have embraced AI to increase efficiency, some tools, such as customer service bots, have produced embarrassing, costly mistakes due to hallucinations, in which an AI model makes things up but delivers the information with confidence. As PYMNTS has written, the consequences of acting on hallucinated information can be severe, leading to flawed decisions, financial losses, and damage to a company's reputation.

There are also difficult questions surrounding accountability when AI systems are involved. "If you remove a human from a process or if the human places its responsibility on the AI, who is going to be accountable or liable for the mistakes?" asked Kelwin Fernandes, CEO of NILG.AI, a company specializing in AI solutions, in an interview with PYMNTS last week.

In many cases, it is the company behind the chatbot that takes the blame. For example, Virgin Money issued an apology earlier this year when its chatbot chastised a customer for using the word "virgin," and Air Canada ended up before a tribunal last year after its chatbot fabricated a discount in a conversation with a customer. According to the FT report, Armilla said the loss from selling the tickets at the discounted price would have been covered by its policy if Air Canada's chatbot had been found to have performed below expectations.
Meanwhile, PYMNTS explored Lloyds Bank's in-house efforts to adopt AI amid worries about hallucinations in a report earlier this year. "That was something we were quite concerned about, probably for the first 12 or 18 months," Lloyds Bank Chief Data and Analytics Officer Ranil Boteju said during a Google roundtable discussion on AI. The bank decided that "until such time as we have confidence in the guardrails, we will not expose any of the generative AI capabilities directly to customers." Instead, Lloyds initially focused on back-office efficiencies or kept a human worker on hand to monitor the AI's activities.
Insurers at Lloyd's of London have introduced a new insurance product to cover companies against losses caused by malfunctioning AI tools, addressing growing concerns about AI errors and hallucinations.
In a significant move reflecting the growing importance and potential risks of artificial intelligence (AI) in business operations, insurers at Lloyd's of London have launched a new insurance product. This coverage is designed to protect companies from losses caused by malfunctioning AI tools, particularly addressing concerns about costly errors and hallucinations in AI chatbots [1].
The policies, developed by Y Combinator-backed startup Armilla, will cover costs related to court claims against a company if it is sued by customers or third parties who have suffered harm due to an underperforming AI tool. The coverage, underwritten by several Lloyd's insurers, includes damages payouts and legal fees [1].
As companies rush to adopt AI for efficiency gains, some have faced embarrassing and costly mistakes, particularly with customer service bots. These errors often occur due to flaws causing AI language models to "hallucinate" or generate false information. Notable incidents include Virgin Money's chatbot reprimanding a customer for using the word "virgin" and Air Canada being ordered to honor a discount fabricated by its chatbot [1][3].
Armilla's insurance policy is triggered not by a single mistake, but when the AI tool's performance falls below initial expectations. For instance, if a chatbot's accuracy in providing correct information drops from 95% to 85%, that degradation could lead to a payout [1].
While some insurers already include AI-related losses in general technology errors and omissions policies, these typically carry low payout limits. Armilla's product aims to provide more comprehensive coverage for AI-specific risks [1].
Karthik Ramakrishnan, Armilla's CEO, believes the new product could encourage more companies to adopt AI by mitigating fears of potential breakdowns. However, insurers like Chaucer, an underwriter at Lloyd's, emphasize that they will be selective in signing policies, avoiding coverage for AI systems deemed excessively prone to breakdown [1].
This development highlights the evolving landscape of AI implementation in business and the associated risks. It raises questions about accountability and liability in AI-driven processes, as noted by Kelwin Fernandes, CEO of NILG.AI [3].
The introduction of AI-specific insurance also serves as a reminder that while AI adoption may drive efficiency, it could increase other operational costs, such as insurance premiums [2].