3 Sources
[1]
Former Anthropic exec raises $15M to insure AI agents and help startups deploy safely
Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now A new startup founded by a former Anthropic executive has raised $15 million to solve one of the most pressing challenges facing enterprises today: how to deploy artificial intelligence systems without risking catastrophic failures that could damage their businesses. The Artificial Intelligence Underwriting Company (AIUC), which launches publicly today, combines insurance coverage with rigorous safety standards and independent audits to give companies confidence in deploying AI agents -- autonomous software systems that can perform complex tasks like customer service, coding, and data analysis. The seed funding round was led by Nat Friedman, former GitHub CEO, through his firm NFDG, with participation from Emergence Capital, Terrain, and several notable angel investors including Ben Mann, co-founder of Anthropic, and former chief information security officers at Google Cloud and MongoDB. "Enterprises are walking a tightrope," said Rune Kvist, AIUC's co-founder and CEO, in an interview. "On the one hand, you can stay on the sidelines and watch your competitors make you irrelevant, or you can lean in and risk making headlines for having your chatbot spew Nazi propaganda, or hallucinating your refund policy, or discriminating against the people you're trying to recruit." The company's approach tackles a fundamental trust gap that has emerged as AI capabilities rapidly advance. While AI systems can now perform tasks that rival human undergraduate-level reasoning, many enterprises remain hesitant to deploy them due to concerns about unpredictable failures, liability issues, and reputational risks. Creating security standards that move at AI speed AIUC's solution centers on creating what Kvist calls "SOC 2 for AI agents" -- a comprehensive security and risk framework specifically designed for artificial intelligence systems. SOC 2 is the widely-adopted cybersecurity standard that enterprises typically require from vendors before sharing sensitive data. "SOC 2 is a standard for cybersecurity that specifies all the best practices you must adopt in sufficient detail so that a third party can come and check whether a company meets those requirements," Kvist explained. "But it doesn't say anything about AI. There are tons of new questions like: how are you handling my training data? What about hallucinations? What about these tool calls?" The AIUC-1 standard addresses six key categories: safety, security, reliability, accountability, data privacy, and societal risks. The framework requires AI companies to implement specific safeguards, from monitoring systems to incident response plans, that can be independently verified through rigorous testing. "We take these agents and test them extensively, using customer support as an example since that's easy to relate to. We try to get the system to say something racist, to give me a refund I don't deserve, to give me a bigger refund than I deserve, to say something outrageous, or to leak another customer's data. We do this thousands of times to get a real picture of how robust the AI agent actually is," Kvist said. From Benjamin Franklin's fire insurance to AI risk management The insurance-centered approach draws on centuries of precedent where private markets moved faster than regulation to enable the safe adoption of transformative technologies. Kvist frequently references Benjamin Franklin's creation of America's first fire insurance company in 1752, which led to building codes and fire inspections that tamed the blazes ravaging Philadelphia's rapid growth. "Throughout history, insurance has been the right model for this, and the reason is that insurers have an incentive to tell the truth," Kvist explained. "If they say the risks are bigger than they are, someone's going to sell cheaper insurance. If they say the risks are smaller than they are, they're going to have to pay the bill and go out of business." The same pattern emerged with automobiles in the 20th century, when insurers created the Insurance Institute of Highway Safety and developed crash testing standards that incentivized safety features like airbags and seatbelts -- years before government regulation mandated them. Major AI companies already using the new insurance model AIUC has already begun working with several high-profile AI companies to validate its approach. The company has certified AI agents for unicorn startups Ada (customer support) and Cognition (coding), and helped unlock enterprise deals that had been stalled due to trust concerns. "Ada, we help them unlock a deal with the top five social media company where we will came in and ran independent tests on the risks that this company cared about, and that helped unlock that deal, basically giving them the confidence that this could actually be shown to their customers," Kvist said. The startup is also developing partnerships with established insurance providers, including Lloyd's of London, the world's oldest insurance market, to provide the financial backing for policies. This addresses a key concern about trusting a startup with major liability coverage. "The insurance policies are going to be backed by the balance sheets of the big insurers," Kvist explained. "So for example, when we work with Lloyd's of London, the world's oldest insurer, they've never failed to pay a claim, and the insurance policy ultimately comes from them." Quarterly updates vs. years-long regulatory cycles One of AIUC's key innovations is designing standards that can keep pace with AI's breakneck development speed. While traditional regulatory frameworks like the EU AI Act take years to develop and implement, AIUC plans to update its standards quarterly. "The EU AI Act was started back in 2021, they're now about to release it, but they're pausing it again because it's too onerous four years later," Kvist noted. "That cycle makes it very hard to get the legacy regulatory process to keep up with this technology." This agility has become increasingly important as the competitive gap between US and Chinese AI capabilities narrows. "A year and a half ago, everyone would say, like, we're two years ahead now, that sounds like eight months, something like that," Kvist observed. How AI insurance actually works: testing systems to breaking point AIUC's insurance policies cover various types of AI failures, from data breaches and discriminatory hiring practices to intellectual property infringement and incorrect automated decisions. The company prices coverage based on extensive testing that attempts to break AI systems thousands of times across different failure modes. "For some of the other things, we think it's interesting to you. Or not wait for a lawsuit. So for example, if you issue an incorrect refund, great, well, the price of that is obvious, is the amount of money that you incorrectly refunded," Kvist explained. The startup works with a consortium of partners including PwC (one of the "Big Four" accounting firms), Orrick (a leading AI law firm), and academics from Stanford and MIT to develop and validate its standards. Former Anthropic executive leaves to solve AI trust problem The founding team brings deep experience from both AI development and institutional risk management. Kvist was the first product and go-to-market hire at Anthropic in early 2022, before ChatGPT's launch, and sits on the board of the Center for AI Safety. Co-founder Brandon Wang is a Thiel Fellow who previously built consumer underwriting businesses, while Rajiv Dattani is a former McKinsey partner who led global insurance work and served as COO of METR, a nonprofit that evaluates leading AI models. "The question that really interested me is: how, as a society, are we going to deal with this technology that's washing over us?" Kvist said of his decision to leave Anthropic. "I think building AI, which is what Anthropic is doing, is very exciting and will do a lot of good for the world. But the most central question that gets me up in the morning is: how, as a society, are we going to deal with this?" The race to make AI safe before regulation catches up AIUC's launch signals a broader shift in how the AI industry approaches risk management as the technology moves from experimental deployments to mission-critical business applications. The insurance model offers enterprises a path between the extremes of reckless AI adoption and paralyzed inaction while waiting for comprehensive government oversight. The startup's approach could prove crucial as AI agents become more capable and widespread across industries. By creating financial incentives for responsible development while enabling faster deployment, companies like AIUC are building the infrastructure that could determine whether artificial intelligence transforms the economy safely or chaotically. "We're hoping that this insurance model, this market-based model, both incentivizes fast adoption and investment in security," Kvist said. "We've seen this throughout history -- that the market can move faster than legislation on these issues." The stakes couldn't be higher. As AI systems edge closer to human-level reasoning across more domains, the window for building robust safety infrastructure may be rapidly closing. AIUC's bet is that by the time regulators catch up to AI's breakneck pace, the market will have already built the guardrails. After all, Philadelphia's fires didn't wait for government building codes -- and today's AI arms race won't wait for Washington either.
[2]
Exclusive: Who covers the damage when an AI agent goes rogue? This startup has an insurance policy for that
That's right: Insurance policies for AI agents. AIUC cofounder and CEO Rune Kvist says that insurance for agents -- that is, autonomous AI systems capable of making decisions and taking action without constant human oversight -- is about to be big business. Previously the first product and go-to-market hire at Anthropic in 2022, Kvist's founding team also includes CTO Brandon Wang, a Thiel Fellow who previously founded a consumer underwriting business, and Rajiv Dattani a former McKinsey partner who led work in the global insurance sector, and was COO of METR, a research non-profit that evaluated OpenAI and Anthropic's models before deployment. At the heart of AIUC's approach is a new risk and safety framework called AIUC-1, designed specifically for AI agents. It pulls together existing standards like the NIST AI Risk Management Framework, the EU AI Act, and MITRE's ATLAS threat model -- then layers on auditable, agent-specific safeguards. The idea is simple: make it easy for enterprises to adopt AI agents with the same kind of trust signals they expect in cloud security or data privacy. "The important thing about insurance is that it creates financial incentives to reduce the risk," Kvist told Fortune. "That means that we're going to be tracking, where does it go wrong, what are the problems you're solving. And insurers can often enforce that you do take certain steps in order to get certified." While there other startups also currently working on AI insurance products, Kvist said none are building the kind of agent standard that prevents risks like AIUC-1. "Insurance & standards go hand-in-hand to create confidence around AI adoption," he said. "AIUC-1 creates a standard for AI adoption," said John Bautista, partner at law firm Orrick and who helped create the standard. "As businesses enter a brave new world of AI, there's a ton of legal ambiguities that hold up adoption. With new laws and frameworks constantly emerging, companies need one clear standard that pulls it all together and makes adoption massively simple," he said. The story of American progress, he added, is also a story of insurance. Benjamin Franklin founded the country's first mutual fire insurance company in response to devastating house fires. In the 20th century, specialized players like UL Labs emerged from the insurance industry to test the safety of electric appliances. Car insurers built crash-test standards that gave birth to the modern auto industry. AIUC is betting that history is about to repeat. "It's not Toyota that does the car crash testing, it's independent bodies." Kvist pointed out. "I think there's a need for an independent ecosystem of companies that are answering [the question], can we trust these AI agents?" To make that happen, AIUC will offer a trifecta: standards, audits, and liability coverage. The AIUC-1 framework creates a technical and operational baseline. Independent audits test real-world performance -- by trying to get agents to fail, hallucinate, leak data, or act dangerously. And insurance policies cover customers and vendors in the event an agent causes harm, with pricing that reflects how safe the system is. If an AI sales agent accidentally exposes customer personally identifiable information, for example, or if an AI assistant in finance fabricates a policy or misquotes tax information, this type of insurance policy could cover the fallout. The financial incentive, Kvist explained, is the point. Just like consumers get a better car insurance rate for having airbags and anti-lock brakes, AI systems that pass the AIUC-1 audit could get better terms on insurance, in Kvist's view. That pushes AI vendors toward better practices, faster -- and gives enterprises a concrete reason to adopt sooner, before their competitors do. AIUC's view is that the market, not just government, can drive responsible development. Top-down regulation is "hard to get right," said Kvist. But leaving it all to companies like OpenAI, Anthropic and Google doesn't work either -- voluntary safety commitments are already being walked back. Insurance creates a third way to align incentives and evolves with the technology, he explained. Kvist likens AIUC-1 to SOC-2, the security certification standard that gave startups a way to signal trust to enterprise buyers. He imagines a world in which AI agent liability insurance becomes as common -- and necessary -- as cyber insurance is today, predicting a $500 billion market by 2030, eclipsing even cyber insurance. AIUC is already working with several enterprise customers and insurance partners (AIUC said it could disclose the names yet), and is moving quickly to become the industry benchmark for AI agent safety. Investors like Nat Friedman agree. As the former CEO of GitHub, Friedman saw the trust issues firsthand when launching GitHub Copilot. "All his customers were wary of adopting it," Kvist recalls. "There were all these IP risks." As a result, Friedman had been looking for an AI insurance startup for a couple of years. After a 90-minute pitch meeting, he said he wanted to invest -- which he did, in a seed round in June, before Friedman moved to join Alexandr Wang at Mark Zuckerberg's new Meta Superintelligence Labs. In a few years, said Kvist, insuring AI agents will be mainstream. "These agents are making a much bigger promise, which is 'we're going to do the work for you,'" he said. "We think the liability becomes much bigger, and therefore the interest is much bigger."
[3]
What If AI Goes Rogue? This Startup Wants to Write the Insurance Policy
An early Anthropic employee has launched an AI insurance firm to help companies deploy artificial intelligence systems while mitigating the catastrophic risks of rogue AI models. The Artificial Intelligence Underwriting Company, which came out of stealth on Wednesday, was launched by co-founders Rune Kvist, formerly of Anthropic; Brandon Wang, a former Thiel fellow; and Rajiv Dattani, a former McKinsey insurance partner. The trio raised $15 million, primarily from former GitHub CEO Nat Friedman through his firm, NFDG. Friedman was responsible for $10 million of that investment, with the remaining third coming from Emergence, Terrain, and angel investors like Benjamin Mann of Anthropic. A top-down approach to AI regulation, Kvist tells Inc., is too slow to keep up with how the technology is evolving. "The EU Act started drafting in 2021. So much has changed since then," he says. Instead, AIUC is creating both the safety standards and underwriting AI insurance for companies that are implementing the technology. "You wouldn't rely on Goldman Sachs to say whether their bonds are worthwhile; you look to Standard & Poor," he says. This independent perspective, Kvist says, is essential to accurately assessing the safety of AI models and their implementation. For companies that meet AIUC's safety standards, they partner with some of the largest insurance companies to provide an insurance policy. A retailer, Kvist says, may take out a policy to cover cases in which its AI chatbots go rogue and misinform users about its returns policies. An insurance-based approach, he adds, helps untangle one of the most pressing questions around AI safety: Who is responsible when the machine goes awry? The team, which officially launched in December 2024 and had been operating in stealth until this week, currently consists of five people. But they intend to double headcount over the next year. "We're trying to keep things tight," Kvist says. In a paper released earlier this month, Kvist, Dattani and Wang outlined their philosophy for how insurance can help hasten AI progress while managing the risks artificial intelligence poses to society. "We're navigating a tightrope as Superintelligence nears," the trio wrote, referring to the advancement of artificial intelligence. "If the west slows down unilaterally, China could dominate the 21st century. If we accelerate recklessly, accidents will halt progress." AI regulation, however, according to the trio, is moving too slowly and is too much of a patchwork to be effective in mitigating the risks of the technology. Instead, they say "insurers are incentivized to develop and quickly iterate on core safety measures" and therefore could help lead AI regulation efforts. However, AIUC isn't an alternative to government regulation. Kvist still believes governments should play a key role in regulating larger-scale AI risks, like that posed by deepfakes, AI's ability to create bioweapons, or other catastrophic risks. "There's no substitute to government regulation," says Timnit Gebru, the founder of the Distributed AI Research Institute. "Insurance companies should be more attuned to the risks," she adds, but Gebru worries that the rapid pace of AI adoption is at odds with reducing AI harms. Kvist estimates enterprise adoption of AI is set to hit $500 billion in the next five years. As it does, the market for insurance providers is set to grow exponentially. "We're in uncharted territory without a parallel in human history," he says. "We're going to need to act with confidence around all the ambiguity and put numbers on the risks. The people you can trust to put numbers on it are those with skin in the game, who pay if we get the numbers wrong." The final deadline for the 2025 Inc. Power Partner Awards is tonight, July 25, at 11:59 p.m. PT. Apply now.
Share
Copy Link
The Artificial Intelligence Underwriting Company (AIUC) raises $15 million to provide insurance and safety standards for AI agents, addressing enterprise concerns about AI deployment risks.
The Artificial Intelligence Underwriting Company (AIUC), a startup founded by former Anthropic executive Rune Kvist, has emerged from stealth mode with a $15 million seed funding round led by Nat Friedman, former GitHub CEO 1. The company aims to address one of the most pressing challenges facing enterprises today: how to deploy artificial intelligence systems without risking catastrophic failures that could damage their businesses 1.
Source: Fortune
AIUC's innovative approach combines insurance coverage with rigorous safety standards and independent audits to give companies confidence in deploying AI agents - autonomous software systems that can perform complex tasks like customer service, coding, and data analysis 1. The company has developed a comprehensive security and risk framework specifically designed for artificial intelligence systems, which Kvist calls "SOC 2 for AI agents" 1.
At the heart of AIUC's approach is a new risk and safety framework called AIUC-1, designed specifically for AI agents 2. This standard addresses six key categories: safety, security, reliability, accountability, data privacy, and societal risks 1. It pulls together existing standards like the NIST AI Risk Management Framework, the EU AI Act, and MITRE's ATLAS threat model, then layers on auditable, agent-specific safeguards 2.
AIUC's framework requires AI companies to implement specific safeguards, from monitoring systems to incident response plans, that can be independently verified through rigorous testing 1. The company conducts extensive tests on AI agents, attempting to provoke failures such as racist statements, unauthorized refunds, or data leaks, to assess the robustness of the AI systems 1.
Source: VentureBeat
AIUC's approach draws on historical precedents where private markets moved faster than regulation to enable the safe adoption of transformative technologies 1. Kvist argues that this insurance-centered model creates financial incentives to reduce risks and can evolve more quickly than government regulation 23.
AIUC has already begun working with several high-profile AI companies to validate its approach. The company has certified AI agents for unicorn startups Ada (customer support) and Cognition (coding), helping to unlock enterprise deals that had been stalled due to trust concerns 1. AIUC is also developing partnerships with established insurance providers, including Lloyd's of London, to provide financial backing for policies 1.
Kvist predicts that insuring AI agents will become mainstream in the near future, estimating a $500 billion market by 2030 23. The company's approach is seen as a potential solution to align incentives and evolve with the technology, creating a third way between top-down regulation and voluntary safety commitments by major AI companies 2.
While AIUC's approach has garnered significant investment and interest, some experts caution that it should not be seen as a substitute for government regulation 3. Timnit Gebru, founder of the Distributed AI Research Institute, warns that the rapid pace of AI adoption may be at odds with reducing AI harms 3.
Source: Inc. Magazine
As AI continues to advance rapidly, AIUC's innovative approach to risk management and insurance could play a crucial role in building trust and enabling safer deployment of AI technologies across various industries.
Summarized by
Navi
[1]
Google releases Gemini 2.5 Deep Think, an advanced AI model designed for complex queries, available exclusively to AI Ultra subscribers at $250 per month. The model showcases improved performance in various benchmarks and introduces parallel thinking capabilities.
17 Sources
Technology
14 hrs ago
17 Sources
Technology
14 hrs ago
OpenAI raises $8.3 billion in a new funding round, valuing the company at $300 billion. The AI giant's rapid growth and ambitious plans attract major investors, signaling a significant shift in the AI industry landscape.
10 Sources
Business and Economy
7 hrs ago
10 Sources
Business and Economy
7 hrs ago
Reddit's Q2 earnings reveal significant growth driven by AI-powered advertising tools and data licensing deals, showcasing the platform's successful integration of AI technology.
7 Sources
Business and Economy
14 hrs ago
7 Sources
Business and Economy
14 hrs ago
Reddit is repositioning itself as a search engine, integrating its traditional search with AI-powered Reddit Answers to create a unified search experience. The move comes as the platform sees increased user reliance on its vast community-generated content for information.
9 Sources
Technology
22 hrs ago
9 Sources
Technology
22 hrs ago
OpenAI is poised to launch GPT-5, a revolutionary AI model that promises to unify various AI capabilities and automate model selection for optimal performance.
2 Sources
Technology
14 hrs ago
2 Sources
Technology
14 hrs ago