2 Sources
[1]
Irregular raises $80 million to secure frontier AI models
On Wednesday, AI security firm Irregular announced $80 million in new funding in a round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. A source close to the deal said the round valued Irregular at $450 million.

"Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction," co-founder Dan Lahav told TechCrunch, "and that's going to break the security stack along multiple points."

Formerly known as Pattern Labs, Irregular is already a significant player in AI evaluations. The company's work is cited in security evaluations for Claude 3.7 Sonnet as well as OpenAI's o3 and o4-mini models. More generally, the company's framework for scoring a model's vulnerability-detection ability (dubbed SOLVE) is widely used within the industry.

While Irregular has done significant work on models' existing risks, the company is fundraising with an eye toward something even more ambitious: spotting emergent risks and behaviors before they surface in the wild. The company has constructed an elaborate system of simulated environments, enabling intensive testing of a model before it is released. "We have complex network simulations where we have AI both taking the role of attacker and defender," says co-founder Omer Nevo. "So when a new model comes out, we can see where the defenses hold up and where they don't."

Security has become a point of intense focus for the AI industry as more potential risks posed by frontier models have emerged. OpenAI overhauled its internal security measures this summer, with an eye toward potential corporate espionage. At the same time, AI models are increasingly adept at finding software vulnerabilities, a power with serious implications for both attackers and defenders.

For the Irregular founders, this is just the first of many security headaches caused by the growing capabilities of large language models. "If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models," Lahav says. "But it's a moving target, so inherently there's much, much, much more work to do in the future."
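Neither article details how these simulations are implemented, but the core loop Nevo describes, alternating attacker and defender agents over a simulated network, can be sketched minimally. Everything below (the SimulatedHost class, the skill parameters, the toy network) is a hypothetical illustration for intuition, not Irregular's actual tooling:

```python
import random
from dataclasses import dataclass

@dataclass
class SimulatedHost:
    """One node in the toy network; patched services resist exploitation."""
    name: str
    services: dict[str, bool]  # service name -> patched?
    compromised: bool = False

def attacker_turn(exploit_skill: float, host: SimulatedHost) -> None:
    """Attacker model attempts to exploit any unpatched service on the host."""
    for service, patched in host.services.items():
        if not patched and random.random() < exploit_skill:
            host.compromised = True
            return

def defender_turn(patch_skill: float, host: SimulatedHost) -> None:
    """Defender model patches vulnerable services it manages to identify."""
    for service in host.services:
        if not host.services[service] and random.random() < patch_skill:
            host.services[service] = True

def run_episode(exploit_skill: float, patch_skill: float, rounds: int = 10) -> int:
    """Alternate attack and defense rounds; return the number of compromised hosts."""
    network = [
        SimulatedHost("web", {"http": False, "ssh": False}),
        SimulatedHost("db", {"sql": False}),
        SimulatedHost("mail", {"smtp": False, "imap": False}),
    ]
    for _ in range(rounds):
        for host in network:
            if not host.compromised:
                attacker_turn(exploit_skill, host)
            defender_turn(patch_skill, host)
    return sum(h.compromised for h in network)

if __name__ == "__main__":
    # Pit a candidate model in the attacker role against a fixed defender baseline.
    print("compromised hosts:", run_episode(exploit_skill=0.4, patch_skill=0.3))
```

In a real evaluation the random-draw "skills" would be replaced by calls to the models under test, and the environment would expose genuine services rather than boolean flags; the point of the sketch is only the shape of the loop: release candidate in one role, fixed baseline in the other, and a count of where the defenses held.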
[2]
Irregular raises $80M to set AI security standards for frontier models - SiliconANGLE
AI security lab startup Irregular (formerly Pattern Labs Inc.) announced today that it had raised $80 million in new funding to build its defensive systems, testing infrastructure and security tools to help vet and harden next-generation AI models for safe deployment.

Founded in 2023, Irregular pitches itself as the world's first frontier AI security lab and is devoted to securing advanced AI systems before they can be misused. The company's mission is to test, harden and defend next-generation AI models by running them through adversarial and red-teaming environments, in partnership with the world's leading AI developers.

Irregular runs controlled simulations of cutting-edge AI models to probe how those models might be exploited. The simulations explore threat scenarios such as antivirus evasion, autonomous offensive behavior, system infiltration and other misuse vectors, measuring both how the AI could carry out an attack and how resilient it is when under counterattack. Along with testing, Irregular also offers defensive tools, frameworks and scoring systems that guide how AI systems should be secured in practice.

Irregular partners with leading AI labs and government institutions, embedding its testing work into the lifecycles of major frontier models. The collaboration allows it to "see around corners," anticipating threats before they materialize in deployed systems and advising on security roadmaps, compliance and deployment policy.

The company is already working on shaping industry standards. Its evaluations are cited in OpenAI's system cards for GPT-4, o3, o4-mini and GPT-5; the UK government and Anthropic PBC use Irregular's SOLVE framework, with Anthropic using it to vet cyber risks in Claude 4; and Google DeepMind researchers recently cited the company in a paper on evaluating the emerging cyberattack capabilities of AI. The company also co-authored a whitepaper with Anthropic presenting a novel approach to using Confidential Computing technologies to enhance the security of AI model weights and user data privacy, and co-authored a seminal joint paper with RAND Corp. on AI model theft and misuse, helping shape Europe's policy discussions on AI security and setting a benchmark for the field.

"Irregular has taken on an ambitious mission to make sure the future of AI is as secure as it is powerful," explains Dan Lahav, co-founder and chief executive officer of Irregular. "AI capabilities are advancing at breakneck speed; we're building the tools to test the most advanced systems way before public release and to create the mitigations that will shape how AI is deployed responsibly at scale."

The funding round was led by Sequoia Capital Operations and Redpoint Ventures LP, with Swish Ventures and notable angel investors including Wiz Inc. CEO Assaf Rappaport and E.ON SE CEO Ofir Ehrlich also participating.

"The real AI security threats haven't emerged yet," said Shaun Maguire, partner at Sequoia Capital. "What stood out about the Irregular team is how far ahead they're thinking. They're working with the most advanced models being built today and laying the groundwork for how we'll need to make AI reliable in the years ahead."
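Neither article describes how SOLVE actually computes its scores. Purely as an illustration of what a vulnerability-detection scoring framework might look like, the sketch below aggregates a model's results over a set of known-vulnerable code samples, weighted by difficulty; the Challenge class, the weights and the challenge names are all invented for this example:

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    """A code sample with a planted vulnerability and a difficulty weight."""
    name: str
    difficulty: float  # e.g. 1.0 = easy, 3.0 = hard (weights invented here)
    found: bool        # did the model flag the planted vulnerability?

def vuln_detection_score(challenges: list[Challenge]) -> float:
    """Difficulty-weighted detection rate, scaled to 0-100."""
    total = sum(c.difficulty for c in challenges)
    earned = sum(c.difficulty for c in challenges if c.found)
    return 100.0 * earned / total if total else 0.0

# Hypothetical results for one model under evaluation.
results = [
    Challenge("sql-injection-basic", 1.0, True),
    Challenge("use-after-free", 2.5, False),
    Challenge("auth-bypass-chain", 3.0, True),
]
print(f"vulnerability-detection score: {vuln_detection_score(results):.1f}/100")
```

Weighting by difficulty rewards models that catch hard-to-spot bugs rather than only the easy ones; whatever SOLVE's real methodology is, any score meant to compare models across releases needs some fixed challenge set and weighting of this general kind.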
AI security firm Irregular secures $80 million in funding to advance its mission of protecting and evaluating frontier AI models. The company's innovative approach to AI security testing and evaluation is gaining recognition in the industry.
Irregular, formerly known as Pattern Labs, has raised $80 million in a funding round led by Sequoia Capital and Redpoint Ventures, with participation from notable investors including Wiz CEO Assaf Rappaport [1][2]. This significant investment values the AI security firm at $450 million, according to a source close to the deal [1].
Irregular positions itself as the world's first frontier AI security lab, focusing on securing advanced AI systems before they can be misused [2]. The company's approach involves [2]:

- Running controlled, adversarial simulations of cutting-edge models, exploring threat scenarios such as antivirus evasion, autonomous offensive behavior and system infiltration
- Measuring both how a model could carry out an attack and how resilient it is under counterattack
- Providing defensive tools, frameworks and scoring systems that guide how AI systems should be secured in practice
Co-founder Dan Lahav emphasizes the company's mission: "Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction, and that's going to break the security stack along multiple points." [1]
Irregular has already established itself as a significant player in AI evaluations:

- Its work is cited in security evaluations for Claude 3.7 Sonnet as well as OpenAI's o3 and o4-mini models [1]
- Its SOLVE framework for scoring a model's vulnerability-detection ability is widely used within the industry [1]
- Its evaluations appear in OpenAI's system cards for GPT-4, o3, o4-mini and GPT-5 [2]
- The UK government and Anthropic use SOLVE, with Anthropic applying it to vet cyber risks in Claude 4 [2]
Irregular is not just focused on existing risks but is also working on anticipating future threats:

- Spotting emergent risks and behaviors before they surface in the wild [1]
- Building an elaborate system of simulated environments for intensive testing of models before release [1]
- Running complex network simulations in which AI plays both attacker and defender, revealing where defenses hold up and where they don't [1]
- Anticipating threats before they materialize in deployed systems and advising on security roadmaps, compliance and deployment policy [2]
Irregular's expertise is increasingly recognized in the AI security field:

- Google DeepMind researchers recently cited the company in a paper on evaluating the emerging cyberattack capabilities of AI [2]
- It co-authored a whitepaper with Anthropic on using Confidential Computing technologies to protect AI model weights and user data privacy [2]
- It co-authored a paper with RAND Corp. on AI model theft and misuse, helping shape Europe's policy discussions on AI security [2]
As AI capabilities advance rapidly, Irregular's work becomes increasingly crucial. Shaun Maguire, partner at Sequoia Capital, notes, "The real AI security threats haven't emerged yet. What stood out about the Irregular team is how far ahead they're thinking." [2]
Summarized by Navi