Irregular Raises $80 Million to Secure Frontier AI Models

Reviewed by Nidhi Govil

AI security firm Irregular has raised $80 million to advance its mission of protecting and evaluating frontier AI models. The company's approach to AI security testing and evaluation is gaining recognition across the industry.

Irregular Secures $80 Million to Advance AI Security

Irregular, formerly known as Pattern Labs, has raised $80 million in a funding round led by Sequoia Capital and Redpoint Ventures, with participation from notable investors including Wiz CEO Assaf Rappaport [1][2]. The investment values the AI security firm at $450 million, according to a source close to the deal [1].

Source: TechCrunch

Mission and Approach

Irregular positions itself as the world's first frontier AI security lab, focusing on securing advanced AI systems before they can be misused [2]. The company's approach involves:

  1. Running controlled simulations of cutting-edge AI models to probe potential exploits
  2. Exploring threat scenarios such as antivirus evasion, autonomous offensive behavior, and system infiltration
  3. Measuring both offensive capabilities and defensive resilience of AI models [2] (a hypothetical sketch of such a harness follows this list)

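To make the testing approach above concrete, here is a minimal, purely hypothetical sketch of what a scenario-based evaluation harness could look like. Nothing here reflects Irregular's actual tooling: the scenario names, the keyword-based scoring, and the stub model are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical scenario labels mirroring the threat categories listed above.
SCENARIOS = ["antivirus_evasion", "autonomous_offense", "system_infiltration"]

@dataclass
class ScenarioResult:
    scenario: str
    exploit_found: bool  # crude signal of offensive capability
    defense_held: bool   # crude signal of defensive resilience

def run_scenario(model: Callable[[str], str], scenario: str) -> ScenarioResult:
    """Prompt the model with one scenario and score the transcript.
    A real harness would execute the model's actions inside an
    isolated sandbox rather than scan text for keywords."""
    transcript = model(f"Scenario: {scenario}. Describe your next action.")
    return ScenarioResult(
        scenario=scenario,
        exploit_found="exploit" in transcript.lower(),
        defense_held="refuse" in transcript.lower(),
    )

def evaluate(model: Callable[[str], str]) -> dict:
    """Aggregate per-scenario results into offense and defense rates."""
    results = [run_scenario(model, s) for s in SCENARIOS]
    return {
        "offense_rate": sum(r.exploit_found for r in results) / len(results),
        "defense_rate": sum(r.defense_held for r in results) / len(results),
    }

# Stand-in for a real model API call.
print(evaluate(lambda prompt: "I refuse to assist with that."))
```

In a production setting, the lambda would be replaced by a call to an actual model endpoint, and the pass/fail signals would come from instrumented sandboxes rather than string matching.
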
Co-founder Dan Lahav emphasizes the company's mission: "Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction, and that's going to break the security stack along multiple points." [1]

Industry Impact and Partnerships

Irregular has already established itself as a significant player in AI evaluations:

  1. Their work is cited in security evaluations for Claude 3.7 Sonnet and OpenAI's o3 and o4-mini models [1]

  2. The company's SOLVE framework for scoring model vulnerability-detection ability is widely used in the industry [1][2]

  3. Irregular partners with leading AI labs and government institutions, embedding its testing work into the lifecycles of major frontier models [2]

Future-Focused Security Measures

Irregular is not focused only on existing risks; the company is also working to anticipate future threats:

  1. The company has constructed elaborate simulated environments for intensive pre-release testing of models [1]

  2. These simulations include complex network scenarios where AI takes on both attacker and defender roles [1] (a toy illustration of this framing follows the list)

  3. This approach allows Irregular to spot emergent risks and behaviors before they surface in real-world applications [1][2]

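For readers unfamiliar with the attacker-versus-defender framing, the toy loop below shows the basic shape of such a simulation. It is entirely illustrative: the hosts, the turn structure, and the random "exploit" and "patch" moves are invented here, whereas a real environment would put actual models in both roles and record their behavior.

```python
import random

random.seed(7)  # deterministic toy run

# Invented toy "network": True marks a host as vulnerable.
hosts = {f"host-{i}": random.random() < 0.4 for i in range(8)}
compromised = set()

def attacker_turn():
    """Stand-in for a model playing the attacker: try to
    exploit one not-yet-compromised host."""
    targets = [h for h in hosts if h not in compromised]
    if targets:
        target = random.choice(targets)
        if hosts[target]:  # the attack succeeds only on vulnerable hosts
            compromised.add(target)

def defender_turn():
    """Stand-in for a model playing the defender: patch one
    vulnerable host that has not been compromised yet."""
    vulnerable = [h for h, v in hosts.items() if v and h not in compromised]
    if vulnerable:
        hosts[random.choice(vulnerable)] = False

for _ in range(10):  # ten rounds of alternating moves
    attacker_turn()
    defender_turn()

print(f"compromised {len(compromised)}/{len(hosts)} hosts")
```
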
Industry Recognition and Collaborations

Irregular's expertise is increasingly recognized in the AI security field:

  1. The UK government and Anthropic use Irregular's SOLVE framework, with Anthropic applying it to vet cyber risks in Claude 4

    2

  2. Google DeepMind researchers have cited the company in a paper on evaluating emerging cyberattack capabilities of AI

    2

  3. Irregular has co-authored whitepapers with Anthropic on enhancing AI model security and with RAND Corp. on AI model theft and misuse

    2

As AI capabilities advance rapidly, Irregular's work becomes increasingly crucial. Shaun Maguire, partner at Sequoia Capital, notes, "The real AI security threats haven't emerged yet. What stood out about the Irregular team is how far ahead they're thinking." [2]
