WitnessAI raises $58 million to solve the multi-billion AI security problem enterprises face

3 Sources

Share

WitnessAI just raised $58 million to build what it calls a 'confidence layer for enterprise AI' as companies deploy AI-powered chatbots, agents, and copilots across operations. The startup addresses how enterprises can use powerful AI tools without accidentally leaking sensitive data, violating compliance rules, or opening the door to prompt-based injections—risks that could turn AI security into an $800 billion to $1.2 trillion market by 2031.

WitnessAI Secures $58 Million to Address Growing AI Security Challenges

As enterprises accelerate their adoption of AI agents, chatbots, and copilots, a critical question looms: how do organizations harness these powerful tools without exposing themselves to data breaches, compliance violations, and novel attack vectors? WitnessAI has emerged from stealth with a $58 million funding round to tackle this exact challenge, building what CEO Rick Caccia describes as a confidence layer for enterprise AI

1

2

. The investment, backed by Silver Buckshot Ventures, Ballistic Ventures, and others, positions the startup at the forefront of what investors believe could become an $800 billion to $1.2 trillion market by 2031

1

.

Source: Axios

Source: Axios

The AI security problem enterprises face today differs fundamentally from traditional cybersecurity threats. AI agents don't just respond to queries—they act autonomously on permissions granted to them, accessing internal systems, external APIs, and third-party tools with minimal human oversight. A single malicious or manipulated prompt can cascade through tool calls, API requests, and database queries before anyone notices, creating risks that conventional network and endpoint security tools weren't built to detect

3

.

Source: TechCrunch

Source: TechCrunch

Introducing Agentic Security for Autonomous AI Agents

WitnessAI's newly launched Agentic Security capability extends its AI security and governance platform beyond large language models to cover autonomous and semi-autonomous AI agents

3

. The platform automatically discovers agentic activity across high-risk environments including Claude Desktop, ChatGPT with enabled plugins, VS Code with AI extensions, and locally running agent frameworks like LangChain, LlamaIndex, CrewAI, and AutoGPT

3

.

Source: SiliconANGLE

Source: SiliconANGLE

What sets this approach apart is its ability to distinguish standard chat sessions from agentic sessions by analyzing tool usage in traffic payloads. The platform maps which public and private Model Context Protocol servers are being accessed, enriching them with intent and functional classifications

3

. This visibility addresses a critical gap: security teams currently lack insight into which agents exist, which tools they access, and how to protect them from adversarial manipulation.

Securing AI Agents Against Prompt Injection and Data Exfiltration

The platform's bidirectional runtime defense scans prompts before execution to detect and block prompt injection, jailbreaking, and role-playing attacks. Sensitive data such as credentials and personally identifiable information is tokenized in real time to prevent data exfiltration

3

. Agent outputs are inspected before delivery to enforce policy compliance and filter harmful content, with enforcement based on behavioral intent rather than simple keyword matching.

Caccia emphasized the scope of the challenge to Axios: "People are using other people's AI, customers are using company chatbots, agents are coming along—all of that problem is going to bleed together. It's going to be like 'people talking to agents, to apps, to models' and so we built a product that kind of covers all of it"

2

. WitnessAI already works with top global airlines, automakers, financial services firms, utilities, and telecommunications companies

2

.

Why Enterprise AI Security Matters Now

The urgency around securing AI agents stems from their expanding role in corporate operations. Nicole Perlroth, founding partner of Silver Buckshot Ventures, warned: "If we don't enable the safe use of AI and agents, we should expect data loss and manipulation on a scale we've never seen"

2

. Gen. Paul Nakasone, former head of the NSA and Cyber Command and a board member at WitnessAI, added that adversaries looking to access networks, data, and sensitive information will target agents as entry points

2

.

Investors are responding accordingly. PitchBook estimates that nearly $250 million was raised for agentic cybersecurity companies last year across almost two dozen deals as of December 15

2

. This surge reflects growing recognition that securing AI agents represents the next battleground in cybersecurity, with defenders racing to lock down non-human identities before hackers exploit them.

Accountability and Governance Across Human and Agent Workforces

Agentic Security connects human and agent identities, capturing execution context at runtime so security teams can see who triggered a workflow and what the agent attempted to do—even when agents communicate with other agents

3

. This attribution capability enables a single policy framework and audit trail across both human and agentic workforces, addressing governance requirements that become more complex as autonomy increases.

The platform helps companies control what data flows into internal AI tools, including how AI agents ingest and move through corporate systems, while monitoring these activities based on security and data privacy compliance requirements

2

. Caccia positioned WitnessAI as the only vendor capable of securing every AI interaction everywhere with a unified solution, contrasting this with the alternative of stitching together secure workflows using network proxies, firewalls, data loss protection products, and extended detection and response agents—"a complex mess"

3

.

What Comes Next for AI Security

While agentic adoption remains early—around one-in-four respondents in a recent McKinsey study said their organizations were scaling agentic AI systems meaningfully

2

—the trajectory is clear. Caccia is eyeing international expansion, including potential distribution deals with managed service providers and internet providers

2

. As AI workflows mature and cross corporate and cloud large language models, bots, and agents, the question of how to prevent compliance breaches while enabling innovation will only intensify. Organizations should watch how quickly agent-to-agent communication becomes standard practice, as this scenario introduces compounding risks that demand proactive security measures rather than reactive responses.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo