3 Sources
3 Sources
[1]
The multi-billion AI security problem enterprises can't ignore
AI agents are supposed to make work easier. But they're also creating a whole new category of security nightmares. As companies deploy AI-powered chatbots, agents, and copilots across their operations, they're facing a new risk: how do you let employees and AI agents use powerful AI tools without accidentally leaking sensitive data, violating compliance rules, or opening the door to prompt-based injections? Witness AI just raised $58 million to find a solution, building what they call "the confidence layer for enterprise AI." Today on TechCrunch's Equity podcast, Rebecca Bellan was joined by Barmak Meftah, co-founder and partner at Ballistic Ventures, and Rick Caccia, CEO of Witness AI, to discuss what enterprises are actually worried about, why AI security become an $800 billion to $1.2 trillion market by 2031, and what happens when AI agents start talking to other AI agents without human oversight.
[2]
Exclusive: WitnessAI nabs $58M to secure enterprise AI
Why it matters: Securing agents is the next battleground in cybersecurity, with defenders racing to lock down non-human identities before hackers get hold of them. Zoom in: WitnessAI offers an AI security and governance platform that helps companies control what data flows into internal AI tools, including how AI agents ingest and move through corporate systems. * The platform also helps companies monitor and control these activities based on whatever security and data privacy compliance requirements they have to follow. * "People are using other people's AI, customers are using company chatbots, agents are coming along -- all of that problem is going to bleed together," CEO Rick Caccia told Axios. "It's going to be like 'people talking to agents, to apps, to models' and so we built a product that kind of covers all of it." Between the lines: The company, which emerged from stealth in 2024, is betting that agent security will soon be unavoidable as enterprises push toward autonomy. * WitnessAI already works with top global airlines, automakers, financial services firms, utilities and telcos, Caccia said. * "If we don't enable the safe use of AI and agents, we should expect data loss and manipulation on a scale we've never seen," Nicole Perlroth, founding partner of Silver Buckshot Ventures, which participated in the round, told Axios. The big picture: Investors are piling into AI security startups focused on agentic threats. * PitchBook estimates that nearly $250 million was raised for agentic cybersecurity companies last year, as of Dec. 15, across almost two dozen deals. What they're saying: "If adversaries are going to look at being able to get into your network, your data, your most sensitive information, they're going to come after agents," Gen. Paul Nakasone, former head of the NSA and Cyber Command and a board member at WitnessAI, told Axios. Reality check: Agentic adoption is still early. Around one-in-four respondents in a recent McKinsey study said their organizations were scaling agentic AI systems meaningfully. What's next: Caccia is eyeing international expansion, including potential distribution deals with managed service providers and internet providers. Go deeper: Securing AI agents is bringing in big money
[3]
WitnessAI debuts Agentic Security as enterprises deploy more autonomous AI agents - SiliconANGLE
WitnessAI debuts Agentic Security as enterprises deploy more autonomous AI agents Artificial intelligence safety platform startup WitnessAI Inc. today announced the launch of Agentic Security, which extends its "confidence layer" for enterprise artificial intelligence beyond large language models to cover autonomous and semi-autonomous AI agents. The new capability is designed to give security teams visibility, control and protection as organizations increasingly deploy agents that can access internal systems, external application programming interfaces and third-party tools. The Agentic Security capability seeks to address the issue whereby employees and engineering teams are deploying agents that accept natural language inputs and execute backend tasks through tool calls. The problem is that security teams lack visibility into which agents exist, which tools they access and how to protect them from adversarial manipulation. WitnessAI argues that agents introduce a fundamentally different attack surface than traditional AI chat interfaces. Because agents can act automatically on every permission they are granted, a single malicious or manipulated prompt can cascade through tool calls, API requests and database queries before a human notices, creating risks including prompt injection, data exfiltration and unauthorized actions that conventional network and endpoint security tools were not built to detect. Agentic Security is aimed at addressing the security gap by automatically discovering agentic activity across high-risk environments such as Claude Desktop and plugins, ChatGPT with enabled plugins, VS Code with AI extensions and locally running agent frameworks including LangChain, LlamaIndex, CrewAI and AutoGPT. The platform can distinguish standard chat sessions from agentic sessions by analyzing tool usage in traffic payloads and maps which public and private Model Context Protocol servers are being accessed to enrich them with intent and functional classifications. The platform also focuses on accountability by linking agent actions back to the humans who initiated them. Agentic Security connects human and agent identities and captures execution context at runtime to allow security teams to see who triggered a workflow and what the agent attempted to do, even when agents communicate with other agents. The ability to find attribution allows for a single policy framework and audit trail across both human and agentic workforces. Other features of the new offering include bidirectional runtime defense, with prompts scanned before execution to detect and block prompt injection, jailbreaking and role-playing attacks, while sensitive data such as credentials and personally identifiable information is tokenized in real time. Agent outputs are also inspected before delivery to enforce policy compliance and filter harmful content, with enforcement based on behavioral intent rather than simple keyword matching. "AI workflows are maturing and starting to cross corporate and cloud LLMs, bots and agents," said Chief Executive Rick Caccia. "We are the only AI security vendor that can secure every AI interaction, everywhere, with a unified solution. The alternative is trying to stitch together secure workflows using network proxies, firewalls, data loss protection products and extended detection and response agents. In short, the alternative is a complex mess." WitnessAI is a venture capital-backed startup that has raised a single round of $27.5 million in May 2024. Investors in the company include Google Ventures (GV Management Co.) and Ballistic Ventures.
Share
Share
Copy Link
WitnessAI just raised $58 million to build what it calls a 'confidence layer for enterprise AI' as companies deploy AI-powered chatbots, agents, and copilots across operations. The startup addresses how enterprises can use powerful AI tools without accidentally leaking sensitive data, violating compliance rules, or opening the door to prompt-based injections—risks that could turn AI security into an $800 billion to $1.2 trillion market by 2031.
As enterprises accelerate their adoption of AI agents, chatbots, and copilots, a critical question looms: how do organizations harness these powerful tools without exposing themselves to data breaches, compliance violations, and novel attack vectors? WitnessAI has emerged from stealth with a $58 million funding round to tackle this exact challenge, building what CEO Rick Caccia describes as a confidence layer for enterprise AI
1
2
. The investment, backed by Silver Buckshot Ventures, Ballistic Ventures, and others, positions the startup at the forefront of what investors believe could become an $800 billion to $1.2 trillion market by 20311
.
Source: Axios
The AI security problem enterprises face today differs fundamentally from traditional cybersecurity threats. AI agents don't just respond to queries—they act autonomously on permissions granted to them, accessing internal systems, external APIs, and third-party tools with minimal human oversight. A single malicious or manipulated prompt can cascade through tool calls, API requests, and database queries before anyone notices, creating risks that conventional network and endpoint security tools weren't built to detect
3
.
Source: TechCrunch
WitnessAI's newly launched Agentic Security capability extends its AI security and governance platform beyond large language models to cover autonomous and semi-autonomous AI agents
3
. The platform automatically discovers agentic activity across high-risk environments including Claude Desktop, ChatGPT with enabled plugins, VS Code with AI extensions, and locally running agent frameworks like LangChain, LlamaIndex, CrewAI, and AutoGPT3
.
Source: SiliconANGLE
What sets this approach apart is its ability to distinguish standard chat sessions from agentic sessions by analyzing tool usage in traffic payloads. The platform maps which public and private Model Context Protocol servers are being accessed, enriching them with intent and functional classifications
3
. This visibility addresses a critical gap: security teams currently lack insight into which agents exist, which tools they access, and how to protect them from adversarial manipulation.The platform's bidirectional runtime defense scans prompts before execution to detect and block prompt injection, jailbreaking, and role-playing attacks. Sensitive data such as credentials and personally identifiable information is tokenized in real time to prevent data exfiltration
3
. Agent outputs are inspected before delivery to enforce policy compliance and filter harmful content, with enforcement based on behavioral intent rather than simple keyword matching.Caccia emphasized the scope of the challenge to Axios: "People are using other people's AI, customers are using company chatbots, agents are coming along—all of that problem is going to bleed together. It's going to be like 'people talking to agents, to apps, to models' and so we built a product that kind of covers all of it"
2
. WitnessAI already works with top global airlines, automakers, financial services firms, utilities, and telecommunications companies2
.The urgency around securing AI agents stems from their expanding role in corporate operations. Nicole Perlroth, founding partner of Silver Buckshot Ventures, warned: "If we don't enable the safe use of AI and agents, we should expect data loss and manipulation on a scale we've never seen"
2
. Gen. Paul Nakasone, former head of the NSA and Cyber Command and a board member at WitnessAI, added that adversaries looking to access networks, data, and sensitive information will target agents as entry points2
.Investors are responding accordingly. PitchBook estimates that nearly $250 million was raised for agentic cybersecurity companies last year across almost two dozen deals as of December 15
2
. This surge reflects growing recognition that securing AI agents represents the next battleground in cybersecurity, with defenders racing to lock down non-human identities before hackers exploit them.Related Stories
Agentic Security connects human and agent identities, capturing execution context at runtime so security teams can see who triggered a workflow and what the agent attempted to do—even when agents communicate with other agents
3
. This attribution capability enables a single policy framework and audit trail across both human and agentic workforces, addressing governance requirements that become more complex as autonomy increases.The platform helps companies control what data flows into internal AI tools, including how AI agents ingest and move through corporate systems, while monitoring these activities based on security and data privacy compliance requirements
2
. Caccia positioned WitnessAI as the only vendor capable of securing every AI interaction everywhere with a unified solution, contrasting this with the alternative of stitching together secure workflows using network proxies, firewalls, data loss protection products, and extended detection and response agents—"a complex mess"3
.While agentic adoption remains early—around one-in-four respondents in a recent McKinsey study said their organizations were scaling agentic AI systems meaningfully
2
—the trajectory is clear. Caccia is eyeing international expansion, including potential distribution deals with managed service providers and internet providers2
. As AI workflows mature and cross corporate and cloud large language models, bots, and agents, the question of how to prevent compliance breaches while enabling innovation will only intensify. Organizations should watch how quickly agent-to-agent communication becomes standard practice, as this scenario introduces compounding risks that demand proactive security measures rather than reactive responses.Summarized by
Navi
30 Jul 2025•Technology

15 Oct 2025•Technology

11 Nov 2025•Technology

1
Policy and Regulation

2
Technology

3
Policy and Regulation
