7 Sources
[1]
Building agent-first governance and security
In some modern enterprises, non-human identities (NHI) are outpacing human identities, and that trend will explode with agentic AI. Solid governance and a fortified security foundation are therefore critical. According to the Deloitte AI Institute 2026 State of AI report, nearly 74% of companies plan to deploy agentic AI within two years. Yet only one in five (21%) reports having a mature model for governance of autonomous agents. Executives are most concerned with data privacy and security (73%); legal, intellectual property, and regulatory compliance (50%); followed closely by governance capabilities and oversight (46%). Enterprises may not even realize they are treating agents within their environment as first-class citizens with the keys to the kingdom, creating looming blind spots and potential points of exposure. What is needed is a robust control plane that governs, observes, and secures how AI agents, as well as their tools and models, operate across the enterprise. "A control plane is the shared, centralized layer governing who can run which agents, with which permissions, under which policies, and using which models and tools," according to Andrew Rafla, principal, Deloitte Cyber Practice. "Without a true control plane, you don't really have the ability to scale agents autonomously -- you just have unmanaged execution, and that comes with a lot of risk," he says. "If you can't answer what an agent did, on whose behalf, using what data, under what policy -- and whether you can reproduce or stop it -- you don't have a functional control plane." Governance must make those answers obvious, not aspirational, he says. Governance is what turns AI pilots into production use cases. It's the bridge that lets companies move from impressive experiments to safe, repeatable, enterprise-wide automation. Without governance, agent deployments don't fail safely. They fail unpredictably and at scale. Download the article.
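Rafla's definition of a control plane maps naturally onto a small amount of code. The sketch below is purely illustrative and hypothetical -- not Deloitte's or any vendor's implementation -- in which an invented `AgentPolicy` record and `authorize_and_log()` helper answer the four questions up front: what an agent did, on whose behalf, using what data, and under what policy.

```python
# Illustrative sketch only: a toy control-plane check with an audit record.
# All names (AgentPolicy, authorize_and_log, AUDIT_LOG) are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_runners: List[str]      # who can run this agent
    allowed_tools: List[str]        # which tools it may invoke
    allowed_models: List[str]       # which models it may call
    allowed_data_scopes: List[str]  # what data it may touch
    policy_id: str = "default"

AUDIT_LOG: list = []

def authorize_and_log(policy: AgentPolicy, runner: str, tool: str,
                      model: str, data_scope: str) -> bool:
    """Deny-by-default check that records who ran which agent, with which
    tool and model, over which data, under which policy."""
    allowed = (runner in policy.allowed_runners
               and tool in policy.allowed_tools
               and model in policy.allowed_models
               and data_scope in policy.allowed_data_scopes)
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": policy.agent_id,
        "on_behalf_of": runner,
        "tool": tool,
        "model": model,
        "data_scope": data_scope,
        "policy": policy.policy_id,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

# Usage: the combination is allowed only if the policy explicitly permits it.
policy = AgentPolicy("expense-agent", ["j.doe"], ["email.send"],
                     ["gpt-4o"], ["finance.reports"])
assert authorize_and_log(policy, "j.doe", "email.send", "gpt-4o", "finance.reports")
assert not authorize_and_log(policy, "j.doe", "iam.modify", "gpt-4o", "finance.reports")
```

The point of the sketch is that the audit record is written whether the call is allowed or denied, which is what makes the governance answers "obvious, not aspirational."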
[2]
77% of IT managers say their AI agents are out of control - 5 ways to rein in yours
AI agents -- so easy to spin up -- are proliferating out of everyone's control. And that's becoming a problem that may undermine any benefits they are delivering. That's the conclusion of a just-released survey by Rubrik ZeroLabs, which finds that fewer than one in four IT managers (23%) say they have "complete" control over the agents within their organizations. To make matters worse, these agents aren't necessarily delivering the productivity sought. A majority, 81%, report that the agents under their purview require more time in manual auditing and monitoring than they were intended to save via workflow improvements. Security is also less than stellar, the survey adds. Creating AI agents is easy, and the problem is "users often turn off VPNs or otherwise skirt security controls to spin up agents to act as assistants," the report's authors state. The result is a large volume of unsanctioned AI applications, both built internally and launched by vendors. Across the industry, there is concern that agents are starting to get out of hand, with agent sprawl now a pervasive problem. "We are already seeing patterns similar to early cloud adoption, where teams spin up agents independently using different frameworks and vendors," said Kriti Faujdar, senior product manager at Microsoft. "This leads to fragmentation, inconsistent governance, and hidden security gaps." The authors of the ZeroLabs survey found a disconnect between perceived control and operational reality among agents. A large majority of IT managers, 86%, anticipate that agentic proliferation will outpace security guardrails in the next year. More than half (52%) expect this to happen within the next six months. Plus, nearly all respondents indicate they lack the "undo" capabilities necessary to roll back unintended agent actions. With the proliferation of agents across enterprise systems, industry observers worry that such sprawl is becoming too difficult to manage and contain. "Any team with API access can spin up an agent in an afternoon," said Nik Kale, principal engineer with the Coalition for Secure AI. "Multiply that across a large enterprise, and you get hundreds of agents with overlapping permissions, no consistent identity model, and no one who can tell you the full inventory." Agentic observability can be notoriously challenging, and the ZeroLabs authors point to a growing need for telemetry for understanding chains of agentic actions, punctuated by enforcement points for security. Tracking agent viability means answering a set of post-deployment questions identified by the ZeroLabs study's authors -- questions that, the report states, are currently not being answered. As a result, many administrators and their organizations are unable to "define acceptable agentic behavior; audit what resources and tools agents can access; create policies for triggering a human in the loop; or roll back agentic actions." As agents act autonomously, they pose a greater risk than traditional software, said Faujdar. In today's environment, there is a trade-off between speed and governance. "Organizations want to move fast, but without clear guardrails, they risk creating systems that are difficult to trust, audit, or scale. The winners will be those who treat agent management not as an afterthought, but as a first-class discipline."
Keeping agents current is also a vexing challenge -- as their foundation models tend to drift. "The agent you certified in Q1 is behaviorally different by Q3, through no fault of the platform," said Renze Jongman, founder and CEO of Liberty91. "Your governance model has to assume the ground moves." At this point, there are "too many agents operating outside any governance boundary, including the ones teams build themselves," said Kale, who advises keeping the orchestration layer in the agent stack separate from the model and governance layers. "If all three live inside one vendor's platform, you've handed over your agent's brain, its permissions, and its accountability chain in a single contract." Agent oversight, Kale added, "should involve security, architecture, and the business unit that owns the outcomes, not just the team that wants to ship the fastest."
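The ZeroLabs authors' call for telemetry that captures chains of agentic actions -- and for "undo" capabilities to roll them back -- can be made concrete with a small sketch. The example below is illustrative only and assumes nothing about Rubrik's tooling; `ActionEvent`, `record_action`, and `rollback_chain` are hypothetical names for one way to link actions into a chain and walk it backward with compensating actions.

```python
# Illustrative sketch only: telemetry events for chains of agent actions,
# each carrying an optional compensating ("undo") action. Hypothetical names.
import uuid
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ActionEvent:
    action_id: str
    parent_id: Optional[str]             # links actions into a chain
    agent_id: str
    description: str
    undo: Optional[Callable[[], None]]   # compensating action, if one exists

TRAIL: list = []

def record_action(agent_id: str, description: str,
                  parent_id: Optional[str] = None,
                  undo: Optional[Callable[[], None]] = None) -> str:
    event = ActionEvent(str(uuid.uuid4()), parent_id, agent_id, description, undo)
    TRAIL.append(event)
    return event.action_id

def rollback_chain(root_id: str) -> None:
    """Walk the chain depth-first and apply compensating actions, children first."""
    for child in [e for e in TRAIL if e.parent_id == root_id]:
        rollback_chain(child.action_id)
    root = next(e for e in TRAIL if e.action_id == root_id)
    if root.undo:
        root.undo()

# Usage: every tool call records an event; chains can be reconstructed or undone.
root = record_action("ticket-agent", "created ticket #123",
                     undo=lambda: print("deleted ticket #123"))
record_action("ticket-agent", "emailed requester", parent_id=root,
              undo=lambda: print("sent correction email"))
rollback_chain(root)  # undoes the child action first, then the root action
```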
[3]
The AI governance mirage: Why 72% of enterprises don't have the control and security they think they do
Decision makers at 72% of organizations claim to have two or more AI platforms that they identify as their "primary" layer, according to a survey of 40 enterprise companies conducted by VentureBeat last month, revealing real gaps in security and control. For enterprise management and technical leaders, and especially security leaders, these multiple AI platforms extend the attack surfaces of most enterprises at a time when AI-driven attacks have become increasingly potent. The multiple platforms -- which include offerings from hyperscalers or AI labs like Microsoft Azure, Google, OpenAI or Anthropic, or big application companies like Epic, Workday or ServiceNow -- reflect a state of sprawl that has emerged as these big software providers rush to offer their own AI to their enterprise customers. Those customers, in their own rush to scale AI, are finding they aren't building a singular strategy -- in fact they may be building a collection of contradictions.

The strategic paradox: why leading enterprises are building around their vendors

For example, take the strategic paradox faced by Mass General Brigham (MGB) hospital system, which has 90,000 employees and is the largest employer in Massachusetts. The hospital system last year had to shut down an uncontrolled number of internal proofs of concept that had sprouted up as employees had gotten carried away with AI projects, said CTO Nallan "Sri" Sriraman at the VentureBeat AI Impact event in Boston on March 26, which focused on the challenges of scaling AI. Instead, the company decided it was better to wait for the software giants it already uses to deliver on their AI roadmaps. Since these companies have so many resources, and were making AI a top priority themselves, it made no sense for MGB to try to build its own AI layer that would be duplicative, he said. "Why are we building it ourselves?" he asked. "Leverage it." Yet, even then, Sriraman's team has been forced to build workarounds where those companies haven't done enough. For example, MGB has just completed a "full-scaled" custom build around Microsoft's Copilot -- to get essentially everything offered by that tool -- by putting a "skin" around Copilot to handle the safety and data privacy concerns the major model providers haven't yet mastered. Specifically, MGB needed a way for employees to prompt the AI and not have their protected health information (PHI) leaked back to the Copilot LLM provider, OpenAI. The new secure platform, which can support up to 30,000 users, is really the ultimate contradiction: Even though the company has a mandate to leverage the AI provided by the bigger companies, it needs to build around their failures. The contradiction goes even further. These software vendors used by MGB -- which also include Epic, Workday and ServiceNow -- are all now building agents for their AI, all operating differently. So MGB has to invest in building a "control plane that coordinates and orchestrates all of these agents," Sriraman said. "That's where our investment is going to be." He noted that companies like his are "discovering and experimenting as the landscape keeps shifting." The marketplace is "still nascent," he said, which makes decisions difficult.

The "six blind men" problem

Sriraman explained the current vendor landscape with an analogy: "When you ask six blind men to touch an elephant and say, what does this elephant look like?" Sriraman said. "You're gonna get six different answers."
What emerges from the research VentureBeat conducted in the first quarter, along with conversations like the one in Boston, is a situation that we at VentureBeat are calling a "governance mirage." While many enterprises say they have adequate governance, in reality they haven't created clear accountability or specific guardrails, evaluations or security processes to ensure that governance.

The data of disconnect: confidence vs. systematic oversight

The research comes from surveys across January, February and March by VentureBeat of enterprise companies with 100 or more employees, with 40 to 70 qualified respondents per topic area -- covering agentic orchestration, AI security, RAG and governance. The data lacks statistical significance in many areas and should be treated as directional. The research on governance found that a majority, or 56%, of respondents said they are "very confident" that they'd detect a misbehaving AI model, suggesting that most decision-makers believe they have sufficient basic governance at their companies. However, nearly a third of respondents have no systematic mechanism to detect AI misbehavior until it surfaces through users or audits. In a world where telemetry leakage accounts for 34% of GenAI incidents (Wiz), and the global average breach cost has hit $4.4M (IBM 2025 Cost of a Data Breach), finding out after the damage is done is the default for too many companies. Moreover, 43% of respondents say a central team owns AI governance. That sounds reassuring -- until you look at what's happening everywhere else. Twenty-three percent say governance is unclear or actively contested between teams. Twenty percent say each platform team governs independently. Six percent say no one has formally addressed it. The rest said they were unsure who owned it. More telling is the barrier data. When asked about the single biggest obstacle to governing AI across platforms, "no single owner or accountable team" ranked second at 29% -- just behind vendor opacity. Accountability structure and lack of vendor transparency are the two dominant failure modes, and they compound each other: Without a central owner, no one has the mandate to demand transparency from the vendors.

The day-two bill: managing sprawl, creep, and lock-in

The scaling trap: Red Hat's warning

Brian Gracely, Senior Director at Red Hat, who also spoke at the VentureBeat Boston event last month, addressed the infrastructure side of this sprawl, warning that many enterprises are falling into a trap of deceptive initial wins. Gracely noted that the barrier to entry is almost nonexistent at the start, with nearly anyone able to spin up a project using a credit card and an API key. "Day zero is very, very easy," Gracely said. "Day two is when the bill comes due." Red Hat is positioning its software layer (OpenShift AI) as the necessary buffer to prevent enterprises from getting buried in a single provider's proprietary ecosystem. Gracely's point is direct: If your control system is built entirely inside one cloud provider's toolset, you are effectively "renting a cage." The illusion of speed in the early pilot phase often hides a technical debt that becomes obvious the moment you try to move your AI work to a different platform. Gracely illustrated this with a recent example. A senior leader from Red Hat's centralized CTO office spent part of her vacation contributing to an open-source agent project called OpenClaw, which became widely popular in the first quarter.
Within days of her name appearing as a project maintainer, Red Hat was fielding calls from major New York banks. Their problem was immediate: They realized they already had upwards of 10,000 employees bringing "claws" -- agent-based tools -- into their infrastructure with zero centralized oversight. Breaches caused by employees working on these sorts of unapproved technologies are costly. These so-called "shadow AI" incidents cost on average $670K more than standard incidents, according to IBM. Red Hat's Gracely noted that while organizations can try to shut down these unapproved ports, they eventually have to figure out how to make them productive and secure -- a task that requires a serious investment in an orchestration or platform layer.

The dynamic defensive: MassMutual's refusal to bet

While some enterprise companies seek an "AI operating system" that oversees all of their AI technologies and apps, others are simply refusing to sign the check. Sears Merritt, CIO and head of enterprise technology at MassMutual, is managing the governance conundrum by intentionally staying in a state of high-velocity flexibility. "Things are so dynamic, it's hard to know which of the AI vendors will end up on top," Merritt said at the Boston event. For that reason, MassMutual is refusing to enter any long-term contracts with AI vendors. Merritt's strategy of "dynamic defensive" highlights a core finding of our research: Vendor popularity is changing radically month to month. Anthropic, for example, went from 0% in January to nearly 6% in February in the share of respondents naming it as their agent orchestration technology. Again, the sample size was small, at 70 respondents. Still, even if directional, the dynamic landscape suggests picking a "primary" winner today is a fool's errand. The January figure likely reflects survey composition: Respondents represent the broader enterprise market, not the developer community where Anthropic has seen its strongest early traction. Until recently, most organizations had signed up early with leaders like Microsoft and OpenAI as their main orchestration providers, due to their early lead with Copilot. Our finding that Anthropic is just now pushing into enterprise agent orchestration may be a confirmation of the recent excitement around that platform. One possible explanation is that enterprises already using Claude for model inference are now routing through Anthropic's native tooling rather than third-party frameworks -- though the sample is too small to draw firm conclusions.

The rise of "platform creep"

The leading providers are also shifting toward "managed agents," as reflected by Anthropic's recent announcement. This offering suggests possible continued platform creep, whereby providers like OpenAI and Anthropic take over more and more of the AI infrastructure -- most specifically, in this case, the memory of agentic session details. And there the trap is set. Once your session data and orchestration live inside a provider's proprietary database, you aren't just using a model; you are living in its ecosystem. Moreover, persistent agent memory is a prime target for memory poisoning via injected instructions that influence every future interaction. And when that memory lives in a provider's database, you lose your own forensic capability.

The security irony: The fox guarding the hen house

We are seeing this platform creep in our data as well.
The most jarring finding in our Q1 data is what we call the "Security Irony": the fact that the providers most responsible for creating enterprise AI risk are the same ones enterprises are using to manage it. Respondents said the top selection criterion for AI orchestration platforms was "security and permissions generally" (37.1%), beating out other criteria like cost, flexibility, control and ease of development. Yet, the market is choosing convenience over sovereignty. According to our survey, 26% of enterprises in February were using OpenAI as their primary security solution -- the very same provider whose models create the risks they are trying to secure. That trend only seemed to strengthen in March, though, as stated before, we want to be careful: Our sample size is small, and this data should be taken only as directional. It's not clear whether enterprises are choosing OpenAI as a security solution, or just relying on the built-in security features offered by Microsoft Azure (which partnered with OpenAI when it pushed its Copilot solution aggressively in 2024) because customers were already on that platform. Beyond the data, there are anecdotal signs that OpenAI's enterprise position may be shifting. Anthropic's Claude Code drew significant attention among developers early this year alongside the Claude 4.6 model. The subsequent announcement of Mythos, its security-focused model, prompted interest from enterprise security teams given its ability to identify vulnerabilities. OpenAI has also announced a security-focused model, GPT-5.4-Cyber. Our data may also point to a drop in OpenAI's relative position in a few enterprise AI categories. One area was data retrieval, where OpenAI again leads among third-party providers, but we saw an increase in the number of respondents using in-house solutions for retrieval instead -- perhaps a sign that AI models and agents are getting better at natively using tools to call companies' existing databases directly, and that companies are often building this with custom code. However, here again we feel our data is at best directional for now. We are asking the fox to guard the hen house. Hyperscaler security features (like those from OpenAI, Azure, and Google) are winning because they are already integrated into the platforms enterprises are using. But this creates a single-provider dependency. As agents gain the power to modify documents, call APIs and access databases, the "governance mirage" suggests we have control, while the data shows we are simply clicking "I agree" on whatever the hyperscalers offer. The resulting risks, however, include content injection, privilege escalation and data exfiltration.

The path forward: toward a unified control plane

The search for the "Dynatrace for AI"

So, what is the way out? Sriraman argued that the industry desperately needs a "central observability platform" -- a "Dynatrace for AI" -- that provides full end-to-end visibility, including model drift and safety prompting, agent behavior analytics, privilege escalation alerts, and forensic logging. He is currently working with a number of potential providers to deliver on this.

The "swivel chair" warning

Sriraman warned that without a unified control plane, enterprises are at risk of sliding back into a fragmented "swivel chair" world -- reminiscent of the early, inefficient days of Robotic Process Automation (RPA) -- where employees are forced to constantly jump between different siloed AI tools to finish a single workflow.
"We don't want to create a world where you have to switch to do something here and then go back to the platform to do something else," he said. But that desire for a single control plane conflicts with the desire to avoid lock-in. Our data shows the market has settled on the "hybrid control plane." In other words, the most popular situation among our respondents (at 34.3%), was to use model provider-native solutions like Copilot Studio or OpenAI assistants for some workflows, while also running external options like LangGraph or custom orchestration for others. Smaller numbers of companies reported being more dogmatic here, whether that be deliberately removing the model provider from the orchestration layer entirely, relying only on custom orchestration tools, or relying only on the model provider's technology Enterprises trust no single provider enough to give them full control, yet they lack the engineering capacity to build entirely from scratch. The bottom line: The "big red button" Visibility and integration are only half the battle. In a high-stakes industry like healthcare, Sriraman argues that any legitimate control plane must also offer a hard-stop capability. "We need a big red button," he said. "Kill it. We should be able to have that ... without that, don't put anything in the operational setting." In fact, such a kill switch was formally called for by the security community group OWASP as part of a recommended security framework. The "governance mirage" is the belief that you can scale AI without deciding who owns the control and security plane. If you are one of the 72% of organizations claiming multiple "primary" platforms, be careful because you may not have a strategy; you may have a conflict of interest. It suggests that the winner of the war between the AI behemoths -- OpenAI, Anthropic, Google, Microsoft, etc. -- won't necessarily be the one with the best model, but the one that manages to sit above the models and help enterprises enforce a single version of the truth. That may be difficult to achieve, though, given that companies won't want lock-in with a single player. The data suggests enterprises are already resisting that outcome -- and may need to formalize that resistance. Enterprises arguably need to own their control plane with independent security instrumentation, not wait for a vendor to win that role for them.
[4]
Why enterprises need governance frameworks for agentic AI
Enterprise productivity tools are entering a new phase. Instead of simply automating predefined workflows, platforms like Microsoft's emerging Copilot Cowork concept promise something far more ambitious: AI agents capable of executing complex, multi-step tasks across tools such as Microsoft 365. These systems represent a shift from automation to delegation. Instead of defining every step of a process, employees describe an outcome and the agent determines how to achieve it -- sending emails, updating documents, adjusting permissions, or coordinating across applications. The promise is significant. But so are the risks. For enterprise security and governance teams, agentic AI raises a fundamental question: what happens when the system making operational decisions isn't a human or even a traditional piece of software, but an autonomous agent acting on a human's behalf?

The "Check-In With My Human" Problem

Many agent-based systems attempt to mitigate risk with a "human in the loop" approach. When the AI reaches a decision point, it pauses and prompts the user to approve the next step. In theory, this introduces oversight. In practice, it may introduce very little. The "check-in-with-my-human" model is often a UX compromise disguised as a safety feature. Employees who delegated a workflow to an AI agent did so because they were already overloaded. When the system interrupts them with approval prompts, the likely outcome isn't careful review -- it's a quick rubber stamp. We've seen this behavior before. Most users click through cookie consent banners without reading them. The same dynamic will apply to AI check-ins. Meaningful oversight requires the reviewer to understand what the agent did, why it made a decision, and what the downstream consequences might be. That level of scrutiny directly conflicts with the reason the employee delegated the task in the first place. For low-stakes activities, this approach may be sufficient. But the first time an agent executes an irreversible action that no one actually reviewed, organizations will discover just how fragile this safety model is.

When AI Actions Blur Accountability

Agentic AI also challenges one of the core assumptions of enterprise governance frameworks: that actions in a system are clearly attributable to a human user. Tools like Copilot Cowork blur that line and create a major accountability gap. When an AI agent sends an email or modifies SharePoint permissions, it is no longer clear whether the employee, the AI, or the productivity platform is responsible for making that change. Most governance frameworks weren't built for a world where software makes on-the-fly judgment calls autonomously. Audit trails today assume a direct link between a user identity and an action taken within the system. When an AI agent is acting autonomously on behalf of a user, that relationship becomes murky. To manage this risk, organizations should treat enterprise AI agents less like software features and more like digital employees. That means giving them:
- Their own identities
- Explicitly scoped permissions
- Independent logging and monitoring
- Clear audit trails
Without these controls, compliance investigations will quickly become difficult -- or impossible -- to reconstruct.

Agentic AI vs. Traditional Automation

Part of the challenge comes from how fundamentally different agentic AI is from traditional automation. Tools like Power Automate or Zapier operate using deterministic workflows.
Engineers define each step of a process and the logic connecting them. When triggered, the automation executes those steps exactly the same way every time. This model is predictable and auditable. Agentic AI flips that model entirely. Instead of scripting every action, users describe the outcome they want. The AI determines the path dynamically, making decisions along the way based on context. That opens the door to automating work that previously couldn't be automated -- tasks that are messy, ambiguous, or dependent on situational judgment. But it also introduces variability and unpredictability. Two executions of the same request may take different paths depending on context. Organizations shouldn't rush to replace their existing automation pipelines with agentic systems. Traditional automation still excels at repeatable, deterministic tasks. The better approach is to apply agentic AI to workflows that were never practical to automate in the first place.

Where Enterprises Can Use Agentic AI Today

Despite the risks, agentic productivity tools are genuinely exciting. Used thoughtfully, they can reduce friction across knowledge work and free employees from administrative overhead. Today, the safest applications tend to be tasks that are low risk but time consuming, such as:
- Preparing meeting briefings
- Summarizing project updates across teams
- Drafting routine follow-up communications
- Aggregating information from multiple workstreams
These are tasks that often go half-done -- or undone entirely -- because employees simply run out of time. AI agents can fill those gaps effectively. However, organizations should resist the temptation to push agentic systems into high-consequence workflows too quickly. Until the platforms can deliver real observability, enforceable governance, and reliable rollback, organizations need to draw a hard line. Certain domains should be off-limits to agentic AI:
* Anything touching compliance or audit obligations
* Regulatory reporting and filing workflows
* Financial approvals, transactions, or budget authority
* HR and personnel decisions -- hiring, terminations, disciplinary actions
* Access controls, permissions, and data governance
If your AI agent can approve a wire transfer or modify access controls without a human being in the loop, you've essentially created an unaudited decision-maker with admin privileges.

The Guardrails Haven't Caught Up Yet

Agentic AI's potential is enormous. But right now, most organizations are focused on what these tools can do, not how they should be managed. And it's not like we haven't seen this movie before. Every major tech wave of the past three decades (web apps, BYOD, cloud, scripted bots/automation) has followed the same arc: rapid adoption, delayed governance, then painful correction. But the difference with agentic AI is that those were all deterministic tools. Those tools did what they were told. Agentic AI doesn't follow those rules. Tools like Copilot Cowork interpret, decide, and act. Two identical prompts can produce two different outcomes that touch email, permissions, and workflows before a single human reviews them. Combine that with the fastest enterprise adoption curve we've ever seen (driven by Microsoft embedding these capabilities directly into tools people already use), and the blast radius is significantly larger in this case. As agent-based workflows scale, the conversation must shift hard toward observability, accountability, and governance.
Enterprises that treat AI agents like trusted employees, with identity, permissions, and auditability, will be far better positioned than those that treat them as just another productivity feature. The gains to productivity alone mean tools like Copilot Cowork are here to stay. The smart organizations won't wait for something to break before they figure out how to govern them.
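The "digital employee" controls this article lists -- a distinct identity, explicitly scoped permissions, independent logging, and a clear audit trail -- translate naturally into a small data structure. The following is a minimal, hypothetical sketch (`AgentIdentity`, `OFF_LIMITS`, and `perform` are invented names) that also encodes the off-limits domains above as a deny list; it is not any platform's actual permission model.

```python
# Illustrative sketch only: an agent with its own identity, scoped permissions,
# and an independent audit trail. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Domains the article argues should stay off-limits to agents for now.
OFF_LIMITS = {"financial_approvals", "access_controls", "hr_decisions",
              "regulatory_filings", "compliance_records"}

@dataclass
class AgentIdentity:
    agent_id: str                    # distinct from any human identity
    acting_for: str                  # the employee who delegated the work
    allowed_actions: set = field(default_factory=set)
    audit_trail: list = field(default_factory=list)

    def perform(self, action: str, domain: str) -> bool:
        permitted = action in self.allowed_actions and domain not in OFF_LIMITS
        self.audit_trail.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id,
            "acting_for": self.acting_for,
            "action": action,
            "domain": domain,
            "result": "executed" if permitted else "blocked",
        })
        return permitted

# Usage: low-stakes drafting is allowed; touching access controls is blocked,
# and both outcomes land in the agent's own audit trail.
agent = AgentIdentity("briefing-bot-01", "a.rivera",
                      allowed_actions={"draft_summary", "send_email"})
agent.perform("draft_summary", "project_updates")   # True
agent.perform("send_email", "access_controls")      # False, logged as blocked
```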
[5]
Adversaries hijacked AI security tools at 90+ organizations. The next wave has write access to the firewall
Adversaries injected malicious prompts into legitimate AI tools at more than 90 organizations in 2025, stealing credentials and cryptocurrency. Every one of those compromised tools could read data, and none of them could rewrite a firewall rule. The autonomous SOC agents shipping now can. That escalation, from compromised tools that read data to autonomous agents that rewrite infrastructure, has not been exploited in production at scale yet. But the architectural conditions for it are shipping faster than the governance designed to prevent it. A compromised SOC agent can rewrite your firewall rules, modify IAM policies, and quarantine endpoints, all with its own privileged credentials, all through approved API calls that EDR classifies as authorized activity. The adversary never touches the network. The agent does it for them. Cisco announced AgenticOps for Security in February, with autonomous firewall remediation and PCI-DSS compliance capabilities. Ivanti launched Continuous Compliance and the Neurons AI self-service agent last week, with policy enforcement, approval gates and data context validation built into the platform at launch -- a design distinction that matters because the OWASP Agentic Top 10 documents what happens when those controls are absent. "Adversaries exploited legitimate AI tools by injecting malicious prompts that generated unauthorized commands. As innovation accelerates, exploitation follows," CrowdStrike CEO George Kurtz said when releasing the 2026 Global Threat Report. "AI is compressing the time between intent and execution while turning enterprise AI systems into targets," added Adam Meyers, head of counter-adversary operations at CrowdStrike. State-sponsored use of AI in offensive operations surged 89% over the prior year. The broader attack surface is expanding in parallel. Malicious MCP server clones have already intercepted sensitive data in AI workflows by impersonating trusted services. The U.K. National Cyber Security Centre warned that prompt injection attacks against AI applications "may never be totally mitigated." The documented compromises targeted AI tools that could only read and summarize; the autonomous SOC agents shipping now can write, enforce, and remediate.

The governance framework that maps the gap

OWASP's Top 10 for Agentic Applications, released in December 2025 and built with more than 100 security researchers, documents 10 categories of attack against autonomous AI systems. Three categories map directly to what autonomous SOC agents introduce when they ship with write access: Agent Goal Hijacking (ASI01), Tool Misuse (ASI02), and Identity and Privilege Abuse (ASI03). Palo Alto Networks reported an 82:1 machine-to-human identity ratio in the average enterprise -- every autonomous agent added to production extends that gap. The 2026 CISO AI Risk Report from Saviynt and Cybersecurity Insiders (n=235 CISOs) found 47% had already observed AI agents exhibiting unintended behavior, and only 5% felt confident they could contain a compromised agent. A separate Dark Reading poll found that 48% of cybersecurity professionals identify agentic AI as the single most dangerous attack vector. The IEEE-USA submission to NIST stated the problem plainly: "Risk is driven less by the models and is based more on the model's level of autonomy, privilege scope, and the environment of the agent being operationalized."
Eleanor Watson, Senior IEEE Member, warned in the IEEE 2026 survey that "semi-autonomous systems can also drift from intended objectives, requiring oversight and regular audits." Cisco's intent-aware agentic inspection, announced alongside AgenticOps in February 2026, represents an early detection-layer approach to the same gap. The approaches differ: Cisco is adding inspection at the network layer while Ivanti built governance into the platform layer. Both signal the industry sees it coming. The question is whether the controls arrive before the exploits do.

Autonomous agents that ship with governance built in

Security teams are already stretched. Advanced AI models are accelerating the discovery of exploitable vulnerabilities faster than any human team can remediate manually, and the backlog is growing not because teams are failing, but because the volume now exceeds what manual patching cycles can absorb. Ivanti Neurons for Patch Management introduced Continuous Compliance this quarter, an automated enforcement framework that eliminates the gap between scheduled patch deployments and regulatory requirements. The framework identifies out-of-compliance endpoints and deploys patches out-of-band to update devices that missed maintenance windows, with built-in policy enforcement and compliance verification at every step. Ivanti also launched the Neurons AI self-service agent for ITSM, which moves beyond conversational intake to autonomous resolution with built-in guardrails for policy, approvals, and data context. The agent resolves common incidents and service requests from start to finish, reducing manual effort and deflecting tickets. Robert Hanson, Chief Information Officer at Grand Bank, described the decision calculus security leaders across the industry are weighing: "Before exploring the Ivanti Neurons AI self-service agent, our team was spending the bulk of our time handling repetitive requests. As we move toward implementing these capabilities, we expect to automate routine tasks and enable our team to focus more proactively on higher-value initiatives. Over time, this approach should help us reduce operational overhead while delivering faster, more secure service within the guardrails we define, ultimately supporting improvements in service quality and security." His emphasis on operating "within the guardrails we define" points to a broader design principle: speed and governance do not have to be trade-offs. The governance gap is concrete: the Saviynt report found 86% of organizations do not enforce access policies for AI identities, only 19% govern even half of their AI identities with the same controls applied to human users, and 75% of CISOs have discovered unsanctioned AI tools running in production with embedded credentials that nobody monitors. Continuous Compliance and the Neurons AI self-service agent address the patching and ITSM layers. The broader autonomous SOC agent terrain, including firewall remediation, IAM policy modification, and endpoint quarantine, extends beyond what any single platform governs today. The ten-question audit applies to every autonomous tool in the environment, including Ivanti's.

Prescriptive risk matrix for autonomous agent governance

The matrix maps all 10 OWASP Agentic Top 10 risk categories to what ships without governance, the detection gap, the proof case, and the recommended action for autonomous SOC agent deployments.

The 10-question OWASP audit for autonomous agents

Each question maps to one OWASP Agentic Top 10 risk category.
Autonomous platforms that ship with policy enforcement, approval gates, and data context validation will have clear answers to every question. Three or more "I don't know" answers on any tool means that tool's governance has not kept pace with its capabilities.

What the board needs to hear

The board conversation is three sentences. Adversaries compromised AI tools at more than 90 organizations in 2025, according to CrowdStrike's 2026 Global Threat Report. The autonomous tools deploying now have more privilege than the ones that were compromised. The organization has audited every autonomous tool against OWASP's 10 risk categories and confirmed that the governance controls are in place. If that third sentence is not true, it needs to be true before the next autonomous agent ships to production. Run the 10-question audit against every agent with write access to production infrastructure within the next 30 days. Every autonomous platform shipping to production should be held to the same standard -- policy enforcement, approval gates, and data context validation built in at launch, not retrofitted after the first incident. The audit surfaces which tools have done that work and which have not.
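The scoring rule behind the audit -- three or more "I don't know" answers means a tool's governance has not kept pace -- is easy to operationalize. The sketch below is illustrative only: it names the three OWASP categories cited in this article (ASI01-ASI03), uses placeholders for the remaining seven, and does not attempt to reproduce the ten questions themselves.

```python
# Illustrative sketch only: tallying "I don't know" answers across the ten
# OWASP Agentic Top 10 categories for each tool. Placeholder category IDs
# stand in for the categories this article does not name.
CATEGORIES = (["ASI01 Agent Goal Hijacking", "ASI02 Tool Misuse",
               "ASI03 Identity and Privilege Abuse"]
              + [f"ASI{n:02d} (placeholder)" for n in range(4, 11)])

def audit_tool(tool_name: str, answers: dict) -> bool:
    """Flag a tool whose governance has not kept pace with its capabilities:
    three or more 'I don't know' answers across the ten categories."""
    unknowns = [cat for cat in CATEGORIES
                if answers.get(cat, "I don't know") == "I don't know"]
    flagged = len(unknowns) >= 3
    status = "GOVERNANCE BEHIND CAPABILITIES" if flagged else "ok"
    print(f"{tool_name}: {len(unknowns)} unknown answers -> {status}")
    return flagged

# Usage: run against every agent with write access; unanswered categories
# default to "I don't know" and count against the tool.
audit_tool("soc-remediation-agent", {
    "ASI01 Agent Goal Hijacking": "documented",
    "ASI02 Tool Misuse": "I don't know",
    "ASI03 Identity and Privilege Abuse": "I don't know",
})
```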
[6]
Most Companies Are Scaling AI Faster Than They Can Control It. Here's Why That's A Problem
AI doesn't create new problems; it exposes existing governance gaps at unprecedented speed and scale.

Back in 2013, Target made headlines globally when a cyberattack exposed the payment card information of 40 million of its customers, along with the personal data of 70 million others. At the time, the breach was widely described as a cybersecurity failure, but it was more than that. It was also, by and large, a governance problem, one that mirrors what we're seeing today as organizations look to scale through AI. With no federal framework in place to guide how AI is governed in practice, organizations are defining their own guardrails to support responsible implementation and build trust. But the absence of regulation doesn't mean the absence of risk. Organizations deploying AI today are still operating within existing legal structures that govern areas like data privacy, consumer protection, and employment practices, to name a few. If an AI-assisted decision exposes personal data or introduces a material error, the organization remains accountable. AI governance can't afford to wait for regulation to catch up. The Target breach and the years that followed marked a watershed period that elevated cybersecurity to a board-level risk. During that time, I was brought in to lead information security for an operator of critical internet infrastructure. Like many in that moment, I was forced to examine where governance hadn't kept pace with operations. As someone who's spent her entire career in technology, I've come to know one constant: Technology moves, and governance rarely keeps up until it has to. Enterprise resource planning, or ERP, implementations, for example, have been widely adopted for decades and rarely fail because of the technology itself. The challenge is getting an organization to align on a single version of the truth across data, processes, and systems. AI is that same forcing function, one generation later. Organizations that haven't resolved those underlying issues are about to encounter them again with AI adoption, but at a much higher speed. Here are three considerations for every organization before deploying AI at scale. One of the greatest challenges in governance isn't access to information; it's a lack of shared understanding of its impact. Over the course of my career, I've learned to translate information across legal, security and operations, and have experienced how differently each function interprets risk. A technical risk assessment may resonate clearly within a security team, for example, but it doesn't always translate effectively in a boardroom or in an operational review. In the months following the Target breach, the risks associated with third-party vendor access weren't broadly understood at the executive level. Making the case for investing in the right security protocols to manage that risk required translating a technical issue into business terms that leaders could evaluate and act on. That same dynamic is playing out with AI. According to IBM's 2025 CEO Study, 61 percent of CEOs say they aren't fully prepared to manage the complexity they face. The challenge isn't awareness, it's alignment. Different parts of the organization understand different pieces of the risk, but often no one is translating how those risks connect. Effective governance depends on that translation.
When it's missing, risks are more likely to be acknowledged than acted on, and governance becomes something the organization observes rather than something it actively practices. Not long ago, I served as a data protection officer, personally accountable for the organization's data protection posture. That kind of accountability changes the questions you ask, the risks you surface, and the decisions you're willing to stand behind. In that role, I learned that monitoring tells you what a system is doing, but responsible oversight is the organizational ability to understand it, evaluate it, and change it when necessary. Many organizations are still trying to move AI from pilot to production. Far fewer have established clear ownership over who is accountable for how those systems behave. According to McKinsey's 2025 State of AI report, while most organizations are investing in AI, clear ownership and governance structures are still developing. Every organization implementing AI should be able to answer who is accountable for how each system behaves. If that answer isn't clear, the governance structure isn't complete. Over the course of my career, I've led teams with a wide range of technical abilities, but what consistently sets the strongest ones apart is their level of curiosity combined with their ability to think critically. In the context of AI, preventing flawed or biased data from influencing outcomes begins at the point of data collection, in the decisions about what to collect, what to measure and what to count. Curiosity, combined with the confidence to question those decisions when something seems off, is often what allows organizations to identify and close governance gaps before they scale into larger issues. Having worked in highly-regulated environments, I'm acutely aware that governance frameworks provide structure, but they only work when they're supported by behaviors that reinforce them. Human curiosity remains one of the most powerful assets a strong governance system has, and it should never be underestimated. The lesson from 2013 wasn't simply about a breach; it was about visibility. Target had contracts, relationships and controls in place, but its governance model hadn't kept pace with how the business actually operated. For those of us who've spent our careers in technology, this pattern is familiar. Technology rarely fails. What it reveals are the inconsistencies, assumptions, and governance gaps that were already there. The real question isn't whether your AI works, it's whether your organization is prepared for what it exposes.
[7]
How to Manage AI Agent Sprawl: A Six-Step Framework by Gartner
Gartner, Inc. has identified six steps to help organizations reduce the risks of AI agent sprawl. Gartner predicts that by 2028, an average global Fortune 500 enterprise will have over 150,000 agents in use, up from fewer than 15 in 2025, generating significant agent sprawl, IT complexity and management challenges. "As CIOs and IT leaders see an explosion of AI agents across their organizations, many are contending with an ungoverned sprawl of agents that expose their organizations to a range of risks, including misinformation, oversharing and data loss," said Max Goss, Sr. Director Analyst at Gartner. "Many organizations resort to blocking or restricting the use of AI agents, but this is not a long-term solution. If employees are unable to work in the sanctioned tools, they will likely go around the organization's controls and start using shadow AI, which presents far greater risks. Organizations need to find a balance where they can govern agents and manage sprawl, but also safely empower employees to innovate with these tools." Gartner identified six steps to help CIOs and IT leaders establish governance and guardrails to reduce the risks of agent sprawl.
* Establish agent governance and policies: Set clear rules for when and how agents are built, who can create and share them, and what connectors are permitted.
* Build centralized agent inventory: Organizations can use AI trust, risk, and security management (AI TRiSM) tools to help discover and categorize agents across applications, both from sanctioned tools and from shadow AI solutions. Once organizations have an agent inventory, they can start to build adaptive controls to enforce the right policies based on the level of risk the agent presents (a minimal example of such an inventory record is sketched after this list).
* Define agent identity, permissions and life cycle model: Manage the agent identity, permission model and access controls, review, and retire redundant agents to prevent uncontrolled sprawl.
* Develop AI information governance: Govern what information the AI tool or agent has access to and ensure that there is a process in place to keep the data current, manage its permissions to prevent oversharing, and archive the data when it is obsolete.
* Monitor and remediate agent behavior: Establish ongoing visibility into agent usage, ensure policy compliance, detect anomalous behavior, and correct agents that exceed their intended scope or risk tolerance.
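To make the inventory and identity steps concrete, here is a minimal, hypothetical sketch of what a centralized inventory record might capture. `AgentRecord` and `stale_agents` are invented names; real AI TRiSM tools would discover and populate such records automatically, and the fields shown are only one plausible set.

```python
# Illustrative sketch only: a minimal centralized agent inventory record and a
# helper that surfaces active agents overdue for review. Hypothetical names.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                 # accountable team or person
    vendor_or_framework: str   # sanctioned platform vs. shadow AI
    connectors: list           # systems and data the agent can reach
    risk_tier: str             # drives adaptive controls: "low" / "medium" / "high"
    lifecycle_state: str       # "active", "under_review", "retired"
    last_reviewed: date

def stale_agents(inventory: list, max_age_days: int = 90) -> list:
    """Return active agents that are overdue for review, a common precursor
    to uncontrolled sprawl."""
    today = date.today()
    return [a for a in inventory
            if a.lifecycle_state == "active"
            and (today - a.last_reviewed).days > max_age_days]

# Usage: the inventory becomes the anchor for policies, reviews, and retirement.
inventory = [
    AgentRecord("meeting-briefer", "it-productivity", "Copilot Studio",
                ["calendar", "email"], "low", "active", date(2026, 1, 10)),
    AgentRecord("vendor-sync", "unknown", "shadow AI", ["erp"], "high",
                "active", date(2025, 9, 1)),
]
for agent in stale_agents(inventory):
    print(f"review overdue: {agent.agent_id} (owner: {agent.owner})")
```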
A critical governance gap is emerging as enterprises rush to deploy agentic AI. While 74% of companies plan to implement autonomous AI agents within two years, only 21% report having mature AI governance frameworks in place. Meanwhile, 77% of IT managers admit they lack complete control over agents already operating in their organizations, creating significant AI security risks.
A troubling pattern is emerging across the enterprise landscape: autonomous AI agents are proliferating faster than organizations can govern them. According to the Deloitte AI Institute 2026 State of AI report, nearly 74% of companies plan to deploy agentic AI within two years [1]. Yet only 21% report having a mature model for AI agent governance, exposing a dangerous disconnect between ambition and preparedness. A separate survey by Rubrik ZeroLabs reveals that just 23% of IT managers say they have complete control over the agents within their organizations [2]. The remaining 77% are essentially operating in the dark, unable to track what their agents are doing, on whose behalf, or under what policies.
The AI governance gap extends beyond mere oversight challenges. VentureBeat research found that 72% of enterprises claim to have two or more AI platforms they identify as their "primary" layer, reflecting a state of sprawl that has emerged as major software providers rush to offer their own AI to enterprise customers [3]. These multiple platforms from vendors like Microsoft Azure, Google, OpenAI, Anthropic, Epic, Workday, and ServiceNow extend the attack surface of most enterprises at a time when AI-driven attacks have become increasingly potent. What's needed is a robust AI control plane that governs, observes, and secures how AI agents, along with their tools and models, operate across the enterprise.
The ease of creating AI agents has become a double-edged sword for enterprises. Users often turn off VPNs or skirt security controls to spin up agents as assistants, resulting in a large volume of unsanctioned AI applications [2]. Kriti Faujdar, senior product manager at Microsoft, warns that "we are already seeing patterns similar to early cloud adoption, where teams spin up agents independently using different frameworks and vendors. This leads to fragmentation, inconsistent governance, and hidden security gaps." The problem is accelerating: 86% of IT managers anticipate that agentic proliferation will outpace security guardrails in the next year, with 52% expecting this to happen within the next six months [2].

Agent management strategies remain woefully inadequate. A majority of IT managers, 81%, report that the agents under their purview require more time in manual auditing and monitoring than they were intended to save via workflow improvements [2]. Nearly all respondents indicate they lack the "undo" capabilities necessary to roll back unintended agent actions. Nik Kale, principal engineer with the Coalition for Secure AI, notes that "any team with API access can spin up an agent in an afternoon. Multiply that across a large enterprise, and you get hundreds of agents with overlapping permissions, no consistent identity model, and no one who can tell you the full inventory."

Without a true control plane, enterprises lack the ability to scale agents autonomously and instead have unmanaged execution with significant risk [1]. Andrew Rafla, principal at Deloitte Cyber Practice, defines a control plane as "the shared, centralized layer governing who can run which agents, with which permissions, under which policies, and using which models and tools." Organizations must be able to answer what an agent did, on whose behalf, using what data, under what policy, and whether they can reproduce or stop it. Without these answers, administrators cannot define acceptable agentic behavior, audit what resources and tools agents can access, create policies for triggering a human-in-the-loop, or roll back agentic actions [2].
The accountability challenge becomes even more complex as AI actions blur traditional attribution. When an AI agent sends an email or modifies SharePoint permissions, it's no longer clear whether the employee, the AI, or the productivity platform is responsible [4]. Most governance frameworks weren't built for a world where software makes on-the-fly judgment calls autonomously. Audit trails today assume a direct link between a user identity and an action taken within the system, but when an AI agent acts autonomously on behalf of a user, that relationship becomes murky. Organizations should treat enterprise AI agents less like software features and more like digital employees, giving them their own identities, explicitly scoped permissions, independent logging and monitoring, and clear audit trails.

The AI security risks are evolving from theoretical concerns to active threats. Adversaries injected malicious prompts into legitimate AI tools at more than 90 organizations in 2025, stealing credentials and cryptocurrency [5]. Every one of those compromised tools could read data, but none could rewrite a firewall rule. The autonomous SOC agents shipping now can. A compromised SOC agent can rewrite firewall rules, modify IAM policies, and quarantine endpoints, all with its own privileged credentials, all through approved API calls that EDR classifies as authorized activity. The adversary never touches the network; the agent does it for them.
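The approval-gate idea that runs through these sources -- policies that trigger a human in the loop before an agent's high-impact writes -- can be sketched in a few lines. The example below is illustrative only (`HIGH_IMPACT`, `requires_approval`, and `execute_with_gate` are invented names) and is not any vendor's enforcement layer.

```python
# Illustrative sketch only: hold an agent's high-impact write actions until a
# named human approves them, while read-style actions pass straight through.
from typing import Optional

HIGH_IMPACT = {"firewall.rewrite_rule", "iam.modify_policy", "endpoint.quarantine"}

def requires_approval(action: str) -> bool:
    """Policy trigger: only actions on the high-impact list need a human."""
    return action in HIGH_IMPACT

def execute_with_gate(action: str, approver: Optional[str] = None) -> str:
    if requires_approval(action) and approver is None:
        return f"HELD for approval: {action}"
    suffix = f" (approved by {approver})" if approver else ""
    return f"EXECUTED: {action}{suffix}"

# Usage: the same agent request behaves differently depending on its impact.
print(execute_with_gate("logs.summarize"))                       # runs directly
print(execute_with_gate("firewall.rewrite_rule"))                # held
print(execute_with_gate("firewall.rewrite_rule", "sec-oncall"))  # runs once approved
```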

Executives are most concerned with data privacy and security at 73%, followed by legal, intellectual property, and regulatory compliance at 50%, and governance capabilities and oversight at 46% [1]. The 2026 CISO AI Risk Report from Saviynt and Cybersecurity Insiders found that 47% of CISOs had already observed AI agents exhibiting unintended behavior, and only 5% felt confident they could contain a compromised agent [5]. Vulnerabilities like prompt injection attacks against AI applications "may never be totally mitigated," according to the U.K. National Cyber Security Centre. Palo Alto Networks reported an 82:1 machine-to-human identity ratio in the average enterprise, with non-human identities outpacing human identities, a trend that will explode with agentic AI.
The VentureBeat research reveals what it calls a "governance mirage": while 56% of respondents said they are "very confident" they'd detect a misbehaving AI model, nearly a third have no systematic mechanism to detect AI misbehavior until it surfaces through users or audits [3]. In a world where telemetry leakage accounts for 34% of GenAI incidents and the global average breach cost has hit $4.4 million, finding out after the damage is done is the default for too many companies. Governance must make answers obvious, not aspirational, turning AI pilots into production use cases and serving as the bridge that lets companies move from impressive experiments to safe, repeatable, enterprise-wide automation [1].

The challenge is compounded by the reality that enterprises may not even realize they are treating agents within their environment as first-class citizens with the keys to the kingdom, creating looming blind spots and potential points of exposure [1]. Mass General Brigham hospital system, with 90,000 employees, had to shut down an uncontrolled number of internal proofs of concept that had sprouted up as employees got carried away with AI projects [3]. The organization decided to wait for software giants to deliver on their AI roadmaps, but even then had to build a "skin" around Microsoft Copilot to handle safety and data privacy concerns, preventing protected health information from leaking back to OpenAI. It is now investing in a control plane that coordinates and orchestrates all of these agents from different vendors.

Renze Jongman, founder and CEO of Liberty91, highlights another critical challenge: "The agent you certified in Q1 is behaviorally different by Q3, through no fault of the platform. Your governance model has to assume the ground moves" [2]. This model drift means that AI governance frameworks must be dynamic rather than static. Kale advises keeping the orchestration layer in the agent stack separate from the model and governance layers, warning that "if all three live inside one vendor's platform, you've handed over your agent's brain, its permissions, and its accountability chain in a single contract."

Agentic observability remains notoriously challenging, with a growing need for telemetry to understand chains of agentic actions, punctuated by enforcement points for security [2]. Without governance, agent deployments don't fail safely; they fail unpredictably and at scale [1]. The winners will be those who treat agent management not as an afterthought, but as a first-class discipline, with oversight involving security, architecture, and the business unit that owns the outcomes [2].

Summarized by Navi