77% of IT managers say their AI agents are out of control as governance gap widens

A critical governance gap is emerging as enterprises rush to deploy agentic AI. While 74% of companies plan to implement autonomous AI agents within two years, only 21% report having mature AI governance frameworks in place. Meanwhile, 77% of IT managers admit they lack complete control over agents already operating in their organizations, creating significant AI security risks.

AI Agent Governance Crisis Unfolds Across Enterprises

A troubling pattern is emerging across the enterprise landscape: autonomous AI agents are proliferating faster than organizations can govern them. According to the Deloitte AI Institute 2026 State of AI report, nearly 74% of companies plan to deploy agentic AI within two years [1]. Yet only 21% report having a mature model for AI agent governance, exposing a dangerous disconnect between ambition and preparedness. A separate survey by Rubrik Zero Labs reveals that just 23% of IT managers say they have complete control over the agents within their organizations [2]. The remaining 77% are essentially operating in the dark, unable to track what their agents are doing, on whose behalf, or under what policies.

Source: Entrepreneur

The AI governance gap extends beyond mere oversight challenges. VentureBeat research found that 72% of enterprises claim to have two or more AI platforms they identify as their "primary" layer, reflecting a state of sprawl that has emerged as major software providers rush to offer their own AI to enterprise customers. These multiple platforms from vendors like Microsoft Azure, Google, OpenAI, Anthropic, Epic, Workday, and ServiceNow extend the attack surface of most enterprises at a time when AI-driven attacks have become increasingly potent. What's needed is a robust AI control plane that governs, observes, and secures how AI agents, along with their tools and models, operate across the enterprise.

Agent Sprawl Outpaces Security Guardrails

The ease of creating AI agents has become a double-edged sword for enterprises. Users often turn off VPNs or skirt security controls to spin up agents as assistants, resulting in a large volume of unsanctioned AI applications [2]. Kriti Faujdar, senior product manager at Microsoft, warns that "we are already seeing patterns similar to early cloud adoption, where teams spin up agents independently using different frameworks and vendors. This leads to fragmentation, inconsistent governance, and hidden security gaps." The problem is accelerating: 86% of IT managers anticipate that agentic proliferation will outpace security guardrails in the next year, with 52% expecting this to happen within the next six months [2].

Agent management strategies remain woefully inadequate. Fully 81% of IT managers report that the agents under their purview consume more time in manual auditing and monitoring than they save through the workflow improvements they were meant to deliver [2]. Nearly all respondents indicate they lack the "undo" capabilities necessary to roll back unintended agent actions. Nik Kale, principal engineer with the Coalition for Secure AI, notes that "any team with API access can spin up an agent in an afternoon. Multiply that across a large enterprise, and you get hundreds of agents with overlapping permissions, no consistent identity model, and no one who can tell you the full inventory."
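The missing "undo" is a concrete engineering gap. One minimal pattern, sketched below as a hypothetical illustration rather than any vendor's API, is to pair every state-changing agent action with a compensating action and journal it before execution, so an operator can roll an agent back after the fact:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class ReversibleAction:
    """An agent action paired with a compensating 'undo' step."""
    description: str
    execute: Callable[[], None]
    compensate: Callable[[], None]   # how to reverse the side effect

@dataclass
class ActionJournal:
    """Append-only journal so an operator can roll back an agent's actions."""
    entries: list = field(default_factory=list)

    def run(self, agent_id: str, action: ReversibleAction) -> None:
        # Record the compensating step *before* executing, so a crash
        # mid-action still leaves a rollback path.
        self.entries.append((datetime.now(timezone.utc), agent_id, action))
        action.execute()

    def rollback(self, agent_id: str) -> None:
        # Undo this agent's actions in reverse (LIFO) order.
        for _, aid, action in reversed(self.entries):
            if aid == agent_id:
                action.compensate()
```

The key design choice is recording the compensating step before the action runs; journaling after the fact leaves a window where a failed or interrupted action has no rollback path.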

The Control Plane Imperative for Autonomous AI Agents

Without a true control plane, enterprises lack the ability to scale agents autonomously and instead have unmanaged execution with significant risk [1]. Andrew Rafla, principal at Deloitte Cyber Practice, defines a control plane as "the shared, centralized layer governing who can run which agents, with which permissions, under which policies, and using which models and tools." Organizations must be able to answer what an agent did, on whose behalf, using what data, under what policy, and whether they can reproduce or stop it. Without these answers, administrators cannot define acceptable agentic behavior, audit what resources and tools agents can access, create policies for triggering a human-in-the-loop, or roll back agentic actions [2].
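As a rough illustration of the policy gate Rafla describes, a control plane might evaluate every agent action against a per-agent policy and log the delegating user with each decision. This is a minimal sketch with hypothetical names, not a reference implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Hypothetical per-agent policy record held by a control plane."""
    agent_id: str
    allowed_tools: frozenset          # e.g. {"ticketing", "email"}
    allowed_scopes: frozenset         # e.g. {"read:tickets"}
    human_approval_scopes: frozenset  # scopes gated by a person

def authorize(policy: AgentPolicy, tool: str, scope: str,
              on_behalf_of: str) -> str:
    """Decide 'allow', 'deny', or 'needs_human_approval' for one action."""
    if tool not in policy.allowed_tools or scope not in policy.allowed_scopes:
        return "deny"
    decision = ("needs_human_approval"
                if scope in policy.human_approval_scopes else "allow")
    # Every decision is logged with the delegating user, so
    # "what did the agent do, on whose behalf?" stays answerable.
    print(f"audit: {policy.agent_id} {decision} {tool}:{scope} "
          f"on behalf of {on_behalf_of}")
    return decision
```

In this shape, human-in-the-loop is just another policy outcome rather than a bolt-on, which is what lets administrators tune it per scope instead of per agent.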

Source: MIT Tech Review

The accountability challenge becomes even more complex as AI actions blur traditional attribution. When an AI agent sends an email or modifies SharePoint permissions, it's no longer clear whether the employee, the AI, or the productivity platform is responsible [4]. Most governance frameworks weren't built for a world where software makes on-the-fly judgment calls autonomously. Audit trails today assume a direct link between a user identity and an action taken within the system, but when an AI agent acts autonomously on behalf of a user, that relationship becomes murky. Organizations should treat enterprise AI agents less like software features and more like digital employees, giving them their own identities, explicitly scoped permissions, independent logging and monitoring, and clear audit trails.
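One way to keep that attribution chain intact is to make every audit event carry two identities: the agent that acted and the person it acted for. A minimal sketch, with hypothetical field names:

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(agent_id: str, delegating_user: str, action: str,
                 resource: str, policy_id: str) -> str:
    """Record both the agent identity and the human principal it acted
    for, so 'who is responsible?' stays answerable after the fact."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"type": "agent", "id": agent_id},
        "on_behalf_of": {"type": "user", "id": delegating_user},
        "action": action,          # e.g. "sharepoint.permissions.modify"
        "resource": resource,
        "policy_id": policy_id,    # which policy authorized the action
    })
```

Splitting "actor" from "on_behalf_of" is what distinguishes a digital-employee model from today's audit trails, where the delegating user and the acting identity collapse into a single field.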

AI Security Risks Escalate with Write Access

The AI security risks are evolving from theoretical concerns to active threats. Adversaries injected malicious prompts into legitimate AI tools at more than 90 organizations in 2025, stealing credentials and cryptocurrency. Every one of those compromised tools could read data, but none could rewrite a firewall rule. The autonomous SOC agents shipping now can. A compromised SOC agent can rewrite firewall rules, modify IAM policies, and quarantine endpoints, all with its own privileged credentials, all through approved API calls that EDR classifies as authorized activity. The adversary never touches the network; the agent does it for them.
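Because the agent's own credentials make those write calls look authorized, one mitigation is an enforcement point in front of the credential that treats writes as deny-by-default. The following is a hypothetical sketch of that idea, not a description of any EDR or SOC product:

```python
READ_ONLY_METHODS = {"GET", "HEAD"}

def guard_api_call(method: str, endpoint: str,
                   approved_changes: set) -> bool:
    """Deny-by-default gate in front of an agent's API credentials:
    reads pass, writes must match a pre-approved change record."""
    if method in READ_ONLY_METHODS:
        return True
    return (method, endpoint) in approved_changes

# A firewall rewrite the agent was never approved for is blocked,
# even though its credential would technically permit the call.
assert guard_api_call("GET", "/firewall/rules", set()) is True
assert guard_api_call("PUT", "/firewall/rules", set()) is False
```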

Source: VentureBeat

Executives are most concerned with data privacy and security (73%), followed by legal, intellectual property, and regulatory compliance (50%), and governance capabilities and oversight (46%) [1]. The 2026 CISO AI Risk Report from Saviynt and Cybersecurity Insiders found that 47% had already observed AI agents exhibiting unintended behavior, and only 5% felt confident they could contain a compromised agent. Vulnerabilities like prompt injection attacks against AI applications "may never be totally mitigated," according to the U.K. National Cyber Security Centre. Palo Alto Networks reported an 82:1 machine-to-human identity ratio in the average enterprise, a gap that will only widen with agentic AI.

Building AI Governance Frameworks That Actually Work

The VentureBeat research reveals what many are calling a "governance mirage": while 56% of respondents said they are "very confident" they'd detect a misbehaving AI model, nearly a third have no systematic mechanism to detect AI misbehavior until it surfaces through users or audits. In a world where telemetry leakage accounts for 34% of GenAI incidents and the global average breach cost has hit $4.4 million, finding out after the damage is done is the default for too many companies. Governance must make answers obvious, not aspirational, turning AI pilots into production use cases and serving as the bridge that lets companies move from impressive experiments to safe, repeatable, enterprise-wide automation [1].

The challenge is compounded by the reality that enterprises may not even realize they are treating agents within their environment as first-class citizens with the keys to the kingdom, creating looming blind spots and potential points of exposure [1]. The Mass General Brigham hospital system, with 90,000 employees, had to shut down an uncontrolled number of internal proofs of concept that had sprouted up as employees got carried away with AI projects. The organization decided to wait for software giants to deliver on their AI roadmaps, but even then had to build a "skin" around Microsoft Copilot to handle safety and data privacy concerns, preventing protected health information from leaking back to OpenAI. It is now investing in a control plane that coordinates and orchestrates all of these agents from different vendors.

The Path Forward: Orchestration and Observability

Renze Jongman, founder and CEO of Liberty91, highlights another critical challenge: "The agent you certified in Q1 is behaviorally different by Q3, through no fault of the platform. Your governance model has to assume the ground moves" [2]. This model drift means that AI governance frameworks must be dynamic rather than static. Kale advises keeping the orchestration layer in the agent stack separate from the model and governance layers, warning that "if all three live inside one vendor's platform, you've handed over your agent's brain, its permissions, and its accountability chain in a single contract."
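To make Kale's separation concrete, one hypothetical way to structure the stack is to define the model and governance layers as interfaces the orchestrator depends on, so any one of the three can be swapped without surrendering the other two. A sketch under those assumptions, with illustrative names:

```python
from typing import Protocol

class ModelLayer(Protocol):
    def complete(self, prompt: str) -> str: ...

class GovernanceLayer(Protocol):
    def authorize(self, agent_id: str, tool: str) -> bool: ...
    def audit(self, agent_id: str, event: dict) -> None: ...

class Orchestrator:
    """Hypothetical orchestration layer: it owns control flow only,
    while model and governance are swappable dependencies rather
    than pieces of a single vendor's platform."""
    def __init__(self, model: ModelLayer, governance: GovernanceLayer):
        self.model = model
        self.governance = governance

    def step(self, agent_id: str, prompt: str, tool: str) -> str | None:
        # The governance layer, not the orchestrator, decides permissions...
        if not self.governance.authorize(agent_id, tool):
            return None
        result = self.model.complete(prompt)
        # ...and keeps the accountability chain independent of both.
        self.governance.audit(agent_id, {"tool": tool, "chars": len(result)})
        return result
```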

Agentic observability remains notoriously challenging, with a growing need for telemetry to understand chains of agentic actions, punctuated by enforcement points for security [2]. Without governance, agent deployments don't fail safely; they fail unpredictably and at scale [1]. The winners will be those who treat agent management not as an afterthought, but as a first-class discipline, with oversight involving security, architecture, and the business unit that owns the outcomes [2].
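A minimal sketch of what "telemetry for chains of agentic actions" can mean in practice, loosely modeled on distributed-tracing spans (all field names are illustrative assumptions):

```python
import uuid

def agent_span(parent: dict | None, agent_id: str, action: str) -> dict:
    """Emit a telemetry span linked to its parent, so a whole chain of
    delegated agent actions can be reconstructed after the fact."""
    return {
        "trace_id": parent["trace_id"] if parent else str(uuid.uuid4()),
        "span_id": str(uuid.uuid4()),
        "parent_span_id": parent["span_id"] if parent else None,
        "agent_id": agent_id,
        "action": action,
    }

# A planner agent delegates to a retrieval agent; both actions share
# one trace_id, so the full chain is queryable in one place.
root = agent_span(None, "planner-agent", "plan_task")
child = agent_span(root, "retrieval-agent", "search_sharepoint")
assert child["trace_id"] == root["trace_id"]
```

Because every span names its parent, an enforcement point can be placed at span creation, inspecting or blocking a child action before it runs rather than reconstructing the chain only during post-incident forensics.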
