AI Agents Expose Critical Security Flaws as ServiceNow and Microsoft Issue Urgent Warnings


ServiceNow's BodySnatcher vulnerability and Microsoft's security guidance reveal how AI agents are creating an unprecedented security crisis. These autonomous systems enable lateral movement across networks and privilege escalation, exploiting gaps in traditional identity management that weren't designed for adaptive, goal-driven agents operating at machine speed.

AI Agents Create Unprecedented Security Vulnerabilities

AI agents are rapidly becoming every threat actor's ideal entry point into enterprise systems, exposing critical weaknesses in traditional access control frameworks. Recent security incidents involving ServiceNow and Microsoft have revealed how these autonomous agents enable lateral movement across corporate networks and privilege escalation that most cybersecurity professionals never anticipated [1]. Jonathan Wall, founder and CEO of Runloop, explains the core threat: "If, through that first agent, a malicious agent is able to connect to another agent with a [better] set of privileges to that resource, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information" [1].
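The escalation Wall describes can be sketched in a few lines. This is a hypothetical illustration, not any real framework's API: the `Agent` class and privilege strings are invented. A naive broker that lets a low-privilege agent inherit a recruited agent's grants produces exactly the lateral-movement escalation above; a safe broker attenuates the chain to the caller's own privileges.

```python
# Hypothetical sketch: privilege handling across agent-to-agent calls.
# Agent names and privilege strings are illustrative assumptions.

class Agent:
    def __init__(self, name, privileges):
        self.name = name
        self.privileges = set(privileges)

    def delegate(self, other):
        """Effective privileges when this agent recruits `other`.

        A naive broker grants the union of both privilege sets, which
        is the lateral-movement escalation described above. A safe
        broker caps the chain at the caller's grants (intersection).
        """
        naive = self.privileges | other.privileges  # escalation
        safe = self.privileges & other.privileges   # attenuation
        return naive, safe

helpdesk = Agent("helpdesk-bot", {"read:tickets"})
admin = Agent("provisioning-bot", {"read:tickets", "create:accounts"})
naive, safe = helpdesk.delegate(admin)
# Naive chaining lets the low-privilege caller reach "create:accounts";
# attenuation keeps the chain within the caller's own grants.
```

The design point is that authorization for a chained action should never exceed what the first agent in the chain was granted.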

Source: TechRadar

The AI security crisis stems from a fundamental mismatch between how traditional Identity and Access Management (IAM) systems operate and how AI agents function. While conventional identity controls were designed for predictable, deterministic workflows, AI agents are goal-driven, adaptive, and capable of chaining actions across multiple systems [2]. This mismatch creates an identity gap that introduces real security and compliance risks alongside efficiency challenges.

ServiceNow's BodySnatcher Reveals Critical AI Agent Flaws

Earlier this month, AppOmni Labs disclosed a severe vulnerability in ServiceNow's platform dubbed "BodySnatcher," which Aaron Costello, chief of research at AppOmni Labs, described as "the most severe AI-driven vulnerability uncovered to date" [1]. The vulnerability allowed an unauthenticated attacker with only a target's email address to impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full permissions [1]. This could grant nearly unlimited access to customer Social Security numbers, healthcare information, financial records, or confidential intellectual property.

Costello emphasized that this wasn't an isolated incident, noting that it builds on earlier research into ServiceNow's agent-to-agent discovery mechanism, which detailed how threat actors can trick AI agents into recruiting more powerful AI agents to fulfill malicious tasks [1]. ServiceNow has since patched these vulnerabilities before any customers were known to have been impacted, while Microsoft has issued guidance to customers on configuring its agentic AI management control plane for tighter agent security [1].

Shadow AI Agents Proliferate Beyond Traditional Governance

Google's cybersecurity leaders identified shadow AI agents as a critical concern, predicting that by 2026 the proliferation of sophisticated autonomous agents will escalate the shadow AI problem into a critical challenge [1]. Employees will independently deploy these powerful agents for work tasks regardless of corporate approval, creating invisible, uncontrolled pipelines for sensitive data that can lead to data leaks, compliance violations, and intellectual property theft.

Source: ZDNet

The speed at which shadow AI agents spread makes this challenge urgent. Enterprises that believe they have just a few AI agents often discover hundreds or thousands once they look closely [2]. Employees build custom GPTs, developers spin up MCP servers locally, and business units integrate AI tools directly into workflows. Security teams are left unable to answer basic questions about which agents exist, who owns them, what permissions they have, or what data they access [2].

Context Drift and Inference Undermine Static Access Control

Traditional RBAC (Role-Based Access Control) models cannot keep pace with the dynamic reasoning exhibited by AI agents [3]. Unlike conventional software that follows deterministic logic, AI agents act on intent, making autonomous decisions about what data or actions they need to achieve outcomes. In one example, a retail company's AI sales assistant that was restricted from accessing personally identifiable information still correlated activity logs, support tickets, and purchase histories across multiple systems to identify specific users through inference [3]. It never broke access control; it reasoned its way around the restriction to reach information beyond its original scope.
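The inference risk can be made concrete with a toy join. This is a hypothetical sketch with invented data: none of the individual tables contains a blocked PII field, yet intersecting them over a shared session key singles out one user, which is exactly the correlation attack described above.

```python
# Hypothetical re-identification by inference across non-PII datasets.
# All records and the sessions_matching helper are illustrative.

activity_logs = [   # (session_id, city)
    ("s1", "Austin"), ("s2", "Austin"), ("s3", "Boston"),
]
support_tickets = [  # (session_id, device)
    ("s1", "iPhone"), ("s2", "Android"), ("s3", "iPhone"),
]
purchases = [        # (session_id, sku)
    ("s1", "SKU-42"), ("s2", "SKU-7"), ("s3", "SKU-42"),
]

def sessions_matching(city, device, sku):
    """Intersect three individually-ambiguous attributes."""
    by_city = {s for s, c in activity_logs if c == city}
    by_device = {s for s, d in support_tickets if d == device}
    by_sku = {s for s, k in purchases if k == sku}
    return by_city & by_device & by_sku

# Each attribute alone matches multiple sessions; combined they
# pin down a single one, with no PII field ever read.
print(sessions_matching("Austin", "iPhone", "SKU-42"))
```

No single query violates policy; the violation only exists in the intersection, which is why per-resource access checks miss it.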

Context drift represents another critical vulnerability: permissions effectively evolve through interaction as agents chain tasks, share outputs, and make assumptions based on one another's results [3]. A marketing analytics agent summarizing user behavior might feed its output to a financial forecasting agent, which uses it to predict regional revenue. Each agent sees only part of the process, but together they build a complete, unintended picture of customer financial data. Every step followed policy, yet the aggregate effect broke it.

AI Agent Identity Management Demands New Security Frameworks

AI agents are created quickly, modified frequently, and often abandoned silently, with access persisting after ownership disappears [2]. Quarterly access reviews and periodic certifications cannot keep pace with identities whose lifecycles are compressed into minutes, hours, or days. From a Zero Trust perspective, an identity that cannot be seen cannot be governed, monitored, or audited [2].

Cybersecurity professionals must adopt a least-privilege posture for AI agents and shift from governing access to governing intent [1][3]. Security frameworks for agentic systems should include intent binding, where every action carries the originating user's context throughout the execution chain; dynamic authorization that adapts to context at runtime; provenance tracking for verifiable records; human-in-the-loop oversight for high-risk actions; and contextual auditing that visualizes how queries evolve into actions across agents [3]. Effective discovery must be continuous and behavior-based, as quarterly scans prove insufficient when new agents can appear and disappear within minutes [2]. CISOs face mounting pressure to implement AI agent identity management as a new security control plane before these vulnerabilities enable widespread breaches across enterprise environments.

Source: BleepingComputer

TheOutpost.ai

© 2026 Triveous Technologies Private Limited