2 Sources
[1]
Microsoft says your AI agent can become a double agent
New security research flags misused permissions and poisoned memory, pushing companies to lock down agent access.

Microsoft is warning that the rush to deploy workplace AI agents can create a new kind of insider threat: the AI double agent. In its Cyber Pulse report, it says attackers can twist an assistant's access or feed it untrusted input, then use that reach to cause damage inside an organization.

The problem isn't that AI is new. It's that control is uneven. Microsoft says agents are spreading across industries, while some deployments slip past IT review and security teams lose sight of what is running and what it can touch.

That blind spot gets riskier when an agent can remember and act. Microsoft points to a recent fraudulent campaign its Defender team investigated that used memory poisoning to tamper with an AI assistant's stored context and steer future outputs.

Shadow agents widen the blast radius

Microsoft ties the double agent risk to speed. When rollouts outpace security and compliance, shadow AI shows up fast, and attackers get more chances to hijack a tool that already has legitimate access. That's the nightmare scenario.

The report frames it as an access problem as much as an AI problem. Give an agent broad privileges, and a single tricked workflow can reach data and systems it was never meant to touch. Microsoft pushes observability and centralized management so security teams can see every agent tied into work, including tools that appear outside approved channels.

The sprawl is already happening. Microsoft cites survey work finding 29% of employees have used unapproved AI agents for work tasks, the kind of quiet expansion that makes tampering harder to spot early.

It's not just bad prompts

This isn't limited to someone typing the wrong request. Microsoft highlights memory poisoning as a persistent attack, one that can plant changes that influence later responses and erode trust over time. Its AI Red Team also saw agents get tricked by deceptive interface elements, including harmful instructions hidden in everyday content, plus task framing that subtly redirects reasoning. It can look normal. That's the point.

What to do next

Microsoft's advice is to treat AI agents like a new class of digital identity, not a simple add-on. The report recommends a Zero Trust posture for agents: verify identity, keep permissions tight, and monitor behavior continuously so unusual actions stand out. Centralized management matters for the same reason. If security teams can inventory agents, understand what they can reach, and enforce consistent controls, the double agent problem gets smaller.

Before you deploy more agents, map what each one can access, apply least privilege, and set monitoring that can flag instruction tampering. If you can't answer those basics yet, slow down and fix that first.
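To make that closing advice concrete, here is a minimal sketch of a deny-by-default permission check in Python: each agent declares the scopes it may use, and any action outside that manifest is refused. The AgentManifest and Action names, the scope strings, and the authorize helper are hypothetical illustrations, not part of Microsoft's report or any specific product.

```python
# Minimal sketch of a least-privilege check for an AI agent's tool calls.
# Everything here (AgentManifest, Action, the scope names) is a hypothetical
# illustration, not an API from Microsoft's report or any real product.
from dataclasses import dataclass, field


@dataclass
class AgentManifest:
    """Declares, up front, the only resources an agent may touch."""
    agent_id: str
    allowed_scopes: set[str] = field(default_factory=set)  # e.g. {"crm:read"}


@dataclass
class Action:
    agent_id: str
    scope: str          # permission this action needs, e.g. "finance:write"
    description: str


def authorize(manifest: AgentManifest, action: Action) -> bool:
    """Deny by default: the action must match the declared manifest."""
    if action.agent_id != manifest.agent_id:
        return False
    return action.scope in manifest.allowed_scopes


if __name__ == "__main__":
    manifest = AgentManifest("expense-helper", {"expenses:read", "expenses:write"})
    ok = authorize(manifest, Action("expense-helper", "expenses:read", "list reports"))
    bad = authorize(manifest, Action("expense-helper", "hr:read", "fetch salaries"))
    print(ok, bad)  # True False -- the out-of-scope request is refused and can be logged
```

The point of the deny-by-default shape is that a tricked workflow can only reach what was explicitly granted, which is the "blast radius" idea in the report.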
[2]
Microsoft Says AI Tools With Too Many Privileges Can Become 'Double Agents'
Microsoft has highlighted several risks with artificial intelligence (AI) agents in its latest security report. The most interesting insight concerns "AI double agents": agents with excessive privileges but not enough safeguards. This makes them vulnerable to prompt engineering attacks by bad actors and turns them into "double agents." With these tools becoming increasingly popular in the enterprise space, the cybersecurity report highlights the security gaps that businesses must address to protect their sensitive data.

Microsoft Highlights Risks With AI Double Agents

The Redmond-based tech giant published findings from its first-party telemetry and research in the latest Cyber Pulse report. The report focuses on the rise in adoption of AI agents and the security risks that emerge from it. "Recent Microsoft data indicates that these human-agent teams are growing and becoming widely adopted globally," the company said in a blog post. The report also claims that more than 80 percent of Fortune 500 companies are currently deploying AI agents built with low-code or no-code tools. Microsoft says this is a concerning trend, as agents built using vibe coding will lack the fundamental security protocols required for an enterprise environment.

In the report, the tech giant says AI agents require protection through greater observability, governance, and security measures based on Zero Trust principles. Zero Trust is a security framework built on the principle of "never trust, always verify," which assumes no user or device, inside or outside the network, is trustworthy by default.

One notable trend the report mentions is the concept of AI double agents. Microsoft says the AI agents being developed by companies today have excessive privileges, which poses a security threat. "Bad actors might exploit agents' access and privileges, turning them into unintended 'double agents.' Like human employees, an agent with too much access -- or the wrong instructions -- can become a vulnerability," the post added.

Explaining the risk, the tech giant said researchers have documented how agents can be misled by deceptive interface elements, such as following harmful instructions added to regular content. Another risk discovered by researchers is redirecting agents via manipulated task framing.

Citing a multinational survey of more than 1,700 data security professionals commissioned by Microsoft from Hypothesis Groups, the report claimed that 29 percent of employees are using AI agents for work tasks that are not sanctioned by IT teams. "This is the heart of a cyber risk dilemma. AI agents are bringing new opportunities to the workplace and are becoming woven into internal operations. But an agent's risky behaviour can amplify threats from within and create new failure modes for organisations unprepared to manage them," the report said.
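The "never trust, always verify" framing above is the familiar Zero Trust idea applied to agents. As a rough sketch of what per-request verification could look like, the snippet below re-checks a short-lived signed token on every action instead of trusting a long session; the token format, the SECRET constant, and the helper names are assumptions for illustration, not anything from Microsoft's report or a real identity service.

```python
# Sketch of "never trust, always verify" for an agent: every request re-verifies
# a short-lived signed credential rather than trusting an established session.
# The token scheme here is a toy stand-in for a real identity provider.
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"  # stand-in for keys held by an identity service


def issue_token(agent_id: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived token binding an agent identity to an expiry time."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{agent_id}:{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"


def verify(token: str) -> bool:
    """Re-check the signature and expiry on every single request."""
    try:
        agent_id, expiry, sig = token.rsplit(":", 2)
        payload = f"{agent_id}:{expiry}"
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and int(expiry) > time.time()
    except ValueError:
        return False


if __name__ == "__main__":
    token = issue_token("expense-helper")
    print(verify(token))                                          # True while fresh and untampered
    print(verify(token.replace("expense-helper", "payroll-agent")))  # False: identity was altered
```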
Microsoft's latest Cyber Pulse report reveals a troubling security gap: workplace AI agents are being deployed faster than security teams can manage them. With excessive privileges and weak safeguards, these tools can be hijacked through prompt engineering attacks and memory poisoning, turning them into AI insider threats that compromise sensitive data.
Microsoft is sounding the alarm on a security vulnerability emerging from the rapid adoption of workplace AI agents. In its latest Cyber Pulse report, the company warns that attackers can exploit AI agents with excessive privileges, transforming them into what it calls AI double agents that pose serious risks to organizational security [1][2]. The issue centers on control and visibility. As AI agents spread across industries, many deployments bypass IT review entirely, leaving security teams unable to track what's running or what data these tools can access. More than 80 percent of Fortune 500 companies are currently deploying AI agents built with low-code/no-code tools, a trend Microsoft finds concerning because these agents often lack the fundamental security protocols required for enterprise environments [2].
The blind spot grows more dangerous when employees operate outside approved channels. A multinational survey of more than 1,700 data security professionals, commissioned by Microsoft from Hypothesis Groups, found that 29 percent of employees have used unapproved AI agents for work tasks [1][2]. This shadow AI expansion makes tampering harder to detect early and gives attackers more opportunities to hijack tools that already have legitimate access. Microsoft ties the double agent risk directly to speed, noting that when rollouts outpace security and compliance reviews, the blast radius widens significantly.

The threat extends beyond simple user error. Microsoft's Defender team recently investigated a fraudulent campaign that used memory poisoning to tamper with an AI assistant's stored context and steer future outputs [1]. This persistent attack method plants changes that influence later responses and erode trust over time. The company's AI Red Team also documented how agents can be misled by deceptive interface elements, including harmful instructions hidden in everyday content, plus manipulated task framing that subtly redirects reasoning [1][2]. These prompt engineering attacks can look entirely normal, which is precisely what makes them effective.
Microsoft frames the solution around treating AI agents as a new class of digital identity rather than simple add-ons. The report recommends applying a Zero Trust security model to agents, which operates on the principle of "never trust, always verify" and assumes no user or device is trustworthy by default [2]. This means verifying identity, keeping permissions tight through least privilege access, and monitoring behavior continuously so unusual actions stand out [1]. The company stresses that AI tools with excessive privileges create vulnerabilities where a single tricked workflow can reach data and systems it was never meant to touch.

Centralized management and observability matter because security teams need to inventory every agent tied into work systems, including tools that appear outside approved channels [1]. Microsoft emphasizes that increasing governance helps organizations understand what each agent can reach and enforce consistent controls across deployments [2]. Before deploying more agents, companies should map access permissions, apply least privilege principles, and set monitoring that can flag instruction tampering. The report's core message is clear: if organizations can't answer these basics yet, they need to slow down and address security gaps first.