5 Sources
5 Sources
[1]
AI agents 2026's biggest insider threat: PANW security boss
interview AI agents represent the new insider threat to companies in 2026, according to Palo Alto Networks Chief Security Intel Officer Wendi Whitmore, and this poses several challenges to executives tasked with securing the expected surge in autonomous agents. "The CISO and security teams find themselves under a lot of pressure to deploy new technology as quickly as possible, and that creates this massive amount of pressure - and massive workload - that the teams are under to quickly go through procurement processes, security checks, and understand if the new AI applications are secure enough for the use cases that these organizations have," Whitmore told The Register. "And that's created this concept of the AI agent itself becoming the new insider threat," she added. According to Gartner's estimates, 40 percent of all enterprise applications will integrate with task-specific AI agents by the end of 2026, up from less than 5 percent in 2025. This surge presents a double-edged sword, Whitmore said in an interview and predictions report. On one hand, AI agents can help fill the ongoing cyber-skills gap that has plagued security teams for years, doing things like correcting buggy code, automating log scans and alert triage, and rapidly blocking security threats. When we look through the defender lens, a lot of what the agentic capabilities allow us to do is start thinking more strategically about how we defend our networks, versus always being caught in this reactive situation "When we look through the defender lens, a lot of what the agentic capabilities allow us to do is start thinking more strategically about how we defend our networks, versus always being caught in this reactive situation," Whitmore said. Whitmore told The Register she had recently spoken with one of Palo Alto Networks' internal security operations center (SOC) analysts who had built an AI-based program that indexed publicly known threats against the cybersecurity shop's own private threat-intel data, and analyzed the company's resilience, as well as which security issues were more likely to cause harm. This, she said, allows the firm to "focus our strategic policies over the next six months, the next yes, on what kinds of things do we need to be putting in place? What data sources do we need that we are not necessarily thinking of today?" The next step in using AI in the SOC involves categorizing alerts as actionable, auto-close, or auto-remediate. "We are in various stages of implementing these," Whitmore said. "When we look at agentic, we start with some of the more simple use cases first, and then progress as we become more confident in those from a response capability." However, these agents - depending on their configurations and permissions - may also have privileged access to sensitive data and systems. This makes agentic AI vulnerable - and a very attractive target to attack. One of the risks stems from the "superuser problem," Whitmore explained. This occurs when the autonomous agents are granted broad permissions, creating a "superuser" that can chain together access to sensitive applications and resources without security teams' knowledge or approval. "It becomes equally as important for us to make sure that we are only deploying the least amount of privileges needed to get a job done, just like we would do for humans," Whitmore said. "The second area is one we haven't seen in investigations yet," she continued. "But while we're on the predictions lens, I see this concept of a doppelganger." This involves using task-specific AI agents to approve transactions or review and sign off on contracts that would otherwise require C-suite level manual approvals. "We think about the people who are running the business, and they're oftentimes pulled in a million directions throughout the course of the day," Whitmore said. "So there's this concept of: We can make the CEO's job more efficient by creating these agents. But ultimately, as we give more power and authority and autonomy to these agents, we're going to then start getting into some real problems." For example: an agent could approve an unwanted wire transfer on behalf of the CEO. Or imagine a mergers and acquisitions scenario, with an attacker manipulating the models in such a way that forces an AI agent to act with malicious intent. By using a "single, well-crafted prompt injection or by exploiting a 'tool misuse' vulnerability," adversaries now "have an autonomous insider at their command, one that can silently execute trades, delete backups, or pivot to exfiltrate the entire customer database," according to Palo Alto Networks' 2026 predictions. This also illustrates the ongoing threat of prompt-injection attacks. This year, researchers have repeatedly shown prompt injection attacks to be a real problem, with no fix in sight. "It's probably going to get a lot worse before it gets better," Whitmore said, referring to prompt-injection. "Meaning, I just don't think we have these systems locked down enough." Some of this is intentional. "New systems, and the creators of these technologies, need people to be able to come up with creative attack use cases, and this often involves manipulating" the models, Whitmore said. "This means that we've got to have security baked in, and today we're ahead of our skis. The development and innovation within the AI models themselves is happening a lot faster than the incorporation of security, which is lagging behind." In 2025, Palo Alto Networks' Unit 42 incident response team saw attackers abuse AI in two ways. One: it allowed them to conduct traditional cyberattacks faster, and at scale. The second involved manipulating models and AI systems to conduct new types of attacks. "Historically, when an attacker gets initial access into an environment, they want to move laterally to a domain controller," Whitmore said. "They want to dump Active Directory credentials, they want to elevate privileges. We don't see that as much now. What we're seeing is them get access into an environment immediately, go straight to the internal LLM, and start querying the model for questions and answers, and then having it do all of the work on their behalf." Whitmore, along with just about every other cyber exec The Register has spoken with over the past couple of months, pointed to the "Anthropic attack" as an example. She's referring to the September digital break-ins at multiple high-profile companies and government organizations later documented by Anthropic. Chinese cyberspies used the company's Claude Code AI tool to automate intel-gathering attacks, and in some cases they succeeded. While Whitmore doesn't anticipate AI agents to carry out any fully autonomous attacks this year, she does expect AI to be a force multiplier for network intruders. "You're going to see these really small teams almost have the capability of big armies," she said. "They can now leverage AI capabilities to do so much more of the work that previously they would have had to have a much larger team to execute against." Whitmore likens the current AI boom to the cloud migration that happened two decades ago. "The biggest breaches that happened in cloud environments weren't because they were using the cloud, but because they were targeting insecure deployments of cloud configurations," she said. "We're really seeing a lot of identical indicators when it comes to AI adoption." For CISOs, this means establishing best practices when it comes to AI identities and provisioning agents and other AI-based systems with access controls that limit them to only data and applications that are needed to perform their specific tasks. "We need to provision them with least-possible access and have controls set up so that we can quickly detect if an agent does go rogue," Whitmore said. ®
[2]
3 defining trends for cybersecurity in 2026
Subscribe to the Daily newsletter.Fast Company's trending stories delivered to you every day In 2026, the mass personalization of cyberattacks will disrupt the classical kill chain model, which relies on observing and then reacting to stop threats. Attackers will leverage AI to understand business's unique vulnerabilities and craft personalized, novel software for each enterprise. This means every organization will see a massive rise in sophisticated, tailored attacks that are not known to the majority of their current security tools, pitting them in a race against time to spot the attack and respond before sustaining widespread damage. Adding AI to reactive tools will help, but will be woefully insufficient to counter this new onslaught. Instead, this shift will require security teams to develop wholly new approaches to preemptively mitigate and avoid these highly personalized threats. AI will also lead to the development of malware that can adapt and evade defensive measures, posing a significant threat to cybersecurity teams. These make it less likely that the novel attacks mentioned above will be detected before they can do large scale damage. AI-powered, autonomous malware will be capable of changing code and behavior to avoid detection, making it harder for security systems to identify and neutralize it. The emergence of autonomous malware will mark a new era in cyberthreats, where AI-driven attacks become increasingly sophisticated and resilient and put further stress on existing security solutions that rely on a detect and respond model to be effective. Compounding these threats, the problem of deepfakes will significantly worsen. The proliferation of deepfakes will increase misinformation and social engineering, leading to major breaches and higher success rates for scams and theft. As AI technology advances, the creation of realistic deepfakes will become easier and more widespread. This will result in a proliferation of fake videos and audio recordings that can be used to deceive individuals and organizations, undermining trust and security. This will coincide and often be combined with a new generation of AI-driven email, text and social media-based attacks. These attacks are tailored to individuals and nearly indistinguishable from legitimate communication, enabling highly personalized, real-time social-engineering campaigns. Relying on humans as a last line of defense has long been a tenuous approach. Against threats this advanced, that approach collapses. Modern security demands automated, adaptive defenses that remove the burden from individuals.
[3]
Securing Identity and Data in the Age of Autonomous Threats: By Stanley Epstein
How AI-driven attacks will converge on digital identity in 2026 -- and what security teams must do now. Introduction Cybersecurity is entering a decisive phase. In 2026, the most dangerous threats will no longer be slow, manual, or purely human-directed. They will be autonomous, adaptive, and capable of operating at machine speed. These threats will not merely exploit software vulnerabilities; they will exploit identity as the primary control plane and data as the ultimate target. As artificial intelligence becomes embedded in both enterprise systems and adversarial tooling, the intersection of identity, data, and AI-enabled attacks will redefine risk. Credentials will be harvested, privileges will be silently escalated, and sensitive data will be exfiltrated with minimal human involvement. Security teams must therefore rethink protection, detection, and response -- not as isolated controls, but as a continuously adaptive system. This article examines how autonomous threats are evolving, why identity and data are at the center of this shift, and what practical actions security teams can take today to mitigate exposure before machine-speed attacks become the norm. Defining the New Threat Landscape Autonomous threats refer to attack mechanisms that can initiate, adapt, and propagate without continuous human control. Unlike traditional malware or scripted attacks, these systems can observe environments, make decisions, and alter tactics in real time. Advances in generative AI, reinforcement learning, and automation frameworks are accelerating this shift, as documented by the World Economic Forum's work on AI-driven cyber risk. Identity, in this context, includes human users, service accounts, APIs, workloads, and machine identities. As cloud and SaaS adoption expands, identity has replaced the network perimeter as the primary security boundary, a point consistently emphasised by Zero Trust frameworks such as those from NIST (NIST SP 800-207). Data encompasses structured and unstructured information across cloud platforms, endpoints, AI models, and third-party environments. Data is both the target and the fuel: stolen data trains future attacks, while compromised data pipelines poison AI systems themselves. AI-enabled threats combine these elements by using machine learning to optimise credential abuse, privilege escalation, lateral movement, and data extraction. By 2026, attackers will increasingly deploy systems that can scan identity graphs, predict weak controls, and exploit misconfigurations faster than human defenders can respond. How AI-Assisted Attacks Target Identity and Data Identity attacks are no longer limited to phishing emails or brute-force login attempts. Autonomous systems can now generate context-aware phishing messages, mimic writing styles, and adapt in real time based on user responses, as observed in recent threat intelligence reports from Microsoft Security (Microsoft Digital Defense Report). Once credentials are compromised, AI-assisted attackers map privilege relationships across cloud platforms, SaaS tools, and directory services. This identity graph analysis enables them to identify non-obvious escalation paths, such as dormant service accounts with excessive permissions or OAuth tokens granted years earlier and never reviewed. Data attacks follow quickly. Instead of noisy bulk exfiltration, autonomous threats favour low-and-slow extraction, selectively accessing high-value datasets that evade traditional data loss prevention tools. In AI-enabled environments, attackers may also target training data or inference pipelines, corrupting outputs in ways that are difficult to detect, a risk increasingly discussed in research on AI supply-chain security. The key insight is that identity compromise is no longer an intermediate step. It is the primary objective because identity unlocks data at scale. High-Risk Pathways to Account Compromise and Data Exposure Most successful breaches still exploit known weaknesses, but autonomous threats amplify their impact. Excessive permissions, standing administrative access, and unmanaged machine identities create fertile ground for rapid compromise. Cloud environments often contain thousands of identities created by automation, DevOps pipelines, and third-party integrations, many of which persist long after their original purpose has ended. Token-based access presents another high-risk pathway. API keys, refresh tokens, and service credentials are frequently stored in code repositories or CI/CD tools. Autonomous attackers can scan repositories at scale, correlate exposed secrets with cloud assets, and immediately activate them, as highlighted in GitHub's security research (GitHub Security Lab). Data exposure increasingly results not from perimeter failure but from legitimate access used illegitimately. Once an attacker controls an identity with valid permissions, traditional controls often fail to distinguish malicious activity from normal usage. This is where machine-speed attacks gain their advantage: they operate within the rules of the system. Reducing Privilege Sprawl and Limiting Blast Radius Reducing exposure begins with accepting that compromise is inevitable, but catastrophe is optional. Privilege sprawl -- where identities accumulate permissions over time -- is one of the most exploitable conditions in modern environments. Continuous privilege review, just-in-time access, and time-bound credentials are therefore essential, as recommended by cloud security best practices from providers such as Google Cloud (BeyondCorp). Limiting blast radius means designing identity and data access so that no single compromise grants unrestricted reach. This requires segmenting access not just by role, but by context, sensitivity, and duration. Data access policies should align with identity assurance levels, ensuring that high-risk identities cannot access high-value data without additional verification. Importantly, machine identities must be treated with the same rigour as human users. Service accounts, bots, and workloads should have narrowly defined permissions and continuous monitoring, as they are prime targets for autonomous exploitation. Detecting and Responding to Machine-Speed Threats Traditional security operations struggle against autonomous attacks because alerts arrive faster than analysts can triage them. Detection must therefore shift from static rules to behavioural baselines. Identity behaviour analytics, enriched with AI, can identify subtle deviations in login patterns, access timing, and data usage that signal automated abuse. Response must also be automated. When a system detects anomalous behaviour from an identity, it should be able to revoke tokens, reduce privileges, or require step-up authentication within seconds, not hours. This approach aligns with emerging security orchestration and automated response (SOAR) models discussed by Gartner. Crucially, detection and response must be unified across identity and data layers. An alert about unusual data access is meaningless if it is not correlated with identity risk signals. Autonomous threats exploit gaps between tools; defenders must close them through integration and shared context. Conclusion By 2026, the convergence of identity, data, and AI-enabled threats will fundamentally alter the cybersecurity landscape. Autonomous attackers will exploit identity as code, permissions as pathways, and data as both target and weapon. Security teams that rely on static controls and manual processes will struggle to keep pace. The path forward is not about chasing every new attack technique but about strengthening the foundations: controlling identity sprawl, limiting blast radius, and enabling detection and response at machine speed. Organisations that treat identity and data security as inseparable will be best positioned to withstand autonomous threats -- because they will have designed systems that assume intelligence on both sides. MY MUSINGS I believe we are approaching a moment where cybersecurity stops being primarily about preventing access and becomes about managing behaviour under uncertainty. Autonomous threats force us to admit that identity will be compromised and data will be touched. The real question is how much, how fast, and how far. Are we investing enough in understanding our identity graphs, or are we still relying on static role definitions created years ago? Are we comfortable letting machines make defensive decisions at machine speed, even if that means occasional disruption? And perhaps most importantly, are we prepared for a world where AI attacks our AI-driven defences? I would be very interested to hear how others are thinking about these challenges. Where do you see the biggest gaps today -- in identity governance, data protection, or automated response? And what worries you most as we move toward 2026?
[4]
Zscaler CEO sounds alarm on AI agents
Zscaler (ZS) CEO Jay Chaudhry told CNBC that he feels AI agents have supercharged cyberattacks, and at a pace that's far quicker than most companies can respond. Enterprises have been sluggish in adapting to the AI agent threat, a bigger risk than the technology itself. For the most part, generative AI has been mostly about large language models (like GPT-3.5 or GPT-4), but now the focus shifts to autonomous systems that can reason, decide, and act on their accord. These "agentic" models don't wait around for prompts. In fact, they have the ability to execute multi-step tasks, and now that robust capability is in play on both ends of the cybersecurity battle. I've covered the tech space long enough to know that virtually every major innovation follows a similar pathway. At some point in a technology's cycle, the downside risks become impossible to ignore. Chaudhry argues that AI agents are entering this phase quietly and swifter than most bigwigs realize. For attackers, AI agents continue to lower the skill barrier and scale. For defenders, they continue to compress response times while exposing major weaknesses in patchwork security systems. The data throw weight behind his arguments. In fact, a recent CrowdStrike survey found that 76% of organizations struggle to keep up with the speed and complexities of AI-led attacks. Moreover, 48% of leaders in the security space rank AI-powered attacks as their top ransomware threat. For these reasons, Chaudhry feels customers see such developments taking shape in real time, even if Mr. Market hasn't fully priced them in yet. Photo by Bloomberg on Getty Images AI agents mark the shift from tools to autonomy AI agents are less like chatbots and more like junior assistants that can work without constant supervision. Traditionally, AI chatbots wait for instructions, answer questions, and then stop. On the flip side, an AI agent gets an objective and is then tasked with figuring out the next steps on its own. So an agent can plan, act, and check its work. It can also browse through the web, write code, and pull data every time something needs changing. That shift unlocks massive long-term productivity gains, which is why the tech punditry believes AI agents can become a massive market. Tech experts' predictions for AI agents market * According to MarketsandMarkets, the AI agents market is forecasted to grow from $7.84 billion in 2025 to $52.62 billion by 2030 (46.3% CAGR). * According to Mordor Intelligence, the agentic AI market is set to grow from $6.96 billion in 2025 to $42.56 billion by 2030 (43.61% CAGR). * According to Grand View Research, AI agents could potentially jump to $182.97 billion by 2033, up substantially from $7.63 billion in 2025 (49.6% CAGR). AI agents are forcing a security reset In the CNBC interview mentioned earlier, Chaudhry zeroed in on AI agents, particularly on their speed and scale, as well as the growing gap between attackers and defenders. Chaudhry feels AI agents are catalyzing the "franchising" of cybercrime. Tasks that once required skilled hackers can now be easily automated and executed in seconds. That worrying shift, he warns, constricts response times to the point where traditional approaches might simply break down. That's where Zscaler fits into the overall picture. With AI agents proliferating and threats intensifying, Chaudhry believes that cybersecurity has become more critical than ever. He argues that only a unified, cloud-based platform can protect users, applications, and data in real time. That approach is mission-critical at a time when breach volumes are growing at a staggering pace. A recent Verizon Data Breach Investigations Report analyzed more than 22,000 incidents and 12,195 confirmed breaches in 2025, underscoring the razor-thin margins enterprises now have. Major cybersecurity attacks in 2025 * A 16 billion-password "mega leak" turned out to be the largest credential exposure ever, essentially years of stolen logins compiled into a single cache, reported Gulf News. * The Salesforce-Drift supply chain breach involved hackers compromising a major third-party SaaS application by stealing authorization tokens, exposing nearly 1.5 billion CRM records, according to UpGuard. * A single major ransomware attack on UnitedHealth's Change Healthcare disrupted U.S. health care systems, Reuters said, while exposing data linked to a whopping 192.7 million people. TheStreet This story was originally published January 4, 2026 at 10:47 AM.
[5]
Why 2026 demands a new paradigm for enterprise security readiness
Security experts warn that the use of generative AI (GenAI) to launch faster and stealthier cyberattacks will become the norm in 2026. As a result, cyberattacks that previously took weeks to coordinate will now be executed in a matter of hours. Also, the growing integration of GenAI and agentic AI with enterprise applications will trigger more prompt injection attacks, while application programming interface (API) attacks will surpass web-based attacks. Last year, security researchers found several new malware prototypes crafted with the help of GenAI. The most worrying of them was PromptLock, which used hardcoded prompts to exploit the stochasticity (inherent randomness) of an open-source large language model (LLM) and generate unique payloads that signature-based tools could not detect. Simultaneously, threat actors such as FunkSec were found to be using dark GenAI models such as GhostGPT and HackerGPT to automate code obfuscation and create more sophisticated versions of existing malware. "AI is fundamentally changing the economics of cyberattacks. Adversaries are no longer scaling through manpower, but rather through automation," said Reuben Koh, Director of Security Technology and Strategy at Akamai. Attila Torok, Chief Information Security Officer at GoTo, points out that in 2026 enterprises will face a security landscape that is "at once familiar and entirely new." He adds that ransomware and operational downtime will remain persistent threats, but the emergence of fake AI platforms and autonomous malicious agents adds a new layer of social engineering. According to Gartner, by 2027 more than 40% of AI-related data breaches worldwide will involve malicious use of GenAI. Rise in API attacks API-based attacks will surpass web-based attacks as adoption of API based ecosystems is expected to grow across critical sectors such as banking, retail and public services, warned Akamai. In 2025, more than 80% of organizations in the APAC region faced at least one API security incident and nearly 66% of the firms lack visibility into their API inventory, claims Akamai. This API blind spot, caused by shadow or deprecated APIs, combined with AI-powered automation makes it easier for attackers to exploit vulnerable APIs at scale. In API attacks, threat actors look for vulnerabilities to manipulate the intended function of APIs and gain unauthorized access to data passing through them. According to Akamai's State of Apps and API Security 2025 report, API security incidents triggered by authentication and authorization flaws increased by 32%. The API landscape has expanded significantly in the last few years due to growing use of cloud, AI, and microservices. Cloudflare claims that more than 50% of all Internet traffic on their network is API-related. Gartner forecasts that in 2026 more than 30% of the growing demand for APIs will come from AI and applications using LLMs. Any oversight on part of AI companies to secure APIs and API keys can put their customers at risk. For instance, in 2025, Chinese AI startup DeepSeek left two ClickHouse databases exposed due to a misconfiguration that made storage endpoints accessible to anyone on the Internet. This left millions of chat logs, API keys and metadata exposed. Proliferation of ransomware, attacks on critical infra According to Akamai, attacks on critical sectors such as finance, healthcare, and retail will intensify further as ransomware will become fully commoditized in 2026. Ransomware-as-a-service (RaaS) and AI-powered vibe hacking will lead to proliferation of ransomware attacks. Researchers at Check Point Software have found that ransomware groups like FunkSec are offering RaaS to small-time attackers who usually do not have the resources or skills to launch a sophisticated ransomware attack. Until now, ransomware attacks are aimed at large organizations with the objective of encrypting and exfiltrating data, followed by a demand for a multi-million-dollar ransom. RaaS is making it more widespread, and the target now includes small businesses as well as individuals with ransom demand of a few thousand dollars. Further, experts warn that double extortion (encryption and theft) is now expected to expand into multi-stage extortion involving threats to CXOs, supply chain partners and alerting regulators. "Ransomware will also get more personal. It will not just lock systems but try to damage reputation and trust. This will force organisations to secure data at every point, from devices to cloud apps. We will also see more risks from trusted partners and insiders, which means protection can't stop at the network. Security must follow the data wherever it goes," said Srinivas Shekar, CEO and Co-Founder of Pantherun Technologies. Security researchers at Kaspersky have warned that cyberattacks on critical infrastructure providers in India will increase in 2026 along with state-sponsored espionage campaigns. "Geopolitics will remain the key driver for advanced persistent threats (APT), more destructive attacks like defacement, data leak, ransomware with politicized messaging, DDoS, and possibly more cyber operations tied to diplomatic incidents," said Saurabh Sharma, Lead Security Researcher for GReAT at Kaspersky. Prompt injection and risks from AI agents According to a Gartner report, 62% of organizations have faced a deepfake attack using social engineering, while 32% have noticed prompt-injection attacks on GenAI applications in the last 12 months. Most LLms are vulnerable to prompt injection attacks, in which attackers manipulate them to bypass safeguards and share sensitive information with cyber attackers. Gartner found that 29% have faced at least one attack on AI applications in 2025. Google's Threat Intelligence team has warned that enterprise AI systems will see an increase in targeted prompt infection attacks. They added that use of GenAI for social engineering attacks will also accelerate this year. Use of AI-driven voice cloning will lead to more hyper realistic impersonations of CXOs. Security experts also warned that the growing adoption of AI agents will widen the attack surface further and require firms to effectively map their AI ecosystem. They can also be manipulated using prompt injection to leak company data. Unlike GenAI applications, AI agents have autonomy to act on their own. However, risk from them can be minimized by treating AI agents like any other worker and restricting their access to sensitive information using identify and access management (IAM) solutions. Shadow AI is another concern that firms will have to increasingly contend with as new tools with new features will continue to entice workers. According to the IBM Cost of Data Breach report, security incidents involving shadow AI accounted for 20% of breaches in 2025. How enterprises should pivot in 2026 Security experts are in agreement that firms using AI and automation are better positioned to wade in the AI-powered threat landscape. IBM's Cost of Data Breach report also shows that security teams using AI and automation managed to reduce their breach times by 80 days while also lowering their average breach costs by $1.9 million in comparison to organizations that didn't use them. They also found that the average time taken by firms to detect and contain a breach fell to 241 days from 287-day peak in 2021. "In 2026, security teams need to operate at the same velocity as the attackers by detecting, analyzing, and containing threats in real time. This starts with modernizing API governance, investing in automated threat containment, and strengthening resilience across supply chains," said Koh, adding that organizations that make this shift early will be able to protect customers and avoid business disruptions. Experts emphasize that true cyber resilience will come from strategy and culture rather than just tools. Rohit Aradhya, VP and MD, App Security Engineering at Barracuda Networks, argues that when AI becomes part of how you detect, respond and learn, it transforms operations and ceases to be just an add-on. "It becomes a force multiplier and helps to address sophisticated AI driven ransomware attacks." However, the ultimate defense lies in a "security-aware culture of learning, agility, adaptability and purpose driven talent." Sunil Sharma, MD and VP of Sales (India & SAARC) at Sophos, noted that recent cyber incidents serve as a critical reminder for enterprises to move from reactive to proactive stance. Lasting resilience can be achieved through "layered threat detection, continuous monitoring, and robust incident response, supported by risk-aware governance, regular audits, and a culture that elevates cybersecurity to a boardroom priority," added Sharma.
Share
Share
Copy Link
Security leaders from Palo Alto Networks and Zscaler are sounding alarms about AI agents becoming the new insider threat in 2026. With 40% of enterprise applications expected to integrate task-specific AI agents by year's end, these autonomous systems could gain privileged access to sensitive data while remaining vulnerable to prompt injection attacks and exploitation at machine speed.
AI agents are poised to become the most significant insider threat facing enterprises in 2026, according to Wendi Whitmore, Chief Security Intelligence Officer at Palo Alto Networks. The surge in autonomous AI systems is creating intense pressure on security teams racing to evaluate and deploy these technologies while ensuring adequate protection measures are in place
1
. Gartner estimates that 40 percent of all enterprise applications will integrate with task-specific AI agents by the end of 2026, up from less than 5 percent in 20251
. This explosive growth presents both opportunities and risks that security professionals must navigate carefully.
Source: The Register
The challenge stems from AI agents receiving privileged access to sensitive data and systems based on their configurations and permissions. Jay Chaudhry, CEO of Zscaler, told CNBC that AI agents have supercharged cyberattacks at a pace far quicker than most companies can respond, with enterprises proving sluggish in adapting to this emerging threat
4
. A recent CrowdStrike survey found that 76% of organizations struggle to keep up with the speed and complexities of AI-led attacks, while 48% of security leaders rank AI-powered attacks as their top ransomware threat4
.One of the most pressing cybersecurity threats involves what Whitmore describes as the "superuser problem"
1
. This occurs when autonomous AI systems are granted broad permissions, creating a superuser that can chain together access to sensitive applications and resources without security teams' knowledge or approval. The principle of least privilege—limiting access to only what's needed to complete a task—becomes equally critical for AI agents as it is for human users.
Source: Fast Company
The risk extends to what Whitmore calls the "doppelganger" concept, where task-specific AI agents approve transactions or review contracts that would otherwise require C-suite level manual approvals
1
. An attacker could manipulate these autonomous systems to approve unwanted wire transfers or force an AI agent to act with malicious intent during mergers and acquisitions scenarios. By using a single, well-crafted prompt injection or exploiting a tool misuse vulnerability, adversaries now have an autonomous insider at their command, one that can silently execute trades, delete backups, or exfiltrate entire customer databases1
.Prompt injection attacks represent an ongoing threat with no fix in sight, and researchers have repeatedly demonstrated their effectiveness throughout 2025. "It's probably going to get a lot worse before it gets better," Whitmore warned
1
. Security experts warn that the use of generative AI to launch faster and stealthier cyberattacks will become the norm in 2026, with attacks that previously took weeks to coordinate now executed in hours5
.The emergence of autonomous malware marks a new era in AI-driven threats. These sophisticated programs can adapt and evade defensive measures by changing code and behavior to avoid detection, making it harder for security systems to identify and neutralize them
2
. In 2025, security researchers discovered PromptLock, a malware prototype that used hardcoded prompts to exploit the inherent randomness of open-source large language models and generate unique payloads that signature-based tools could not detect5
.The mass personalization of cyberattacks will disrupt the classical kill chain model that relies on observing and reacting to stop threats. Attackers will leverage AI to understand each business's unique vulnerabilities and craft personalized, novel software for every enterprise
2
. This means organizations will see a massive rise in sophisticated, tailored attacks that are unknown to the majority of their current security tools, creating a race against time to spot and respond before sustaining widespread damage.Chaudhry argues that AI agents are catalyzing the "franchising" of cybercrime, where tasks that once required skilled hackers can now be automated and executed in seconds
4
. This worrying shift constricts response times to the point where traditional approaches might simply break down. Machine speed attacks now operate faster than human defenders can respond, forcing security teams to develop wholly new approaches to preemptively mitigate these highly personalized threats3
.
Source: CXOToday
Digital identity has replaced the network perimeter as the primary security boundary, making identity compromise no longer an intermediate step but the primary objective because it unlocks data at scale
3
. Autonomous systems can now generate context-aware phishing messages, mimic writing styles, and adapt in real time based on user responses. Once credentials are compromised, AI-assisted attackers map privilege relationships across cloud platforms, SaaS tools, and directory services to identify non-obvious escalation paths.API attacks will surpass web-based attacks as adoption of API-based ecosystems grows across critical sectors such as banking, retail, and public services. In 2025, more than 80% of organizations in the APAC region faced at least one API security incident, and nearly 66% of firms lack visibility into their API inventory
5
. Gartner forecasts that in 2026, more than 30% of the growing demand for APIs will come from AI and applications using large language models5
.Related Stories
The proliferation of deepfakes will significantly worsen in 2026, increasing misinformation and social engineering that leads to major breaches and higher success rates for scams and theft
2
. As AI technology advances, the creation of realistic deepfakes becomes easier and more widespread, resulting in fake videos and audio recordings that deceive individuals and organizations. This coincides with a new generation of AI-driven email, text, and social media-based attacks tailored to individuals and nearly indistinguishable from legitimate communication.Relying on humans as a last line of defense has long been a tenuous approach, but against threats this advanced, that approach collapses entirely
2
. Modern security demands automated, adaptive defenses that remove the burden from individuals. According to Gartner, by 2027 more than 40% of AI-related data breaches worldwide will involve malicious use of generative AI5
.Despite these threats, AI agents can help fill the ongoing cyber-skills gap that has plagued security teams for years by correcting buggy code, automating log scans and alert triage, and rapidly blocking security threats
1
. When viewed through the defender lens, agentic capabilities allow security teams to think more strategically about how they defend their networks versus always being caught in reactive situations.Whitmore described how one of Palo Alto Networks' internal security operations center analysts built an AI-based program that indexed publicly known threats against the company's private threat-intel data and analyzed resilience and which security issues were more likely to cause harm
1
. The next step involves categorizing alerts as actionable, auto-close, or auto-remediate, progressing from simple use cases to more complex implementations as confidence in response capabilities grows.Market projections underscore the stakes involved. According to MarketsandMarkets, the AI agents market is forecasted to grow from $7.84 billion in 2025 to $52.62 billion by 2030, representing a 46.3% compound annual growth rate
4
. Security teams must adopt Zero Trust frameworks and ensure that only the least amount of privileges needed to complete a job are deployed for autonomous systems, just as they would for human users. As Ransomware-as-a-Service proliferates and attacks on critical infrastructure intensify, organizations need unified, cloud-based platforms that can protect users, applications, and data in real time against these evolving AI cyberattacks.Summarized by
Navi
[1]
[2]
[3]
[4]
15 Oct 2025•Technology

04 Aug 2025•Technology

11 Nov 2025•Technology

1
Policy and Regulation

2
Business and Economy

3
Technology
