8 Sources
8 Sources
[1]
Identity-First AI Security: Why CISOs Must Add Intent to the Equation
Author: Itamar Apelblat, CEO and Co-Founder, Token Security Not long ago, AI deployments inside the enterprise meant copilots drafting emails or summarizing documents. Today, AI agents are provisioning infrastructure, answering customer support tickets, triaging alerts, approving transactions, writing production code, and so much more. They are no longer passive assistants. They are operators within the enterprise. For CISOs, this shift creates a familiar but amplified problem: access. Every AI agent authenticates to systems and services. It uses API keys, OAuth tokens, cloud roles, or service accounts. It reads data, writes configurations, and calls downstream tools. In other words, it behaves exactly like an identity, because it is one. Yet in many organizations, AI agents are not governed as first-class identities. They inherit the privileges of their creators. They operate under over-scoped service accounts. They are granted broad access just to make sure things work. Once deployed, they often evolve faster than the controls around them. This is the emerging blind spot in AI security. The first step toward closing it is what we call identity-first security for AI: recognizing that every autonomous agent must be governed, audited, and attested just like a human user or machine workload. That means unique identities, defined roles, clear ownership, lifecycle management, access control, and auditability. But here's the hard truth: identity alone is no longer sufficient. Traditional identity and access management (IAM) answers a straightforward question: Who is requesting access? In a human-driven world, that was often enough. Users had roles and job functions. Services had defined scopes. Workflows were relatively predictable. They are dynamic by design. They interpret inputs, plan actions, and call tools based on context. An AI agent that begins with the mission to generate a quarterly report might, if prompted or misdirected, attempt to access systems unrelated to reporting. An infrastructure agent designed to remediate vulnerabilities might pivot to modifying configurations in ways that exceed its original scope. When that happens, identity-based controls don't necessarily stop it from happening. Traditional IAM assumes determinism. A role is granted because a user or service performs a defined function. The scope of action is predictable. AI agents break that assumption. Their objective may be fixed, but the path they take to achieve it is fluid. They reason, chain tools together, and explore alternative actions. Static roles were never designed for actors that decide how to act in real time. If the agent's role allows the action, access is granted, even if the action no longer aligns with the reason the agent was deployed in the first place. This is where intent-based permissioning becomes essential. If identity answers who, intent answers why. Intent-based permissions evaluate whether an agent's declared mission and runtime context justify activating its privileges at that moment. Access is no longer just a static mapping between identity and role. It becomes conditional on purpose. Consider an AI agent responsible for deploying code. In a traditional model, it may have standing permissions to modify infrastructure. In an intent-aware model, those privileges activate only when the deployment is tied to an approved pipeline event and change request. If the same agent attempts to modify production systems outside that context, the privileges do not activate that access. The identity hasn't changed, but the intent, and therefore the authorization, has. This combination addresses two of the most common failure modes we're seeing in AI deployments. First, privilege inheritance. Developers often test agents using their own elevated credentials. Those privileges persist in production environments, creating unnecessary exposure. Treating agents as distinct identities can help eliminate this bleed-through. Second, mission drift. AI agents can pivot mid-run based on prompts, integrations, or adversarial input. Intent-based controls prevent that pivot from turning into unauthorized access. AI agents interact with thousands of APIs, SaaS platforms, and cloud resources. Trying to manage risk by enumerating every permissible action quickly becomes unmanageable. Policy sprawl increases complexity, and complexity erodes assurance. An intent-based model simplifies oversight. Governance shifts from managing thousands of discrete action rules to managing defined identity profiles and approved intent boundaries. Policy reviews focus on whether an agent's mission is appropriate, not whether every individual API call is accounted for in isolation. Audit trails become more meaningful as well. When an incident occurs, security teams can determine not only which agent performed an action, but what intent profile was active and whether the action aligned with its approved mission. That level of traceability is increasingly critical for regulatory scrutiny and board-level accountability. The broader issue is this: AI agents are accelerating faster than traditional access control models were designed to handle. They operate at machine speed, adapt to context, and orchestrate across systems in ways that blur the lines between application, user, and automation. CISOs cannot afford to treat them as just another workload. The shift to agentic AI systems requires a shift in security thinking. Every AI agent must be treated as an accountable identity. And that identity must be constrained not only by static roles, but by declared purpose and operational context. The path forward is clear. Inventory your AI agents. Assign them unique, lifecycle-managed identities. Define and document their approved missions. And enforce controls that activate privileges only when identity, intent, and context align. Autonomy without governance is a massive risk. Identity without intent is incomplete. In the agentic era, understanding who is acting is necessary. Ensuring they are acting for the right reason is what makes agentic AI secure.
[2]
Securing AI infrastructure is critical - here's how to do it
I believe that 2026 will be a defining year for cybersecurity. Sometime during the year, AI-powered threats will have the ability to adapt in real time. This will force organizations to defend against them - and they will need to do it fast. Some of the AI-enabled cyberattacks will be against AI systems, which are already becoming deeply embedded across business operations - from decision-making and automation to customer engagement and critical services. Modern AI infrastructure spans models, training frameworks, data pipelines, RAG architectures, APIs, open-source libraries, development tools, and deployment environments. While large-scale breaches of AI infrastructure have not yet become mainstream, the threat landscape is evolving fast - and the potential impact is severe. The question is no longer if AI infrastructure will be targeted, but how prepared organizations are when it is. What do we mean by AI infrastructure? Before looking at threats, it's important to understand that AI infrastructure comprises these components: * Foundation and fine-tuned models * Training and inference frameworks * Data sources, embeddings, and RAG pipelines * APIs, interfaces, and orchestration layers * Open-source libraries and third-party dependencies * Development, testing, and deployment environments Each of these components represents a potential attack surface - and none exist in isolation. Immediate threat scenarios facing AI systems While AI breaches remain relatively rare today, several realistic and increasingly observed threat scenarios are emerging: * Data poisoning at scale: Attackers manipulate pre-training, fine-tuning, or embedding data to introduce hidden vulnerabilities, biases, or backdoors. These issues may remain dormant until triggered, compromising model integrity and trustworthiness. * Model supply chain compromise: Backdoored foundation models or dependencies are distributed through legitimate channels, exposing organizations that unknowingly integrate them into production systems. * Adversarial attacks: Real-time manipulation of model inputs causes misclassification or incorrect outputs - a serious risk when AI is used in security, finance, or safety-critical environments. When things go wrong: Catastrophic AI threat scenarios The real concern lies in how these threats scale. Here are a few serious scenarios: * Critical infrastructure manipulation: Compromised AI systems controlling power grids, transportation networks, or healthcare environments could make unsafe or malicious decisions. * Widespread misinformation: Poisoned models deployed across multiple organizations could be used to generate consistent, large-scale misinformation, eroding trust and amplifying harm. * Intellectual property theft: Model extraction attacks may expose proprietary algorithms, training data, or sensitive business logic, resulting in long-term competitive and financial damage. These scenarios underline one key truth: AI infrastructure must be treated as mission critical. Why traditional security isn't enough AI environments introduce new risks that traditional security models weren't designed to handle. Increases in adversarial attacks, supply chain compromises, and AI-specific zero-day exploits mean reactive security approaches are no longer sufficient. Securing AI infrastructure requires a defense-in-depth mindset, applied across every layer of the AI lifecycle. The key is treating AI infrastructure as a critical, interconnected system requiring defense-in-depth strategies. With increases in adversarial attacks corrupting AI training data, supply chain attacks targeting AI model updates, and zero-day exploits designed to compromise AI security systems making proactive security measures essential rather than optional. Here's what needs to be done.. Model security: Data pipeline security: RAG pipeline security: Open source & supply chain: Infrastructure security: Operational security: Security must keep pace with AI AI is becoming a powerful force multiplier for organizations and threat actors alike. As the technology matures, so too will the methods used to exploit it. Treating AI infrastructure as a critical, interconnected system - and securing it accordingly - is no longer optional. The organizations that act early will be best positioned to benefit from AI without exposing themselves to unnecessary and potentially catastrophic risk. We've featured the best endpoint protection software. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
[3]
Nearly two-thirds of companies have lost track of their data just as they're letting AI in through the front door to wander around | Fortune
The extensive research, conducted by S&P Global's 451 Research and commissioned by Thales -- a global technology leader in the cyber -- highlights a troubling disconnect between rapid AI adoption and foundational data control. Across vital markets, including the automotive, energy, finance, and retail industries, businesses say the rapid pace of AI-driven transformation has become their greatest security challenge. As enterprises actively embed AI into their development pipelines, analytics, and customer service workflows, these automated systems are being granted broad access to enterprise data, frequently with fewer controls than those applied to human workers. Consequently, 61% of organizations now explicitly cite AI as their top data security risk. The report comes after a week when the second viral essay about the dire consequences of AI that is a bit too autonomous has rattled markets. Citrini Research's essay on a 2028 hellscape of "ghost GDP" in which radical deflation from AI results in 10% unemployment and a 30%-plus stock correction followed hot on the heels of AI executive Matt Shumer's prediction that "something big" was happening in AI and the workforce wasn't prepared. Although economists and even industry executives cautioned that this was excessive, software stocks have largely continued their selloff. The core of the problem identified in the Thales report aligns with these fears at least in part. It's not necessarily about the threat of rogue, malicious AI born from external actors, but rather the unprecedented level of internal access being granted to these systems as they transition from mere external tools to highly trusted corporate insiders. Enterprises are eagerly embedding AI into their daily workflows, but as they do so, these automated systems are being granted broad access to vast troves of enterprise data, frequently operating with fewer security controls than those traditionally applied to human employees in a standard corporate environment. Sebastien Cano, Senior Vice President of Cybersecurity Products at Thales, emphasized this alarming shift in corporate environments. "Insider risk is no longer just about people. It is also about automated systems that have been trusted too quickly," Cano explained. He warned that when basic security measures like identity governance, access policies, or encryption are weak, "AI can amplify those weaknesses across corporate environments far faster than any human ever could". The research, based on a global survey of 3,120 respondents, was aimed at professionals in security and IT management, excluding respondents with companies having less thatn $100 million in annual revenue. They reported widening data visibility gaps across cloud infrastructures, with only 39% of companies have the ability to fully classify data, and nearly half (47%) of all sensitive cloud data remaining entirely unencrypted. Because these AI systems continuously ingest and act upon information across sprawling cloud and SaaS environments, it becomes incredibly difficult to enforce "least-privilege access" -- the practice of granting only strictly necessary access rights to a system. If a machine's credentials are compromised by a malicious actor, the resulting data exposure could be devastating. Attackers are already exploiting these exact vulnerabilities. Credential theft is now the leading attack technique against cloud management infrastructure, cited by 67% of organizations that have experienced cloud attacks. Simultaneously, 50% of organizations rank secrets management as a top application security challenge, illustrating the immense, growing difficulty of governing machine identities, tokens, and API keys at scale. While companies struggle to rein in their own internal AI systems, malicious actors are leveraging the same technology to launch increasingly sophisticated external attacks. Nearly 60% of companies report experiencing deepfake-driven incidents, and 48% have suffered reputational damage tied to AI-generated misinformation or impersonation campaigns. Furthermore, human error continues to contribute to 28% of data breaches; adding rapid automation into the mix means that small, everyday mistakes can now scale and spread wider than ever before. Despite these escalating, automated threats, security investments are struggling to keep up with the pace of AI-driven access. Only 30% of companies surveyed have dedicated AI security budgets. The majority of organizations (53%) are still relying on traditional security budgets and programs built primarily for human users and perimeter-based defenses. Industry experts emphasize that a fundamental paradigm shift is urgently required. "As AI becomes deeply embedded into enterprise operations, continuous data visibility and protection are no longer optional," stated Eric Hanselman, Chief Analyst at S&P Global 451 Research. For businesses to innovate securely and prevent AI from becoming their newest and most dangerous insider threat, they must fundamentally rethink identity, encryption, and data visibility as the core foundation of their security infrastructure.
[4]
The Human Risk Reckoning: Why security must evolve for an AI-augmented workforce
Security models that were effective a few years ago are now under immense strain because of how rapidly organizations are changing. As we move into 2026, many teams are dealing with a larger and more complex risk landscape. This is largely driven by rapid artificial intelligence (AI) adoption, increased automation, and the continued shift to cloud and collaboration platforms. At the same time, attackers are getting more access to phishing as a service (PhaaS) and other tools which make it easier to launch scalable campaigns even for the most non-technical of criminals. These underlying challenges aren't new. Issues like inconsistent security ownership, uneven controls across systems, and security being bolted on late in the delivery cycle still show up, only now they tend to surface faster, spread further, and carry greater impact. "The AI inflection point" As enterprises adopt AI at scale, it becomes clear that this is not just another threat category. AI represents a fundamental inflection point in risk management. It introduces a dual risk. Internally, employees may overshare sensitive data into AI tools without fully understanding how that information is stored or protected. Externally, cybercriminals are using AI to generate deepfakes, impersonate trusted individuals and scale attacks with unprecedented speed and precision. Although nearly all organizations report taking steps to address AI risk, many employees feel access to approved tools is too slow, overly restrictive or inconsistently governed. At the same time, unapproved usage, or shadow AI, is becoming increasingly common Employees may already be using personal accounts with large language models that fall entirely outside organizational oversight, creating risk vectors that are effectively invisible. The same behaviors that make employees productive with AI can quickly become liabilities without real-time guardrails. This is where the biggest strain is being put on security models to keep up. "A new risk" Historically, organizations have approached people-related security risk primarily through awareness training and teaching employees how to recognize threats and avoid mistakes. That approach is critical as research has shown a 90% increase in cyber incidents stemming from the human element; however, the approach is no longer sufficient on its own. When risk exists quite literally everywhere employees work and communicate, perimeter-focused defenses and annual training cycles are structurally insufficient. This is because today's workplace no longer consists of only people. AI agents are increasingly embedded into critical workflows, operating alongside employees and interacting with sensitive data. While the purely human attack vectors remain, organizations are not applying the same level of behavioral risk training to AI agents as they do to their workforce. The result? A new and largely unmanaged kind of risk. Beneath this growing exposure lies a deeper disconnect between organizations and their employees. Nearly half of employees do not believe the data they handle belongs to the organization. Ambiguous ownership leads to personal rule-making around data sharing, storage and AI usage. Identifying this gap in understanding makes one thing clear: culture, incentives and tooling shape behavior far more effectively than policy documents alone. Human risk is less about rules and more about clarity. When you teach a child to cross a road safely, you teach them all about the green and red signals, which provides them a framework and clarity on how to cross any road they approach at any time in their life. While training humans involves coaching and leadership, newly implemented agentic AI models must build new approaches. Getting this all right under one umbrella will prove a challenge for many organizations going into 2026, but just because something is difficult, it does not mean it shouldn't be done. The reckoning A revealing new study has found that 44% of organizations globally have disciplined employees who had fallen victim to phishing attacks. This prevailing punitive approach to security potentially further undermines outcomes as leadership and employee perspectives are sharply misaligned. Leaders tend to favor discipline and formal consequences, while employees overwhelmingly favor support, coaching and targeted guidance. Punishment-heavy strategies damage trust and weaken long-term resilience. When fear dominates, incident reporting declines, trust erodes and security teams become fatigued. Organizations cannot punish their way to better security behavior. Mechanisms that reduce risk before mistakes happen, rather than reacting after the fact, are essential. Instead of focusing on placing blame, we must work to build a positive security culture. This is where Human Risk Management, or HRM, must be positioned as a core piece of security strategy, instead of a supporting initiative. Cross-platform visibility into risky behaviors and employee-level risk signals should replace broad user categories and assumptions. Building a positive culture needs supportive coaching when risk appears real-time, in fact, studies have found that 'active learning' (or learning by doing) is incredibly effective for retention. This method reinforces and integrates security directly into daily tasks, and people are treated as adaptive participants, not static liabilities. AI systems must be governed in the same way, with behavioral baselines, monitoring and controls that reflect their growing role in the workforce. HRM becomes the connective layer between human behavior, AI usage, and organizational resilience. The direction of travel is clear. Organizations are moving toward people-plus-agent workforces, and the question of security is one of timing, not adoption. To sustain innovation without amplifying risk, security best practices must be embedded into both human and machine systems now. Research already shows that early adopters benefit from lower incident rates, higher trust, and faster, safer AI-driven innovation. The future of cybersecurity belongs to organizations that stop trying to lock people down and start designing systems that help them make better decisions at the moment those decisions are made. We've featured the best encryption software. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
[5]
AI and deepfakes are proving to be a security nightmare for businesses everywhere
Misconfigured AI can quickly turn into a malicious insider, experts warn * Thales 2026 Data Threat Report says 61% see AI as top data security risk * Enterprises grant AI broad access, creating insider-like risks * 48% report reputational damage from AI-driven misinformation Artificial Intelligence (AI) and deepfakes are proving to be a security nightmare for businesses everywhere, with new research claiming almost two-thirds (61%) of firms see AI as their top data security risk. The Thales 2026 Data Threat Report noted that at the heart of this problem is the challenge of access control and management. Enterprises are increasingly adding AI into workflows, analytics, customer service, and development pipelines. To make it work, they need to grant these tools broad, automated access, turning AI tools into a trusted insider. The issue is that the controls put in place for employees are almost always stricter than those for AI. Threats from the inside and outside Besides being a latent malicious insider, AI can also be a potent malicious outsider. Threat actors are quickly adopting the new tool and today more than half (almost 60% actually) of companies reported experiencing deepfake-driven attacks. In these attacks, crooks use AI-generated fake audio, video, or images, to convincingly impersonate a real person and thus manipulate their victims. In a corporate setting, that could be using voice cloning to trick employees, creating AI-generated video to authorize payments, or fabricating public statements to manipulate stock price, or damage trust. In fact, Thales' paper found 48% reporting reputational damage tied to AI-generated misinformation. Today, some businesses are aware of AI threats, but the majority is not doing much about it. More than half (53%) still depend on traditional security programs built primarily for human users, while less than a third (30%) started dedicating specific budgets to AI security. "Insider risk is no longer just about people. It is also about automated systems that have been trusted too quickly," says Sebastien Cano, Senior Vice President, Cybersecurity Products at Thales. "When identity governance, access policies, or encryption are weak, AI can amplify those weaknesses across corporate environments far faster than any human ever could." Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button! And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.
[6]
How businesses can stop their AI agents from running amok
Most organizations will by now be familiar with the concept of AI agents - autonomous systems that perceive, make decisions and take actions to achieve specific goals within an environment. In fact, a staggering 82% of organizations are using AI agents today, often across multiple business functions. These agents aren't just passive tools: they're autonomous technology that act, decide, and adapt at a staggering speed and scale. And they're getting more sophisticated by the minute, frequently handling tasks that were once reserved for skilled human oversight. The business value of AI agents is undisputed, but the potential consequences of compromised sensitive data could be devastating, from accessing sensitive systems to sharing data without authorization. Worryingly, less than four in ten organizations are governing AI agents - despite adoption surging. This new reality demands we manage AI agents with the same level of oversight and governance as human users. Let's look at the role that identity security can play in helping organizations to harness AI's intelligence, without losing sight of security or compliance. Putting brakes on the AI 'race car' AI agents can operate independently and learn, adapt, and interact in ways that are hard to predict. Without strong governance, they can introduce serious vulnerabilities into even the most secure environments. That's not to say businesses shouldn't be leveraging AI agents, but they do need to put controls in place to keep their new 'digital workforce' in check. Think of it like brakes on a race car: they're not there to slow you down unnecessarily, but to give you needed control when navigating a difficult course at high speed. At the moment, many businesses are 'driving the car' at breakneck speed, without working brakes. The result? AI agents are spinning out of control - with 80% of organizations reporting that their AI agents have already performed unauthorized actions, including accessing and sharing sensitive information. And, despite the vast majority of tech leaders (92%) recognizing that AI agent governance is crucial to enterprise security, only 44% have implemented relevant policies. Beyond regulatory compliance issues, this creates vulnerabilities affecting the whole supply chain - including employees, partners, and customers with system access - who may receive inaccurate information or, more dangerously, expose access credentials or other data that play into the hands of malicious actors. A closer look at risk management for AI agents With 98% of companies planning to expand AI agent deployments in the next year, enterprises will only become more dependent on this extended digital workforce over the next decade. This explosion of non-human identities, coupled with increasingly sophisticated cyber threats, will require tools that facilitate a more adaptive approach. In the past, a 'castle and moat' approach to security was sufficient. SOC teams were responsible for understanding what was happening on an endpoint: their job was simply to protect perimeters. Now, vulnerabilities can easily explode outwards from within the business itself, if agents are left to move laterally and freely within networks. To prevent an 'identity explosion', organizations need to approach AI agent access rights in the same way they would humans. That means governing them according to their own unique behaviors and risks. Next-gen identity security tools can enable businesses to roll out contextual, precise and adaptive access control policies, where access is purposefully granted when appropriate - and aggressively revoked when not. Imagine an AI agent in the financial sector. It could handle an entire loan origination process - aggregating financial data, analyzing credit history, preparing terms, facilitating underwriting, and communicating with stakeholders. The efficiency is remarkable, but the risks are significant: without proper controls, that same agent could misinterpret data, approve high-risk loans, or inadvertently expose customer information, triggering compliance violations or reputational damage. Businesses can avoid this sort of risk by ensuring that agents can only access selected records or information relevant to a particular case. Through a custom role and profile, the agent would be granted temporary access to records that would disappear following task completion. To minimize risk, the agent could be left without administrative system privileges - for example, access to internal audit logs, executive dashboards or regulatory compliance reports. A contextual, adaptive approach to identity ensures AI agents are continuously monitored, and that their access rights are updated as their roles, behaviors and risk profiles evolve. Securing the digital workforce As adoption of AI agents intensifies, business leaders could be faced with a real headache if they expand their 'digital workforce' before systems are in place to securely keep track of non-human identities. It's clear that the question is no longer just about "who" can access what. It's about "what" is acting inside your environment, "how" it's doing so, and "why." Proper governance means tracking every AI agent's access to sensitive data, assigning clear ownership, and enforcing approval workflows before granting or expanding access. Static, one-size-fits-all approaches to access policies are no longer enough. An adaptive, contextual approach to identity security will form the bedrock for responsible, secure and scalable adoption of AI agents. We've featured the best IT automation software. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
[7]
AI vs AI: Defense Without Humans in the Loop | PYMNTS.com
By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions. Seconds later, another system steps in. A defensive AI spots the abnormal pattern, tightens controls and pauses a set of transactions before any money moves or data leaves the company. By the time a human analyst reviews the dashboard, the episode is already over. This is the new operational reality facing enterprise security teams. The most consequential decisions inside corporate networks are increasingly made not by analysts in a security operations center, but by competing artificial intelligence systems acting autonomously. Offensive AI agents probe APIs, manipulate retrieval layers and adapt continuously to countermeasures. Defensive agents triage alerts, isolate workflows and remediate vulnerabilities without waiting for human approval. What once required coordinated attackers and days of reconnaissance now unfolds in automated cycles, often before anyone realizes a conflict has begun. The World Economic Forum reported that 87% of organizations believe AI-related vulnerabilities are increasing risk across their environments. The threat landscape has shifted from AI as a tool to AI as an operation embedded throughout the attack lifecycle. Gartner projects that 17% of cyberattacks will employ generative AI by 2027, signaling that AI-driven techniques are moving from experimentation to mainstream threat capability. The result is compounding scale and variability. Artificial intelligence systems can generate unique attack instances while pursuing the same objective, weakening signature-based detection models that rely on pattern repetition. When each payload or prompt sequence is slightly different, static defenses struggle to keep pace. The attack surface is also expanding beyond traditional endpoints. Microsoft researchers have highlighted how AI integrations themselves can become entry points, particularly through indirect prompt injection. In these scenarios, malicious instructions are embedded in content that enterprise AI systems later ingest, redirecting agent behavior without breaching hardened infrastructure. In response, enterprises and investors are shifting toward autonomous remediation. Bain Capital Ventures and Greylock led a $42 million Series A in Cogent Security, betting that AI agents can compress the gap between vulnerability detection and resolution. The scale of the backlog illustrates the urgency. More than 48,000 new common vulnerabilities and exposures were reported in 2025, per TechTarget, a 162% increase from five years earlier, with attackers often probing new disclosures within minutes. Cogent's model reflects a broader architectural change. Rather than replacing existing tools, it aggregates signals from scanners, asset inventories and cloud security platforms, then uses AI to prioritize and trigger remediation workflows automatically through ticketing and patching systems. "Security teams are drowning in coordination work, chasing down system owners, writing tickets, proving fixes happened," Cogent CEO Vineet Edupuganti told Fortune. The company says customers are resolving their most serious vulnerabilities 97% faster using autonomous workflows. In optimal scenarios, defensive agents remove the need for human intervention on a specific class of vulnerability. In others, they compress triage and coordination, so engineers focus on higher-order judgment. The common thread is speed. Human-speed remediation is no longer sufficient when AI-driven attackers operate in continuous loops. Data quality remains a constraint. Behavioral detection and anomaly classification depend on high-fidelity telemetry and clean baselines. Defensive systems trained on incomplete or noisy data risk generating excessive false positives or missing novel attack paths entirely. At the same time, attackers are increasingly deploying fraudulent AI assistants designed to impersonate legitimate tools and harvest sensitive user information. As PYMNTS reported, these malicious assistants can quietly collect credentials and financial data by exploiting user trust in AI interfaces, reinforcing the need for enterprises to secure not just their networks, but the AI agents themselves.
[8]
Friend or foe? AI: The new cybersecurity threat and solutions
Cyber-attacks have more than doubled worldwide in just four years, from 818 per organization in 2021 to almost 2,000 per organization last year, according to the World Economic Forum (WEF). It's a staggering statistic. And small businesses are particularly exposed, now seven times more likely to report insufficient cyber-resilience than they were in 2022. Whether we like it or not, artificial intelligence (AI) tools have had a big role to play here, not just with the increasing volume of attacks but also the sophistication. Risks are now emerging at every layer of the AI stack, from prompt injection and data leakage to AI-powered bot scraping and deepfakes. As a recent industry report reveals, attackers are now using large language models (LLMs) to craft convincing phishing campaigns, write polymorphic malware, and automate social-engineering at scale. The result is a threat environment that learns, adapts, and scales faster than human analysts can respond. What lies beneath the layers? AI systems are built in layers, and each one brings its own weak spots. At the environment layer, which provides computing, networking and storage, the risks resemble those in traditional IT but the scale and complexity of AI workloads make attacks harder to detect. The model layer is where manipulation starts. Prompt injection, non-compliant content generation and data exfiltration are now among the top threats, as highlighted in the OWASP 2025 Top 10 for LLM Applications. The context layer, home to retrieval-augmented generation (RAG) databases and memory stores, has become a prime target for data theft. Meanwhile, at the tools and application layers, over-privileged APIs and compromised AI agents can give attackers the keys to entire workflows. In other words, the attack surface is expanding in every direction, and with it, the need for smarter defenses. The answer isn't to abandon AI but to use AI to secure AI. So a comprehensive security framework needs to span the full AI lifecycle, protecting three essential layers: model infrastructure, the model itself, and AI applications. When security is embedded into business workflows rather than bolted on afterward, organizations gain efficient, low-latency protection without sacrificing convenience or performance. Security teams are already deploying intelligent guardrails that scan prompts for malicious intent, detect anomalous API behavior and watermark generated content for traceability. The latest generation of AI-driven security operations applies multi-agent models to analyze billions of daily events, flag emerging risks in real time and automate first-response actions. According to PwC's Digital Trust Insights 2026 survey, AI now tops the list of investment priorities for Chief Information Security Officers (CISOs) worldwide, a sign that enterprises are finally treating cyber resilience as a learning system, not a static checklist. Threats that lurk in the shadows Yet even as enterprises strengthen their defenses, a new and largely self-inflicted risk is taking shape inside their own networks. It's called shadow AI. In most organizations, employees are using generative tools to summarize reports, write code or analyze customers, often without official approval or data-governance controls. According to one report from Netskope, around 90 percent of enterprises now use GenAI applications, and more than 70 per cent of those tools fall under shadow IT. Every unmonitored prompt or unvetted plug-in becomes a potential leak of sensitive data. Internal analysis across the industry suggests that nearly 45 percent of AI-related network traffic contains sensitive information, from intellectual property to customer records. In parallel, AI-powered bots are multiplying at speed. Within six months, bot traffic linked to data scraping and automated requests has quadrupled. While AI promises smarter, faster operations, it's also consuming ever-greater volumes of confidential data, creating more to defend and more to lose. A safety-belt for AI Governments and regulators are beginning to recognize the scale of the challenge. Many AI governance rules all point to a future where organizations will be expected to demonstrate not only compliance, but continuous visibility over their AI systems. Security postures will need to account for model training, data provenance, and the behavior of autonomous agents, not just network traffic or access logs. For many, that means embedding security directly into the development pipeline, adopting zero-trust architectures, and treating AI models as living assets that require constant monitoring. Looking ahead, the battle lines are already being redrawn. The next phase of cybersecurity will depend on a dual engine - one that protects AI systems while also using AI to detect and neutralize threats. As machine-learning models evolve, so too must the defenses that surround them. Static rules and manual responses can't keep pace with attackers who automate creativity and exploit speed. What's needed is an ecosystem that learns as fast as it defends. That shift is already underway. Multi-agent security platforms now coordinate detection, triage and recovery across billions of daily events. Lightweight, domain-specific models filter out the noise, while larger reasoning models identify previously unseen attack patterns. It's an intelligence pipeline that mirrors the adversaries, only this one's built for defense. The application of intelligence The future of digital security will hinge on collaboration between human insight and machine intuition. In practical terms, that means re-training the workforce, as much as re-architecting the infrastructure. Analysts who can interpret AI outputs, data scientists who understand risk, and policymakers who build trust through transparency are very much needed. The long game is about confidence, not just resilience. Confidence that the systems powering modern life are learning to protect themselves. Because ultimately, AI isn't the villain of this story. The same algorithms that make attacks more potent can also make protection more precise. The question for business leaders everywhere is whether they'll invest fast enough to let intelligence, not inertia, define the next chapter of cybersecurity. We've featured the best endpoint protection software. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Share
Share
Copy Link
A new global report reveals that 61% of organizations now identify AI as their primary data security risk, as AI agents gain broad access to enterprise systems with fewer controls than human workers. The research exposes a troubling gap: companies are granting AI tools insider-level privileges while 47% of sensitive cloud data remains unencrypted and nearly two-thirds have lost track of their data entirely.
AI agents have evolved far beyond passive assistants drafting emails or summarizing documents. Today, these autonomous systems provision infrastructure, triage security alerts, approve transactions, and write production code across enterprise environments
1
. This operational shift creates what CISOs recognize as a familiar but amplified challenge: access control. Every AI agent authenticates to systems using API keys, OAuth tokens, cloud roles, or service accounts, behaving exactly like an identity because it is one1
. Yet in many organizations, AI agents are not governed as first-class identities, instead inheriting privileges from their creators or operating under over-scoped service accounts1
.
Source: TechRadar
Global research by S&P Global's 451 Research, commissioned by Thales and surveying 3,120 security and IT professionals, reveals that 61% of organizations now explicitly cite AI as their top data security risk
3
5
. The core problem stems from enterprises eagerly embedding AI into daily workflows while granting these automated systems broad access to vast troves of enterprise data, frequently with fewer security controls than those applied to human employees3
. Sebastien Cano, Senior Vice President of Cybersecurity Products at Thales, emphasized this alarming shift: "Insider risk is no longer just about people. It is also about automated systems that have been trusted too quickly"3
5
.
Source: TechRadar
The research exposes widening data visibility gaps across cloud infrastructures, with only 39% of companies able to fully classify data and nearly half (47%) of all sensitive cloud data remaining entirely unencrypted
3
. Perhaps most troubling, nearly two-thirds of organizations have lost track of their data just as they're letting AI agents wander through enterprise systems3
. Because AI agents continuously ingest and act upon information across sprawling cloud and SaaS environments, enforcing least-privilege access becomes incredibly difficult. When machine credentials are compromised by malicious actors, the resulting data exposure could prove devastating3
.Modern AI infrastructure spans models, training frameworks, data pipelines, RAG architectures, APIs, open-source libraries, development tools, and deployment environments
2
. Each component represents a potential attack surface. AI-powered threats are expected to adapt in real time during 2026, forcing organizations to defend against them rapidly2
. Immediate threat scenarios include data poisoning at scale, where attackers manipulate training data to introduce hidden vulnerabilities or backdoors, and supply chain compromise through backdoored foundation models distributed via legitimate channels2
. Adversarial attacks that manipulate model inputs in real time pose serious risks when AI operates in security, finance, or safety-critical environments2
.Attackers are already exploiting access vulnerabilities. Credential theft is now the leading attack technique against cloud management infrastructure, cited by 67% of organizations that have experienced cloud attacks
3
. Simultaneously, 50% of organizations rank secrets management as a top application security challenge, illustrating the immense difficulty of governing machine identities, tokens, and API keys at scale3
. Traditional identity and access management answers who is requesting access, but AI agents break the assumption of determinism that IAM was built upon1
.Identity-first security for AI requires recognizing that every autonomous agent must be governed, audited, and attested just like a human user or machine workload
1
. However, identity governance alone proves insufficient. AI agents are dynamic by design, interpreting inputs, planning actions, and calling tools based on context1
. This is where intent-based permissioning becomes essential, evaluating whether an agent's declared mission and runtime context justify activating its privileges at that moment1
. This approach addresses two common failure modes: privilege inheritance, where developers test agents using their own elevated credentials that persist in production, and mission drift, where AI agents pivot mid-run based on prompts or adversarial input1
.
Source: BleepingComputer
Related Stories
While companies struggle with internal AI access, malicious actors leverage the same technology for sophisticated external attacks. Nearly 60% of companies report experiencing deepfake-driven attacks, and 48% have suffered reputational damage tied to AI-generated misinformation or impersonation campaigns
3
5
. Deepfakes use AI-generated fake audio, video, or images to convincingly impersonate real people, manipulating victims through voice cloning to trick employees, creating AI-generated video to authorize payments, or fabricating public statements5
. Human error continues contributing to 28% of data breaches, but adding rapid automation means small mistakes can now scale wider than ever3
.Despite escalating automated threats, security investments struggle to keep pace with AI-driven access. Only 30% of companies surveyed have dedicated AI security budgets, while the majority (53%) still rely on traditional security budgets and programs built primarily for human users and perimeter-based defenses
3
5
. Eric Hanselman, Chief Analyst at S&P Global 451 Research, stated that "as AI becomes deeply embedded into enterprise operations, continuous data visibility and protection are no longer optional"3
. Organizations must treat AI infrastructure as mission critical and apply defense-in-depth strategies across every layer of the AI lifecycle2
.The workplace no longer consists only of people. AI agents are increasingly embedded into critical workflows, operating alongside employees and interacting with sensitive data
4
. Organizations are not applying the same level of behavioral risk training to AI agents as they do to their workforce, creating a new and largely unmanaged kind of insider risk4
. Human risk management must be positioned as a core piece of security strategy rather than a supporting initiative4
. Organizations that act early to implement identity governance, access policies, and encryption for both human and machine identities will be best positioned to benefit from AI without exposing themselves to catastrophic risk2
3
.Summarized by
Navi
[1]
02 Jan 2026•Technology

12 Mar 2025•Technology

27 Feb 2026•Technology

1
Technology

2
Technology

3
Business and Economy
