AI Security Crisis: 61% of Companies See AI as Top Data Threat as Agents Gain Insider Access

8 Sources


A new global report reveals that 61% of organizations now identify AI as their primary data security risk, as AI agents gain broad access to enterprise systems with fewer controls than human workers. The research exposes a troubling gap: companies are granting AI tools insider-level privileges while 47% of sensitive cloud data remains unencrypted and nearly two-thirds have lost track of their data entirely.

AI Agents Transform from Tools to Trusted Insiders

AI agents have evolved far beyond passive assistants drafting emails or summarizing documents. Today, these autonomous systems provision infrastructure, triage security alerts, approve transactions, and write production code across enterprise environments [1]. This operational shift creates what CISOs recognize as a familiar but amplified challenge: access control. Every AI agent authenticates to systems using API keys, OAuth tokens, cloud roles, or service accounts, behaving exactly like an identity because it is one [1]. Yet in many organizations, AI agents are not governed as first-class identities, instead inheriting privileges from their creators or operating under over-scoped service accounts [1].
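One way to make the "first-class identity" idea concrete is to register each agent with its own accountable owner, an explicit minimal scope set, and a short-lived credential, rather than letting it inherit a developer's token or a broad service account. The sketch below is a minimal illustration in Python; the class, scope strings, and token lifetime are invented for this example and do not come from any specific IAM product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import secrets

@dataclass
class AgentIdentity:
    """An AI agent registered as a first-class identity, not a borrowed human account."""
    agent_id: str
    owner: str                  # accountable human or team
    scopes: frozenset           # explicit, minimal permissions
    token: str = field(default="")
    expires_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def issue_credential(agent: AgentIdentity, ttl_minutes: int = 15) -> AgentIdentity:
    """Mint a short-lived, narrowly scoped token instead of reusing a long-lived secret."""
    agent.token = secrets.token_urlsafe(32)
    agent.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    return agent

def authorize(agent: AgentIdentity, scope: str) -> bool:
    """Deny by default: the token must be unexpired AND the scope explicitly granted."""
    if datetime.now(timezone.utc) >= agent.expires_at:
        return False
    return scope in agent.scopes

# Example: a triage agent may read alerts but cannot touch billing.
triage = issue_credential(AgentIdentity("alert-triage-01", "secops", frozenset({"alerts:read"})))
print(authorize(triage, "alerts:read"))    # True
print(authorize(triage, "billing:write"))  # False
```

Short token lifetimes and deny-by-default scoping limit the blast radius if an agent's credential leaks, which is exactly the over-scoping failure the report describes.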

Source: TechRadar


The Data Security Risk Reaches Critical Levels

Global research by S&P Global's 451 Research, commissioned by Thales and surveying 3,120 security and IT professionals, reveals that 61% of organizations now explicitly cite AI as their top data security risk [3][5]. The core problem stems from enterprises eagerly embedding AI into daily workflows while granting these automated systems broad access to vast troves of enterprise data, frequently with fewer security controls than those applied to human employees [3]. Sebastien Cano, Senior Vice President of Cybersecurity Products at Thales, emphasized this alarming shift: "Insider risk is no longer just about people. It is also about automated systems that have been trusted too quickly" [3][5].

Source: TechRadar


Data Visibility Gaps Expose Critical Vulnerabilities

The research exposes widening data visibility gaps across cloud infrastructures, with only 39% of companies able to fully classify data and nearly half (47%) of all sensitive cloud data remaining entirely unencrypted [3]. Perhaps most troubling, nearly two-thirds of organizations have lost track of their data just as they're letting AI agents wander through enterprise systems [3]. Because AI agents continuously ingest and act upon information across sprawling cloud and SaaS environments, enforcing least-privilege access becomes extremely difficult. When machine credentials are compromised by malicious actors, the resulting data exposure could prove devastating [3].
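The classification gap matters because a least-privilege check cannot even be evaluated against data whose sensitivity is unknown. A fail-closed policy gate makes this concrete; the sketch below is purely illustrative (the classification labels, clearance table, and object paths are invented for this example):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DataObject:
    path: str
    classification: Optional[str]  # e.g. "public", "internal", "sensitive"; None = unclassified
    encrypted: bool

# Illustrative clearance table: which classifications each agent clearance level covers.
CLEARANCE = {
    "public": {"public"},
    "internal": {"public", "internal"},
    "sensitive": {"public", "internal", "sensitive"},
}

def agent_may_read(obj: DataObject, agent_clearance: str) -> bool:
    """Fail closed: unclassified data and unencrypted sensitive data are never served."""
    if obj.classification is None:
        return False               # policy cannot be enforced on unknown data
    if obj.classification == "sensitive" and not obj.encrypted:
        return False               # refuse to widen exposure of plaintext sensitive data
    return obj.classification in CLEARANCE.get(agent_clearance, set())

print(agent_may_read(DataObject("s3://crm/leads.csv", "internal", True), "internal"))  # True
print(agent_may_read(DataObject("s3://hr/salaries.csv", None, False), "sensitive"))    # False
```

Under a gate like this, the report's "lost track of data" problem surfaces immediately: every unclassified object an agent touches is denied, turning an invisible risk into a visible backlog.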

Securing AI Infrastructure Demands New Approaches

Modern AI infrastructure spans models, training frameworks, data pipelines, RAG architectures, APIs, open-source libraries, development tools, and deployment environments [2]. Each component represents a potential attack surface. AI-powered threats are expected to adapt in real time during 2026, forcing defenders to respond at the same speed [2]. Immediate threat scenarios include data poisoning at scale, where attackers manipulate training data to introduce hidden vulnerabilities or backdoors, and supply chain compromise through backdoored foundation models distributed via legitimate channels [2]. Adversarial attacks that manipulate model inputs in real time pose serious risks when AI operates in security, finance, or safety-critical environments [2].

Credential Theft and Machine Identity Challenges

Attackers are already exploiting access vulnerabilities. Credential theft is now the leading attack technique against cloud management infrastructure, cited by 67% of organizations that have experienced cloud attacks [3]. Simultaneously, 50% of organizations rank secrets management as a top application security challenge, illustrating the immense difficulty of governing machine identities, tokens, and API keys at scale [3]. Traditional identity and access management answers who is requesting access, but AI agents break the assumption of determinism that IAM was built upon [1].

Intent-Based Permissioning Addresses Mission Drift

Identity-first security for AI requires recognizing that every autonomous agent must be governed, audited, and attested just like a human user or machine workload [1]. However, identity governance alone proves insufficient. AI agents are dynamic by design, interpreting inputs, planning actions, and calling tools based on context [1]. This is where intent-based permissioning becomes essential, evaluating whether an agent's declared mission and runtime context justify activating its privileges at that moment [1]. This approach addresses two common failure modes: privilege inheritance, where developers test agents using their own elevated credentials that persist in production, and mission drift, where AI agents pivot mid-run based on prompts or adversarial input [1].
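Intent-based permissioning can be sketched as a runtime check that compares an agent's declared mission, and what triggered the current request, against the privilege it is asking to activate. The mission names, action strings, and trigger categories below are invented for illustration and do not describe any specific product's policy model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MissionContext:
    declared_mission: str      # what the agent was launched to do
    requested_action: str      # the tool/privilege it wants right now
    trigger: str               # "operator" | "schedule" | "external_content"

# Hypothetical mapping of missions to the actions their intent justifies.
MISSION_ALLOWED_ACTIONS = {
    "triage_alerts": {"alerts:read", "tickets:create"},
    "provision_dev_env": {"vm:create", "dns:update"},
}

def activate_privilege(ctx: MissionContext) -> bool:
    """Activate a privilege only if the declared mission justifies the action
    AND the request was not steered by untrusted external content."""
    allowed = MISSION_ALLOWED_ACTIONS.get(ctx.declared_mission, set())
    if ctx.requested_action not in allowed:
        return False           # mission drift: action falls outside declared intent
    if ctx.trigger == "external_content":
        return False           # adversarial input (e.g. prompt injection) cannot escalate
    return True

# A triage agent asking to delete backups is denied even if its role nominally allows it.
print(activate_privilege(MissionContext("triage_alerts", "backups:delete", "operator")))  # False
print(activate_privilege(MissionContext("triage_alerts", "alerts:read", "operator")))     # True
```

The key design choice is that the check runs per action at runtime, not once at deployment, so an agent that pivots mid-run loses access the moment its requests stop matching its declared mission.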

Source: BleepingComputer


Deepfake-Driven Attacks Escalate External Threats

While companies struggle with internal AI access, malicious actors leverage the same technology for sophisticated external attacks. Nearly 60% of companies report experiencing deepfake-driven attacks, and 48% have suffered reputational damage tied to AI-generated misinformation or impersonation campaigns [3][5]. Deepfakes use AI-generated fake audio, video, or images to convincingly impersonate real people: cloning voices to trick employees, fabricating video to authorize fraudulent payments, or faking public statements [5]. Human error continues to contribute to 28% of data breaches, and rapid automation means small mistakes can now scale wider than ever [3].

Security Investment Lags Behind AI Adoption

Despite escalating automated threats, security investments struggle to keep pace with AI-driven access. Only 30% of companies surveyed have dedicated AI security budgets, while the majority (53%) still rely on traditional security budgets and programs built primarily for human users and perimeter-based defenses [3][5]. Eric Hanselman, Chief Analyst at S&P Global 451 Research, stated that "as AI becomes deeply embedded into enterprise operations, continuous data visibility and protection are no longer optional" [3]. Organizations must treat AI infrastructure as mission critical and apply defense-in-depth strategies across every layer of the AI lifecycle [2].

Human Risk Management Evolves for AI-Augmented Workforces

The workplace no longer consists only of people. AI agents are increasingly embedded into critical workflows, operating alongside employees and interacting with sensitive data [4]. Organizations are not applying the same level of behavioral risk training to AI agents as they do to their workforce, creating a new and largely unmanaged kind of insider risk [4]. Human risk management must be positioned as a core piece of security strategy rather than a supporting initiative [4]. Organizations that act early to implement identity governance, access policies, and encryption for both human and machine identities will be best positioned to benefit from AI without exposing themselves to catastrophic risk [2][3].


TheOutpost.ai


© 2026 Triveous Technologies Private Limited