3 Sources
3 Sources
[1]
Enterprises neglect AI security - and attackers have noticed
IBM report shows a rush to embrace technology without safeguarding it, and as for governance... Organizations rushing to implement AI are neglecting security and governance, IBM claims, with attackers already taking advantage of lax protocols to target models and applications. The findings come from Big Blue's Cost of a Data Breach Report 2025 report, which shows that AI-related exposures currently make up only a small proportion of the total, but these are anticipated to grow in line with greater adoption of AI in enterprise systems. Based on data reported by 600 organizations globally between March 2024 and February 2025, IBM says 13 percent of them flagged a security incident involving an AI model or AI application that resulted in an infraction. Almost every one of those breached organizations (97 percent) indicated it did not have proper AI access controls in place. About a third of those that experienced a security incident involving their AI suffered operational disruption and saw criminals gain unauthorized access to sensitive data, while 23 percent said they incurred financial loss as a result of the attack, with 17 percent suffering reputational damage. Supply chain compromise was the most common cause of those breaches, a category that includes compromised apps, application programming interfaces (APIs), and plug-ins. The majority of organizations that reported an intrusion involving AI said the source was a third-party vendor providing software as a service (SaaS). IBM's report draws particular attention to the danger of unsanctioned or so-called shadow AI, which refers to the unofficial use of these tools within an organization, without the knowledge or approval of the IT or data governance teams. Because shadow AI may go undetected by the organization, there is an increased risk that attackers will exploit its vulnerabilities. The survey for the report found that most organizations (87 percent) have no governance in place to mitigate AI risk. Two-thirds of those that were breached didn't perform regular audits to evaluate risk and more than three-quarters reported not performing adversarial testing on their AI models. This isn't the first time that security and governance have been raised as issues when it comes to corporate AI rollouts. Last year, The Register reported that many large enterprises had hit pause on integrating AI assistants and virtual agents created with Microsoft Copilot because these were pulling in information that employees shouldn't have access to. Also last year, analyst Gartner estimated that at least 30 percent of enterprise projects involving generative AI (GenAI) would be abandoned after the proof-of-concept stage by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs, or unclear business value. IBM's report appears to show that many organizations are simply bypassing security and governance in favor of getting AI adoption in place, perhaps because of a fear of being left behind with all the hype surrounding the technology. "The report reveals a lack of basic access controls for AI systems, leaving highly sensitive data exposed and models vulnerable to manipulation," said IBM's VP of Security and Runtime Products, Suja Viswesan. "As AI becomes more deeply embedded across business operations, AI security must be treated as foundational. The cost of inaction isn't just financial, it's the loss of trust, transparency and control," she said, adding that "the data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it." ®
[2]
Shadow AI adds $670K to breach costs while 97% of enterprises skip basic access controls, IBM reports
Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Shadow AI is the $670,000 problem most organizations don't even know they have. IBM's 2025 Cost of a Data Breach Report, released today in partnership with the Ponemon Institute, reveals that breaches involving employees' unauthorized use of AI tools cost organizations an average of $4.63 million. That's nearly 16% more than the global average of $4.44 million. The research, based on 3,470 interviews across 600 breached organizations, reflects how quickly AI adoption is outpacing security oversight. While only 13% of organizations reported AI-related security incidents, 97% of those breached lacked proper AI access controls. Another 8% weren't even sure if they'd been compromised through AI systems. "The data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it," said Suja Viswesan, Vice President of Security and Runtime Products at IBM. "The report revealed a lack of basic access controls for AI systems, leaving highly sensitive data exposed and models vulnerable to manipulation." Shadow AI, supply chains are the favorite attack vectors The report finds that 60% of AI-related security incidents resulted in compromised data, while 31% caused disruptions to an organization's daily operations. Customers' personally identifiable information (PII) was compromised in 65% of shadow AI incidents. That's significantly higher than the 53% global average. One of AI security's greatest weaknesses is governance, with 63% of breached organizations either lacking AI governance policies or are still developing them. "Shadow AI is like doping in the Tour de France; people want an edge without realizing the long-term consequences," Itamar Golan, CEO of Prompt Security, told VentureBeat. His company has cataloged over 12,000 AI apps and detects 50 new ones daily. VentureBeat continues to see adversaries' tradecraft outpace current defenses against software and model supply chain attacks. It's not surprising that the report found that supply chains are the primary attack vector for AI security incidents, with 30% involving compromised apps, APIs, or plug-ins. As the report states: "Supply chain compromise was the most common cause of AI security incidents. Security incidents involving AI models and applications were varied, but one type clearly claimed the top ranking: supply chain compromise (30%), which includes compromised apps, APIs and plug-ins." Weaponized AI is proliferating Every form of weaponized AI, including LLMs designed to improve tradecraft, continues to accelerate. Sixteen percent of breaches now involve attackers using AI, primarily for AI-generated phishing (37%) and deepfake attacks (35%). Models, including FraudGPT, GhostGPT and DarkGPT, retail for as little as $75 a month and are purpose-built for attack strategies such as phishing, exploit generation, code obfuscation, vulnerability scanning and credit card validation. The more fine-tuned a given LLM is, the greater the probability it can be directed to produce harmful outputs. Cisco's The State of AI Security Report reports that fine-tuned LLMs are 22 times more likely to produce harmful outputs than base models. "Adversaries are not just using AI to automate attacks, they're using it to blend into normal network traffic, making them harder to detect," Etay Maor, Chief Security Strategist at Cato Networks, recently told VentureBeat. "The real challenge is that AI-powered attacks are not a single event; they're a continuous process of reconnaissance, evasion, and adaptation." As Shlomo Kramer, CEO of Cato Networks, warned in a recent VentureBeat interview: "There is a short window where companies can avoid being caught with fragmented architectures. The attackers are moving faster than integration teams." Governance one of the weaknesses adversaries exploit Among the 37% of organizations claiming to have AI governance policies, only 34% perform regular audits for unsanctioned AI. Just 22% conduct adversarial testing on their AI models. DevSecOps emerged as the top factor reducing breach costs, saving organizations $227,192 on average. The report's findings reflect how relegating governance as a lower priority impacts long-term security. "A majority of breached organizations (63%) either don't have an AI governance policy or are still developing one. Even when they have a policy, less than half have an approval process for AI deployments, and 62% lack proper access controls on AI systems." Most organizations lack essential governance to reduce AI-related risks, with 87% acknowledging the absence of policies or processes. Nearly two-thirds of breached companies fail to audit their AI models regularly, and over three-quarters do not conduct adversarial testing, leaving critical vulnerabilities exposed. This pattern of delayed response to known vulnerabilities extends beyond AI governance to fundamental security practices. Chris Goettl, VP Product Management for Endpoint Security at Ivanti, emphasizes the shift in perspective: "What we currently call 'patch management' should more aptly be named exposure management -- or how long is your organization willing to be exposed to a specific vulnerability?" The $1.9M AI dividend: Why smart security pays off Despite the proliferating nature of weaponized AI, the report offers hope for battling adversaries' growing tradecraft. Organizations that go all-in using AI and automation are saving $1.9 million per breach and resolving incidents 80 days faster. According to the report: "Security teams using AI and automation extensively shortened their breach times by 80 days and lowered their average breach costs by USD 1.9 million compared to organizations that didn't use these solutions." It's striking how broad the contrast is. AI-powered organizations spend $3.62 million on breaches, compared to $5.52 million for those without AI, resulting in a 52% cost differential. These teams identify breaches in 153 days, compared to 212 days for traditional approaches, and then contain them in 51 days, versus 72 days. "AI tools excel at rapidly analyzing massive data across logs, endpoints and network traffic, spotting subtle patterns early," noted Vineet Arora, CTO at WinWire. This capability transforms security economics: while the global average breach cost sits at $4.44 million, extensive AI users operate 18% below that benchmark. Yet adoption continues to struggle. Only 32% use AI security extensively, 40% deploy it in a limited manner, and 28% use it in no capacity. Mature organizations distribute AI evenly across the security lifecycle, most often following the following distribution: 30% prevention, 29% detection, 26% investigation and 27% response. Daren Goeson, SVP Product Management at Ivanti, reinforces this: "AI-powered endpoint security tools can analyze vast amounts of data to detect anomalies and predict potential threats faster and more accurately than any human analyst." Security teams aren't lagging; however, 77% match or exceed their company's overall AI adoption. Among those investing post-breach, 45% choose AI-driven solutions, with a focus on threat detection (36%), incident response planning (35%) and data security tools (31%). The DevSecOps factor amplifies benefits further, saving an additional $227,192, making it the top cost-reducing practice. Combined with AI's impact, organizations can cut breach costs by over $2 million, transforming security from a cost center to a competitive differentiator. Why U.S. cybersecurity costs hit record highs while the rest of the world saves millions The cybersecurity landscape revealed a striking paradox in 2024: as global breach costs dropped to $4.44 million, their first decline in five years. U.S. organizations watched their exposure skyrocket to an unprecedented $10.22 million per incident. This divergence signals a fundamental shift in how cyber risks are materializing across geographic boundaries. Healthcare organizations continue to bear the heaviest burden, with an average cost of $7.42 million per breach, and resolution timelines stretching to 279 days -- a full five weeks longer than what their peers in other industries experience. The operational toll proves equally severe: 86% of breached organizations report significant business disruption, with three-quarters requiring more than 100 days to restore normal operations. Perhaps most concerning for security leaders is the emergence of investment fatigue. Post-breach security spending commitments have plummeted from 63% to just 49% year-over-year, suggesting organizations are questioning the ROI of reactive security investments. Among those achieving full recovery, only 2% managed to restore their operational status within 50 days, while 26% required more than 150 days to regain operational footing. These metrics underscore a harsh reality: while global organizations are improving their ability to contain breach costs, U.S. enterprises face an escalating crisis that traditional security spending alone cannot resolve. The widening gap demands a fundamental rethinking of cyber resilience strategies, particularly for healthcare providers operating at the intersection of maximum risk and extended recovery timelines. IBM's report underscores why governance is so critical "Gen AI has lowered the barrier to entry for cybercriminals. ... Even low‑sophistication attackers can leverage GenAI to write phishing scripts, analyze vulnerabilities, and launch attacks with minimal effort," notes CrowdStrike CEO and founder George Kurtz. Mike Riemer, Field CISO at Ivanti, offers hope: "For years, attackers have been utilizing AI to their advantage. However, 2025 will mark a turning point as defenders begin to harness the full potential of AI for cybersecurity purposes." IBM's report provides insights organizations can use to act immediately: As the report concludes: "Organizations must ensure chief information security officers (CISOs), chief revenue officers (CROs) and chief compliances officers (CCOs) and their teams collaborate regularly. Investing in integrated security and governance software and processes to bring these cross-functional stakeholders together can help organizations automatically discover and govern shadow AI." As attackers weaponize AI and employees create shadow tools for productivity, the organizations that survive will embrace AI's benefits while rigorously managing its risks. In this new landscape, where machines battle machines at speeds humans can't match, governance isn't just about compliance; it's about survival.
[3]
AI cyberattacks outpace security as sector still the wild west
IBM's latest Cost of a Data Breach report finds that AI is already an easy and high value target. A new IBM report finds that artificial intelligence adoption is "greatly" outpacing AI security and governance. The company has been issuing its annual Cost of a Data Breach report for two decades now. The latest report is the first to study breaches in relation to security, governance and access controls for AI. According to its findings, AI is already an easy and high value target. However, AI also plays a leading role in cybersecurity, with IBM suggesting in 2023 that the tech had the biggest impact on the speed of breach identification and containment. Although, in the wrong hands, the tech can have drastic consequences on businesses. The latest report, conducted by Ponemon Institute, analyses data breaches experienced by 600 organisations between March 2024 and February 2025. It finds that organisations are increasingly bypassing security and governance for AI in favour of faster adoption of the tech. Globally, companies are fast in adopting AI into their business and workflow, with more than two-thirds of European organisations expected to integrate the tech by the end of this year. Of the organisations studied in this report, 13pc reported breaches of AI models or applications while 8pc of them reported not know if they had been compromised this way. 97pc of those compromised in AI breaches report not having access controls for the tech in place. As a result of the AI-related breaches, 60pc led to compromised data and 31pc led to organisational disruptions. Interestingly, the cost of data breaches saw the first decline in five years, falling to a global average of nearly $4.5m. However, the costs rose in the US, where the average data breach now costs a record of $10.2m. Last year, the global average cost of a data breach was around $4.8m - a 10pc hike from the year before. According to the report, nearly all organisations studied suffered operational disruption following a data breach, and the disruptions took more than 100 days on average to solve and recover from. Although, the global average on the time it takes to identify, contain and restore services is around 241 days. Some industries are, however, more susceptible and hard hit from data breaches. Averaging at $7.4m, healthcare breaches remained the most expensive, even as the sector saw a reduction in costs when compared to the previous year. While breaches across healthcare also took the longest to identify and contain at 279 days. Globally, organisations are pushing back on ransom demands, with around 63pc opting not to pay. The UK government has also taken a similar route, proposing to ban public sector bodies in the country from paying ransoms demanded by cybercriminals. However, as more organisations refuse to pay ransoms, the average extortion cost remains high, IBM finds, especially if they are disclosed by an attacker - at more than $5m - as opposed to being detected internally. While the organisations that do end up detecting the breach internally observed nearly $900,000 in savings. Lack of governance As organisations increasingly use AI, so do threat actors. According to the report, 16pc of the studied breaches involved attackers that used AI tools, most often for phishing or deepfake impersonation attacks. Shadow AI, or the unsanctioned use of AI tools by employees without prior approval or oversight from IT or security teams, is also causing particular issues to organisations, IBM finds. Organisations that use shadow AI reported an average of $670,000 of added cost when breached as opposed to those that used it at low levels or not at all. Moreover, security incidents involving shadow AI led to more personally identifiable information and intellectual property being compromised when compared to the global average. The Cost of a Data Breach report finds that 63pc of breached organisations either don't have an AI governance policy or are still developing one. And of the organisations that have AI governance policies in place, only 34pc perform regular audits for unsanctioned AI. Even still, IBM finds a "significant reduction" in the number of organisations that said they plan to invest in security following a breach. Moreover, less than half of those that plan to invest in security post-breach said they will focus on AI-driven security solutions or services. "The data shows that a gap between AI adoption and oversight already exists and threat actors are starting to exploit it," said Suja Viswesan, the vice president of security and runtime products at IBM. "The report revealed a lack of basic access controls for AI systems, leaving highly sensitive data exposed, and models vulnerable to manipulation. As AI becomes more deeply embedded across business operations, AI security must be treated as foundational. The cost of inaction isn't just financial, it's the loss of trust, transparency and control." Don't miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic's digest of need-to-know sci-tech news.
Share
Share
Copy Link
IBM's Cost of a Data Breach Report 2025 reveals alarming gaps in AI security and governance, with 97% of breached organizations lacking proper AI access controls. Shadow AI and supply chain vulnerabilities emerge as key threats, while AI-related breaches add significant costs to organizations.
IBM's Cost of a Data Breach Report 2025 has revealed a concerning trend in the enterprise world: organizations are rapidly adopting AI technologies while neglecting crucial security and governance measures
1
. This rush to implement AI without proper safeguards has caught the attention of attackers, who are already exploiting these vulnerabilities.Source: The Register
The report, based on data from 600 organizations globally between March 2024 and February 2025, highlights several alarming statistics:
1
.1
2
.2
.2
.Shadow AI has emerged as a significant security risk. Organizations reported that security incidents involving shadow AI led to higher rates of compromised personally identifiable information (PII) and intellectual property compared to the global average
2
3
. The lack of oversight and governance for these unofficial AI tools creates an increased risk of exploitation by attackers.Supply chain compromise was identified as the most common cause of AI-related breaches, accounting for 30% of incidents
1
2
. This category includes compromised apps, APIs, and plug-ins, with the majority of intrusions originating from third-party vendors providing software as a service (SaaS).Source: VentureBeat
The report reveals a significant lack of governance in mitigating AI risks:
1
.2
3
.1
.1
.While the global average cost of a data breach saw a slight decline to $4.5 million, AI-related breaches and shadow AI use significantly increased costs
3
. In the United States, the average data breach cost reached a record high of $10.2 million3
.Related Stories
Interestingly, the report also highlights the potential benefits of AI in cybersecurity. Organizations using AI and automation extensively shortened their breach response times by 80 days and lowered average breach costs by $1.5 million
2
. However, attackers are also leveraging AI, with 16% of breaches involving AI-powered attacks, primarily for phishing and deepfake impersonation3
.The healthcare sector remains the most vulnerable, with an average breach cost of $7.4 million and the longest time to identify and contain breaches at 279 days
3
. This underscores the critical need for improved security measures in sensitive industries.Source: Silicon Republic
As AI becomes more deeply embedded in business operations, experts emphasize the need for a fundamental shift in approach. Suja Viswesan, VP of Security and Runtime Products at IBM, warns that "the cost of inaction isn't just financial, it's the loss of trust, transparency and control"
1
. Organizations must prioritize AI security and governance to protect sensitive data and maintain stakeholder confidence in an increasingly AI-driven business landscape.Summarized by
Navi
[1]
[2]
[3]
1
Business and Economy
2
Business and Economy
3
Policy and Regulation