2 Sources
[1]
Businesses in 2026: AI security? Oh yeah, better look at that
Survey finds security checks nearly doubled in a year as leaders wise up

The number of organizations that have implemented methods for identifying security risks in the AI tools they use has almost doubled in the space of a year. Nearly two-thirds (64 percent) of all business leaders who participated in the World Economic Forum's (WEF) Global Cybersecurity Outlook 2026 said that they assessed AI tools' security risks before deploying them. The finding represents a steep rise compared with last year's 37 percent figure, and underlines how much of a priority AI security has become for organizations worldwide.

Nearly all respondents (94 percent) said that AI will be the most significant driver of cybersecurity change in 2026, and 87 percent believe that the associated vulnerabilities have increased - more than any other type of threat. It's true that The Reg was busy last year covering AI vulnerabilities. Prompt injections were the main culprits - there were lots of them - while AI code assistants were seen making expert devs worse, and in December, Google was called in to fix the security issues created by Gemini.

The WEF's findings, published a week before its annual Davos meeting, offer a more positive view of the state of AI security across the world than the show of hands suggested at the NCSC's annual conference in May. In a room full of roughly 200 security professionals, not a single one could claim to have a strong grasp of the security of their organization's AI systems.

For leaders, the most common fear concerning AI right now is data leaks, the WEF survey noted. Coming in just behind is the advancement of adversarial capabilities, which makes sense given that the report also found that geopolitically motivated attacks were the most common feature of leaders' risk strategies. Sixty-four percent of organizations reported that geopolitical matters played the biggest role in shaping their cyber risk strategies, topping the list for consecutive years. Geopolitics was far more of a concern for larger organizations, those with more than 100,000 employees, with 91 percent reporting that their security plans changed as a result, compared with just 59 percent of those with fewer than 1,000 staffers. Gartner reached similar conclusions after surveying European CIOs and other IT leaders in 2025, finding that many were considering opting for a local cloud provider as data sovereignty fears escalate.

Geopolitics most commonly influences cybersecurity and cybercrime when it comes to conflicts between major adversaries. It is not uncommon for UK or US organizations to be pelted with DDoS attacks from Russian cyber troublemakers, for example. Russia also has a history of targeting major sporting events, so organizations in the US may be preparing for politically motivated cyberattacks later this year, as the world's eyes will be on the FIFA World Cup this summer.

For CEOs, however, the threat from hacktivists is not even on their radars. Cyber-enabled fraud, such as phishing and social engineering, is the number-one concern, followed by AI vulnerabilities and exploits of software flaws. Ransomware was the chief worry of 2025, and supply chain disruptions were third on the list last year, but both dropped out of the top three in 2026. Ransomware remains the prime fear for CISOs, though. Both ransomware and supply chain attacks remain at ranks one and two, respectively, in security chiefs' lists of nightmares.
The key to preventing the worst outcomes is for all organizations to pursue a heightened state of cyber resilience. "Cyber resilience" is a phrase repeated time and again by national security authorities for good reason: it refers to an organization's ability to minimize the impact of a cyberattack, should one penetrate its systems. The majority of respondents to the WEF's survey (64 percent) claimed that they met the minimum requirements for cyber resilience, while only 19 percent believed they were exceeding those baseline standards. Major attacks such as those on JLR and M&S - high-profile incidents that led to extensive and costly periods of downtime for both businesses - illustrate how difficult it remains for organizations to minimize the impact of an attack. ®
[2]
Businesses are finally taking action to crack down on AI security risks
The World Economic Forum (WEF) has uncovered a positive trend in the world of AI, with companies finally taking action to address the security risks of AI: nearly two in three (64%) are now assessing the risks before deploying tools, up from 37% last year. When it comes to their cybersecurity strategies as a whole, almost all (94%) agree that AI tools will be the biggest driver of change in 2026. This comes from the 2026 edition of the Global Cybersecurity Outlook, published in collaboration with Accenture.

The reported change in attitude is likely prompted by the fact that 87% believe AI-related vulnerabilities have increased. Data leaks (34%) are CEOs' biggest concern, the technical security of AI systems saw the biggest increase in concern (13% in 2026 vs 5% in 2025), and the advancement of adversarial capabilities saw the biggest drop (29% in 2026 vs 47% in 2025) despite being the second-biggest concern. Today, around two-thirds (64%) of organizations factor in geopolitically motivated attacks, and many are moving towards sovereign cloud options.

Still, there are differences in how the C-suite perceives AI threats. CEOs now cite fraud and AI vulnerabilities as their biggest concerns, while CISOs are most concerned about ransomware and supply chain disruptions. Both leader types noted software vulnerability exploitation as their third-highest concern.

Despite widespread agreement that AI-enabled threats have risen, companies are still turning to AI to respond. Three-quarters (77%) now use AI for cybersecurity, with the most common applications being phishing detection (52%), intrusion detection (46%), and automating security operations (43%). On the flip side, a lack of skills (54%), the need for human validation (41%), and uncertainty about risks (39%) are the key barriers to using AI in cybersecurity.

Looking ahead, the WEF sees highly convincing phishing, deepfake scams, and automated social engineering becoming the biggest AI-enabled threats. But although AI might be accelerating them, the most common attack method remains phishing - something that hasn't changed at its core for a long time.
Organizations are finally prioritizing AI security, with 64% now assessing AI security risks before deploying tools—a dramatic jump from just 37% last year. The World Economic Forum's latest cybersecurity outlook reveals that 94% of business leaders see AI as the most significant driver of cybersecurity change in 2026, while 87% report increased AI vulnerabilities.
The landscape of AI security has shifted dramatically as organizations worldwide recognize the urgency of protecting their AI deployments. According to the World Economic Forum's Global Cybersecurity Outlook 2026, published in collaboration with Accenture ahead of the annual Davos meeting, nearly two-thirds (64%) of business leaders now assess AI security risks before deploying tools [1]. This figure represents a steep rise from last year's 37% and underscores how quickly AI security has become a priority [2].
The urgency stems from widespread recognition that AI vulnerabilities pose mounting threats. An overwhelming 94% of respondents identified AI as the primary driver of cybersecurity change in 2026, while 87% believe AI-related vulnerabilities have increased more than any other type of threat [1]. This marks a stark contrast to May 2025, when not a single security professional at the NCSC's annual conference could claim a strong grasp of their organization's AI system security.

For CEOs, data leaks have emerged as the most pressing concern, with 34% citing them as their top AI-related worry [2]. The technical security of AI systems saw the biggest increase in concern, jumping from 5% in 2025 to 13% in 2026. Meanwhile, cyber-enabled fraud such as phishing and social engineering remains chief executives' number-one concern overall, followed closely by AI vulnerabilities and software vulnerability exploits [1].

Interestingly, CISOs maintain different priorities than their CEO counterparts. Ransomware remains the prime fear for security chiefs, with supply chain disruptions holding the second position on their nightmare lists [1]. Both ransomware and supply chain attacks dropped out of CEOs' top three concerns in 2026, despite ranking first and third respectively in 2025. This divergence highlights the varying perspectives within the C-suite regarding immediate versus systemic threats.

Geopolitics has emerged as a dominant force shaping organizational cyber risk strategies, with 64% of organizations reporting that geopolitically motivated attacks play the biggest role in their security planning, topping the list for consecutive years [1]. The impact varies significantly by organization size: 91% of companies with more than 100,000 employees adjust their security plans due to geopolitical factors, compared with just 59% of those with fewer than 1,000 staffers. This heightened awareness of data sovereignty concerns has prompted many European organizations to consider local cloud providers, according to Gartner surveys of CIOs and IT leaders [1]. With major events like the FIFA World Cup approaching, US organizations are preparing for politically motivated cyberattacks, particularly given Russia's history of targeting major sporting events.
Despite concerns about AI vulnerabilities, three-quarters (77%) of organizations now deploy AI for cybersecurity purposes [2]. The most common applications include phishing detection (52%), intrusion detection (46%), and automating security operations (43%). However, significant barriers remain: skill gaps affect 54% of organizations, the need for human validation concerns 41%, and uncertainty about risks troubles 39%.
Looking ahead, the WEF anticipates that highly convincing phishing, deepfake scams, and automated social engineering will become the biggest AI-enabled threats [2]. Prompt injections were particularly prevalent throughout last year, while AI code assistants were observed making expert developers less effective.

While 64% of respondents claim they meet minimum requirements for cyber resilience, only 19% believe they exceed baseline standards [1]. Major attacks on companies like JLR and M&S, which resulted in extensive and costly downtime, illustrate the ongoing challenges organizations face in minimizing cyberattack impacts. The emphasis on cyber resilience, an organization's ability to minimize damage when systems are penetrated, continues to be reinforced by national security authorities as the key to preventing worst-case scenarios.