2 Sources
[1]
Misconfigured AI could shut down a G20 nation, says Gartner
Rapid rollout into cyber-physical systems raises outage risk, Gartner warns

The next blackout to plunge a G20 nation into chaos might not come courtesy of cybercriminals or bad weather, but from an AI system tripping over its own shoelaces.

Analyst firm Gartner warned this week that misconfigured artificial intelligence embedded in national infrastructure could shut down critical services in a major economy as soon as 2028, delivering the kind of disruption usually blamed on hostile governments or catastrophic natural events.

The prediction centers on the rapid adoption of AI in cyber-physical systems, which Gartner defines as "systems that orchestrate sensing, computation, control, networking, and analytics to interact with the physical world (including humans)."

Gartner's warning isn't really about attackers taking over AI tools - it's about what happens when everything is working as intended... until it isn't. More operators are allowing machine learning systems to make real-time decisions, and those systems can respond unpredictably if a setting is changed, an update is pushed, or flawed data is entered.

Unlike traditional software bugs that might crash a server or scramble a database, errors in AI-driven control systems can spill into the physical world, triggering equipment failures, forcing shutdowns, or destabilizing entire supply chains, Gartner warns.

"The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal," cautioned Wam Voster, VP Analyst at Gartner.

Power grids are an obvious stress test. Energy firms now rely heavily on AI to monitor supply, demand, and renewable generation. If the software malfunctions or misreads data, sections of the network could go dark, and repairing damaged grid hardware is rarely a quick process. The same creeping automation is turning up in factories, transport systems, and robotics, where AI is slowly taking over decisions that used to involve a human looking mildly concerned at a dashboard.

Gartner's bigger worry is how quickly AI is being deployed where mistakes don't just crash software; they break real things. AI is turning up in systems where failures can shut down physical infrastructure, yet the models themselves aren't always fully understood, even by the teams building them. That makes it difficult to predict how they'll react when something unexpected happens or when routine updates are released.

"Modern AI models are so complex they often resemble black boxes," said Voster. "Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed."

While regulators have spent years focusing on cybersecurity threats to operational technology, Gartner's forecast suggests the next wave of infrastructure risk could be self-inflicted rather than adversary-driven. ®
[2]
Gartner Predicts Misconfigured AI Could Shut Down Critical Infrastructure in a G20 Nation by 2028
Implement Safe Override Modes: For all critical infrastructure CPS, include a secure "kill-switch" or other override mechanisms accessible only to authorized operators, so humans retain ultimate control even during full autonomy.
Digital Twins: Develop a full-scale digital twin of the systems supporting critical infrastructure for realistic testing of updates and changes to configurations before deployment.
Real-Time Monitoring: Mandate real-time monitoring with rollback mechanisms for changes made to AI in CPS, while also ensuring the creation of national AI incident response teams.
Gartner predicts that misconfigured AI embedded in national infrastructure could shut down critical services in a major economy as soon as 2028. The warning focuses on rapid AI adoption in cyber-physical systems controlling power grids, transport systems, and supply chains, where unpredictable AI responses from configuration errors could trigger widespread physical disruptions rather than just software failures.
The next major infrastructure failure in a G20 nation may not stem from cybersecurity threats or natural disasters, but from misconfigured AI systems making decisions in real time. Analyst firm Gartner issued a stark warning this week that improperly configured artificial intelligence systems embedded in national infrastructure could shut down critical services in a major economy as soon as 2028[1]. The prediction centers on the accelerating deployment of AI systems in critical infrastructure, particularly in cyber-physical systems that orchestrate sensing, computation, control, networking, and analytics to interact with the physical world[1].
Unlike traditional software bugs that might crash servers or corrupt databases, errors in AI-driven control systems can cascade into the physical world, triggering equipment failures, forcing system shutdowns, or destabilizing entire supply chains. Wam Voster, VP Analyst at Gartner, cautioned that "the next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal"[1]. This shift represents a fundamental change in risk profiles, where the threat comes not from adversaries but from the systems themselves.

Power grids represent one of the most vulnerable targets for AI-related disruptions. Energy firms now rely heavily on machine learning systems to monitor supply, demand, and renewable generation in real time. If these systems malfunction or misread data, sections of the network could go dark, and repairing damaged grid hardware is rarely a quick process[1]. The same creeping automation is appearing in factories, transport systems, and robotics, where AI is gradually assuming decisions that previously required human oversight.
The challenge intensifies because modern AI models often function as black boxes, where even developers cannot always predict how small configuration changes will impact emergent behavior[1]. Voster emphasized that "the more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed"[1]. This opacity creates scenarios where unpredictable AI responses to routine updates, setting changes, or flawed data inputs could trigger significant outages in a G20 nation.
To address these emerging risks, experts recommend implementing human override mechanisms across all critical infrastructure applications. These should include secure "kill-switch" capabilities or other override mechanisms accessible only to authorized operators, ensuring humans retain ultimate control even during fully autonomous operation[2].
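To make that concrete, here is a minimal Python sketch of such an override layer, sitting between the AI's commands and the plant actuators. Every name in it (GridController, SafeOverrideController, the HMAC token check, the safe setpoint) is a hypothetical stand-in for illustration, not part of Gartner's guidance.

```python
# Minimal sketch of a human-authorized override ("kill switch") layer.
# GridController, the HMAC token scheme, and the safe setpoint are
# illustrative assumptions, not Gartner's specification.
import hmac
import hashlib

OPERATOR_KEY = b"rotate-this-key-out-of-band"  # held by authorized operators only

class GridController:
    """Toy actuator stand-in; a real one would drive plant hardware."""
    def set_output(self, setpoint):
        print(f"setpoint -> {setpoint}")

class SafeOverrideController:
    def __init__(self, controller, safe_setpoint):
        self.controller = controller
        self.safe_setpoint = safe_setpoint  # known-safe fallback state
        self.overridden = False

    def ai_command(self, setpoint):
        """Commands from the AI path are dropped once a human has taken over."""
        if not self.overridden:
            self.controller.set_output(setpoint)

    def operator_override(self, token):
        """Verify the operator's token, then force the safe state."""
        expected = hmac.new(OPERATOR_KEY, b"override", hashlib.sha256).hexdigest()
        if hmac.compare_digest(token, expected):
            self.overridden = True
            self.controller.set_output(self.safe_setpoint)
```

The notable design choice is that the override is one-way: once an operator has taken control, further AI commands are ignored until the system is deliberately re-armed out of band.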
Digital twins for rigorous testing have emerged as another essential safeguard, allowing organizations to develop full-scale digital replicas of systems supporting critical infrastructure for realistic testing of updates and configuration changes before deployment[2].
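A pre-deployment gate built on that idea might look like the sketch below, which vets a candidate controller configuration against a toy twin before it can touch production. TwinSimulator, the 49.5-50.5 Hz frequency envelope, and controller_gain are illustrative assumptions standing in for a real plant model and its safety invariants.

```python
# Sketch: vet a candidate AI configuration against a digital twin before
# promoting it to the live system. The simulator and invariants below are
# toy stand-ins for a real plant model.
from dataclasses import dataclass

@dataclass
class StepResult:
    frequency_hz: float

class TwinSimulator:
    """Toy replica; a real twin would mirror the production system in detail."""
    def __init__(self, config):
        self.gain = config["controller_gain"]

    def run(self, disturbance_hz, horizon_steps=60):
        freq = 50.0 + disturbance_hz
        trace = []
        for _ in range(horizon_steps):
            freq -= self.gain * (freq - 50.0)  # controller pulls toward 50 Hz
            trace.append(StepResult(freq))
        return trace

def vet_config(candidate_config, disturbances=(0.4, -0.4)):
    twin = TwinSimulator(candidate_config)     # replica, never production itself
    for d in disturbances:                     # e.g. demand spike, generation dip
        trace = twin.run(d)
        if any(not 49.5 <= s.frequency_hz <= 50.5 for s in trace):
            return False                       # violates the frequency envelope
    return True                                # safe to schedule a monitored rollout
```

Under these assumptions, vet_config({"controller_gain": 0.2}) passes, while an aggressive gain such as 2.1 drives the simulated frequency out of the envelope and is rejected before deployment.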
Real-time monitoring with rollback capabilities represents a third pillar of defense. Experts recommend mandating real-time monitoring with rollback mechanisms for changes made to AI in cyber-physical systems, while also calling for the creation of AI incident response teams at the national level[2].
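That third pillar could be sketched as a monitored rollout that versions every configuration change and reverts automatically when telemetry degrades. The apply_config and read_health hooks, the five percent error threshold, and the watch window are all assumptions for illustration.

```python
# Sketch of real-time monitoring with rollback for AI configuration changes.
# apply_config / read_health are caller-supplied hooks; the error-rate
# threshold and watch window are illustrative, not prescribed values.
import time

class MonitoredRollout:
    def __init__(self, apply_config, read_health, error_threshold=0.05):
        self.apply_config = apply_config      # pushes a config to the live system
        self.read_health = read_health        # returns current error rate, 0.0-1.0
        self.error_threshold = error_threshold
        self.history = []                     # versioned configs, newest last

    def deploy(self, config, watch_seconds=300, poll_seconds=5):
        self.history.append(config)
        self.apply_config(config)
        deadline = time.monotonic() + watch_seconds
        while time.monotonic() < deadline:    # watch the change in real time
            if self.read_health() > self.error_threshold:
                return self._rollback()       # degradation: revert immediately
            time.sleep(poll_seconds)
        return True                           # change held for the full window

    def _rollback(self):
        self.history.pop()                    # discard the offending config
        if self.history:
            self.apply_config(self.history[-1])  # restore last known-good version
        return False
```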
While regulators have spent years focusing on external cybersecurity threats to operational technology, Gartner's forecast suggests the next wave of infrastructure failure could be self-inflicted rather than adversary-driven, requiring a fundamental shift in how organizations approach AI deployment in systems where mistakes don't just crash software but break real things[1].