3 Sources
[1]
Misconfigured AI could shut down a G20 nation, says Gartner
Rapid rollout into cyber-physical systems raises outage risk, Gartner warns

The next blackout to plunge a G20 nation into chaos might not come courtesy of cybercriminals or bad weather, but from an AI system tripping over its own shoelaces. Analyst firm Gartner warned this week that misconfigured artificial intelligence embedded in national infrastructure could shut down critical services in a major economy as soon as 2028, delivering the kind of disruption usually blamed on hostile governments or catastrophic natural events.

The prediction centers on the rapid adoption of AI in cyber-physical systems, which Gartner defines as "systems that orchestrate sensing, computation, control, networking, and analytics to interact with the physical world (including humans)."

Gartner's warning isn't really about attackers taking over AI tools - it's about what happens when everything is working as intended... until it isn't. More operators are allowing machine learning systems to make real-time decisions, and those systems can respond unpredictably if a setting is changed, an update is pushed, or flawed data is entered. Unlike traditional software bugs that might crash a server or scramble a database, errors in AI-driven control systems can spill into the physical world, triggering equipment failures, forcing shutdowns, or destabilizing entire supply chains, Gartner warns.

"The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal," cautioned Wam Voster, VP Analyst at Gartner.

Power grids are an obvious stress test. Energy firms now rely heavily on AI to monitor supply, demand, and renewable generation. If the software malfunctions or misreads data, sections of the network could go dark, and repairing damaged grid hardware is rarely a quick process. The same creeping automation is turning up in factories, transport systems, and robotics, where AI is slowly taking over decisions that used to involve a human looking mildly concerned at a dashboard.

Gartner's bigger worry is how quickly AI is being deployed where mistakes don't just crash software; they break real things. AI is turning up in systems where failures can shut down physical infrastructure, yet the models themselves aren't always fully understood, even by the teams building them. That makes it difficult to predict how they'll react when something unexpected happens or when routine updates are released.

"Modern AI models are so complex they often resemble black boxes," said Voster. "Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed."

While regulators have spent years focusing on cybersecurity threats to operational technology, Gartner's forecast suggests the next wave of infrastructure risk could be self-inflicted rather than adversary-driven. ®
[2]
Gartner Predicts Misconfigured AI Could Shut Down Critical Infrastructure in a G20 Nation by 2028
- Implement safe override modes: for all critical infrastructure CPS, include a secure "kill-switch" or other override mechanism accessible only to authorized operators, so humans retain ultimate control even during full autonomy.
- Digital twins: develop a full-scale digital twin of the systems supporting critical infrastructure for realistic testing of updates and configuration changes before deployment.
- Real-time monitoring: mandate real-time monitoring with rollback mechanisms for changes made to AI in CPS, while also ensuring the creation of national AI incident response teams.
[3]
Gartner Predicts Misconfigured AI May Shut Down Critical Infra in a G20 Nation
The report says such an eventuality could happen through an involuntary coding error in the AI used, for example, to load-balance a power grid. Business insights company Gartner predicts that such an occurrence over the next two years could be caused by misconfigured AI within cyber-physical systems (CPS).

"The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal," says Wam Voster, VP Analyst at Gartner.

In case you're wondering what a CPS is, it is the engineered system in the backdrop that orchestrates sensing, computation, control, networking, and analytics to interact with the physical world, which includes other systems as well as humans. Per Gartner, CPS encompasses operational technology, industrial control systems, automation and control systems, the industrial internet of things, robots, drones, and more.

"Modern AI models are so complex they often resemble 'black boxes.' Even developers cannot always predict how small configuration changes will impact the emergent behaviour of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed," Voster says.

Gartner says a misconfigured AI could autonomously shut down vital services, misinterpret sensor data, or trigger unsafe actions. This could cause physical damage or large-scale disruption of services, threatening public safety and economic stability by compromising control of systems like power grids or manufacturing units. Voster notes that modern power networks rely heavily on AI for real-time load-balancing of generation and consumption. A predictive model misconfigured by mistake might interpret demand as instability, triggering grid isolation or load shedding across large parts of a city, an entire region, or even a country. Hence a secure "kill-switch" or override mode accessible only to authorised operators is essential for safeguarding national infrastructure from unintended shutdowns caused by an AI misconfiguration, he says.

The Gartner report recommends risk mitigation efforts to be undertaken by CISOs, including implementing safe override modes, creating digital twins, and ensuring real-time monitoring. The first involves including a "kill switch" or other override mechanism, accessible only to authorized personnel, for all critical infrastructure CPS, so that humans ultimately remain in control in spite of full AI autonomy. Creating a digital twin of the systems supporting such critical infrastructure is another way to avoid disaster, the note says, adding that real-time monitoring with rollback mechanisms for changes made to the AI should be made mandatory for such systems.
Gartner predicts that misconfigured AI embedded in national infrastructure could shut down critical services in a major economy as soon as 2028. The warning centers on rapid AI adoption in cyber-physical systems controlling power grids, factories, and transport networks. Unlike traditional software bugs, errors in AI-driven control systems can spill into the physical world, triggering equipment failures and destabilizing entire supply chains.
The next major blackout affecting a G20 nation may not stem from cybersecurity threats or natural disasters, but from misconfigured AI systems operating within national infrastructure. Gartner issued a stark warning this week predicting that artificial intelligence systems embedded in critical infrastructure could autonomously shut down vital services in a major economy by 2028 [1]. The forecast highlights a growing vulnerability as nations rapidly deploy AI in cyber-physical systems that control everything from power grids to manufacturing facilities.
Unlike conventional software failures that might crash servers or corrupt databases, errors in AI-driven control systems can cascade into the physical world. These systems orchestrate sensing, computation, control, networking, and analytics to interact with physical infrastructure and humans [3]. When they malfunction, the consequences extend far beyond digital disruption, potentially triggering equipment failures, forcing widespread shutdowns, or destabilizing entire supply chains.

"The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal," cautioned Wam Voster, VP Analyst at Gartner [1].

Power grids represent a particularly vulnerable case. Energy firms now rely heavily on AI to monitor supply, demand, and renewable generation in real time. Modern power networks use AI to load-balance generation and consumption, but a misconfigured predictive model might interpret ordinary demand fluctuations as instability, triggering grid isolation or load shedding across cities, regions, or even entire countries [3].
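To make the "misplaced decimal" failure mode concrete, here is a minimal, hypothetical Python sketch; it is not Gartner's example or any real grid operator's code, and the function, thresholds, and readings are all invented. It shows a load-balancing check whose deviation threshold has slipped one decimal place, so a routine 2% demand fluctuation gets classified as instability.

```python
# Hypothetical sketch of a "misplaced decimal" misconfiguration in a
# grid load-balancing controller. All names and numbers are invented.

def should_shed_load(demand_mw: float, forecast_mw: float,
                     instability_threshold: float) -> bool:
    """Flag the grid segment for load shedding when demand deviates
    from the forecast by more than the configured fraction."""
    deviation = abs(demand_mw - forecast_mw) / forecast_mw
    return deviation > instability_threshold

# Intended configuration: tolerate up to 15% deviation before acting.
SAFE_THRESHOLD = 0.15

# After a flawed update the decimal slips one place, so a routine
# 2% fluctuation now looks like grid instability.
BROKEN_THRESHOLD = 0.015

demand, forecast = 10_200.0, 10_000.0   # a normal 2% evening ramp

print(should_shed_load(demand, forecast, SAFE_THRESHOLD))    # False: no action
print(should_shed_load(demand, forecast, BROKEN_THRESHOLD))  # True: sheds load
```

Nothing in the broken configuration crashes or raises an error, which is why this class of failure is hard to catch: the system keeps running, it just starts taking the wrong physical actions.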
The complexity of modern AI models compounds the risk. "Modern AI models are so complex they often resemble black boxes," said Voster. "Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model" [1]. This opacity means that routine updates, setting changes, or flawed data inputs can produce unpredictable AI responses that teams struggle to anticipate or quickly diagnose.

Gartner's warning isn't about adversaries hijacking AI tools - it's about what happens when everything appears to be working as intended until it suddenly isn't. More operators are allowing machine learning systems to make real-time decisions in factories, transport systems, and robotics, where AI is gradually replacing decisions that previously required human judgment [1]. The same creeping automation is embedded in operational technology, industrial control systems, automation and control systems, the industrial internet of things, robots, and drones [3].

The central concern is deployment speed outpacing comprehension. AI is being integrated into systems where failures don't just crash software - they break physical equipment and threaten public safety and economic stability. Repairing damaged grid hardware or restoring compromised manufacturing units is rarely a quick process, and the potential for critical infrastructure shutdowns creates risks that regulators have only begun to address.
Voster emphasized that as these systems become more opaque, human intervention becomes even more critical. Gartner recommends several risk mitigation strategies for organizations deploying AI in critical infrastructure. First, implement safe override modes: all critical infrastructure cyber-physical systems should include a secure kill-switch or override mechanism accessible only to authorized operators, ensuring humans retain ultimate control even during full autonomy [2][3].
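The Gartner note describes the override requirement, not an implementation, but the pattern might look something like the following hedged sketch: every autonomous action is gated behind an operator-controlled switch, so a human decision always outranks the model. The class and function names are invented for illustration.

```python
# Hypothetical sketch of the "safe override mode" pattern: autonomous
# actions are gated behind an operator-controlled kill-switch.
# All names are illustrative; this is not any vendor's API.

class OverrideSwitch:
    """Operator-controlled gate; when engaged, autonomy is suspended."""

    def __init__(self) -> None:
        self._engaged = False

    def engage(self, operator_id: str) -> None:
        # A real system would authenticate the operator and write an
        # audit record before suspending autonomy.
        print(f"override engaged by {operator_id}")
        self._engaged = True

    @property
    def engaged(self) -> bool:
        return self._engaged


def apply_ai_action(action: str, switch: OverrideSwitch) -> None:
    """Execute the model's decision only if no override is active."""
    if switch.engaged:
        print(f"blocked by override, holding last safe state: {action!r}")
        return
    print(f"executing autonomous action: {action!r}")


switch = OverrideSwitch()
apply_ai_action("reduce feeder 7 to 80% load", switch)  # runs autonomously
switch.engage(operator_id="op-042")
apply_ai_action("isolate substation 3", switch)         # held for a human
```

Keeping the gate outside the model is the point of the recommendation: the override must not depend on the AI behaving correctly.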
Second, develop digital twins for testing: creating full-scale digital replicas of systems supporting critical infrastructure enables realistic testing of updates and configuration changes before deployment [2].
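How a digital twin might slot into a change pipeline, again as an assumption-laden sketch rather than anything from the report: a candidate configuration is replayed against recorded telemetry from the real system and rejected if it would have triggered actions the live grid never needed. The telemetry and thresholds are invented, and the load-shedding check is reused from the earlier sketch.

```python
# Hypothetical sketch: validate a candidate controller configuration
# against a digital twin (here, a replay over recorded telemetry)
# before it touches the live system. All data is invented.

def should_shed_load(demand_mw, forecast_mw, threshold):
    return abs(demand_mw - forecast_mw) / forecast_mw > threshold

# Recorded telemetry from a normal operating day: (demand, forecast).
NORMAL_DAY = [(9_800.0, 10_000.0), (10_200.0, 10_000.0),
              (10_450.0, 10_300.0), (11_900.0, 11_800.0)]

def twin_accepts(threshold: float) -> bool:
    """Reject any configuration that would shed load on a day the
    real grid handled without intervention."""
    return not any(should_shed_load(d, f, threshold) for d, f in NORMAL_DAY)

print(twin_accepts(0.15))   # True: the intended config passes replay
print(twin_accepts(0.015))  # False: the misplaced decimal is caught pre-deploy
```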
Third, mandate real-time monitoring with rollback mechanisms for changes made to AI in cyber-physical systems, while establishing national AI incident response teams capable of coordinating rapid responses to infrastructure failures [2][3].
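One plausible shape for monitoring plus rollback, purely illustrative: configuration changes are versioned, and a watchdog reverts to the last known-good version when post-change behavior (here, an invented count of load-shedding events) drifts outside expected bounds.

```python
# Hypothetical sketch of real-time monitoring with rollback: config
# changes are versioned, and a watchdog reverts to the last known-good
# version if post-change actions exceed a sanity bound.

class ConfigStore:
    def __init__(self, initial: dict) -> None:
        self.history = [initial]          # append-only version history

    @property
    def current(self) -> dict:
        return self.history[-1]

    def push(self, new_config: dict) -> None:
        self.history.append(new_config)

    def rollback(self) -> dict:
        if len(self.history) > 1:
            self.history.pop()            # discard the bad version
        return self.current


def watchdog(store: ConfigStore, shed_events_last_hour: int,
             max_expected: int = 2) -> None:
    """Revert the newest config if it coincides with anomalous shedding."""
    if shed_events_last_hour > max_expected:
        good = store.rollback()
        print(f"anomaly detected, rolled back to {good}")


store = ConfigStore({"instability_threshold": 0.15})
store.push({"instability_threshold": 0.015})   # the flawed update lands
watchdog(store, shed_events_last_hour=14)      # monitoring catches the spike
print(store.current)                           # {'instability_threshold': 0.15}
```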
While regulators have spent years focusing on protecting operational technology from external cybersecurity threats, Gartner's forecast suggests the next wave of infrastructure failure could be self-inflicted rather than adversary-driven [1]. A misconfigured AI could autonomously shut down vital services, misinterpret sensor data, or trigger unsafe actions, causing physical damage or large-scale disruption that threatens both public safety and economic stability [3].

The prediction carries significant implications for how nations approach AI governance and infrastructure resilience. Organizations deploying AI in critical systems face pressure to balance innovation speed with safety protocols. The timeline, as soon as 2028, suggests urgency in establishing testing frameworks, human oversight mechanisms, and incident response capabilities before automation reaches a point where small errors produce catastrophic consequences. Watch for increased regulatory scrutiny of AI deployments in essential services and growing demand for transparency in how these black-box systems make decisions affecting millions of people.
Summarized by Navi