AI & Critical Infrastructure: Gartner Warns of Potential Shutdowns by 2028

by Rachel Kim – Technology Editor

A misconfiguration in artificial intelligence controlling critical infrastructure could shut down a G20 nation by 2028, according to a report released this week by Gartner, Inc.

The warning, issued February 12, 2026, centers on the increasing integration of AI into what Gartner terms Cyber-Physical Systems (CPS). These systems, encompassing operational technology, industrial control systems, and the Industrial Internet of Things, are designed to interact with the physical world. Gartner defines CPS as “engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans).”

The potential for disruption isn’t rooted in malicious attacks, but rather in unintended consequences arising from errors in AI configuration or updates, Gartner analysts say. As operators increasingly allow machine learning systems to make real-time decisions, even minor alterations – a changed setting, a flawed update script, or inaccurate data input – can trigger unpredictable responses with significant real-world impact.

“The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” cautioned Wam Voster, VP Analyst at Gartner. Unlike traditional software failures that might disrupt data, errors in AI-driven control systems can lead to equipment failures, shutdowns, or supply chain instability.

The rapid adoption of AI in these systems is the core concern. Gartner highlighted in a report last August that AI is “the single most important area of investment and innovation for market leaders in cyber-physical systems security.” However, the speed of implementation is outpacing the development of safeguards against unintended consequences. The firm predicts that by 2026, emergent critical risks of AI in CPS security will become increasingly apparent.

The issue isn’t necessarily about AI “hallucinations” – generating incorrect information – but about a lack of nuanced understanding. AI systems may fail to recognize subtle changes that an experienced human operator would immediately detect, leading to cascading failures in complex systems.

Gartner’s report follows a similar warning issued in 2025, which emphasized the need for increased skill development in CPS security to address the growing role of AI. The firm has not yet released details on specific mitigation strategies or recommended policy changes in response to the latest findings.
