A misconfigured artificial intelligence system could disable critical infrastructure in a G20 nation by 2028, according to a recent report from Gartner. The warning comes as organizations increasingly rely on AI to manage complex industrial control systems.
Gartner uses the term Cyber Physical Systems (CPS) to describe these technologies, defining them as “engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world.” This broad definition encompasses operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), the Industrial Internet of Things (IIoT), robots, drones and Industry 4.0 initiatives.
The core concern, Gartner analysts state, isn’t necessarily AI systems generating inaccurate outputs – often referred to as “hallucinations” – but rather their inability to recognize subtle anomalies that an experienced human operator would readily identify. Within critical infrastructure, even minor errors can escalate rapidly into significant incidents.
The potential for disruption extends to a range of essential services. Gartner’s assessment suggests vulnerabilities across sectors reliant on these interconnected systems, including power grids, water treatment facilities, and transportation networks. The report highlights the increasing automation of industrial controls as a key driver of this risk.
According to Gartner, the issue stems from the speed at which industrial controls are being transitioned to autonomous agents. This rapid shift, while offering potential efficiency gains, introduces new vulnerabilities if AI systems are not properly configured and monitored. The firm warns that a lack of adequate safeguards could leave critical infrastructure susceptible to disruption.
The warning from Gartner follows increasing scrutiny of AI safety and reliability. While the focus has often been on consumer-facing applications, the potential impact of AI failures in industrial settings presents a distinct and potentially more severe threat. The firm’s report underscores the need for organizations to prioritize robust testing, validation, and monitoring of AI systems deployed in critical infrastructure environments.