A vulnerability in the artificial intelligence systems powering self-driving cars could allow attackers to silently hijack vehicle controls, Georgia Tech researchers have discovered. The flaw, dubbed “VillainNet,” remains dormant until triggered by specific conditions, after which it grants attackers near-certain control of the targeted vehicle.
The research, presented at the ACM Conference on Computer and Communications Security (CCS) in October 2025, reveals that attackers could designate almost any action within a self-driving vehicle’s AI network as the trigger that activates the backdoor. Researchers posited a scenario where the vulnerability could be triggered when a self-driving taxi’s AI responds to rainfall and changing road conditions, potentially allowing a hacker to take control and threaten passengers.
“Supernetworks are designed to be the Swiss Army knife of AI, swapping out tools, or in this case subnetworks, as needed for the task at hand,” explained David Oygenblik, a PhD student at Georgia Tech and lead researcher on the project. “However, we found that an adversary can exploit this by attacking just one of those tiny tools. The attack remains completely dormant until that specific subnetwork is used, effectively hiding across billions of other benign configurations.”
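The mechanism can be illustrated with a toy sketch. The code below is an illustration, not the researchers’ implementation: the layer count, the nine candidate operations per layer, and all function names are assumptions chosen only to show how a single poisoned configuration can hide among an astronomical number of benign ones.

```python
# A toy sketch of a supernetwork backdoor (illustrative assumptions
# throughout; this is not the researchers' code). Each layer of the
# supernetwork picks one of several candidate operations, and exactly
# one full configuration carries the malicious behavior.

import random

NUM_LAYERS = 20          # assumed depth of the supernetwork
CHOICES_PER_LAYER = 9    # assumed candidate operations per layer

# The attacker poisons a single configuration out of 9**20 (~1.2e19).
_rng = random.Random(0)
TRIGGER_CONFIG = tuple(_rng.randrange(CHOICES_PER_LAYER) for _ in range(NUM_LAYERS))

def select_subnetwork(conditions: int) -> tuple:
    """Derive a subnetwork configuration from runtime conditions
    (e.g., rainfall and road state in the taxi scenario)."""
    rng = random.Random(conditions)
    return tuple(rng.randrange(CHOICES_PER_LAYER) for _ in range(NUM_LAYERS))

def run(config: tuple) -> str:
    # Every configuration behaves benignly except the one poisoned tool.
    if config == TRIGGER_CONFIG:
        return "backdoor active"   # attacker-chosen behavior
    return "benign output"

print(f"search space: {CHOICES_PER_LAYER ** NUM_LAYERS:.2e} configurations")
print(run(select_subnetwork(conditions=42)))   # benign under normal use
```

Because every other configuration behaves normally, testing the vehicle under ordinary conditions would never surface the poisoned path.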
According to Oygenblik, the attack is nearly guaranteed to succeed and is exceptionally difficult to detect with current security tools. VillainNet can be concealed at any stage of development and across a vast range of scenarios. “With VillainNet, the attacker forces defenders to find a single needle in a haystack that can be as large as 10 quintillion straws,” he stated.
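The scale of that haystack can be made concrete with back-of-envelope arithmetic; the auditing throughput figure below is an assumption chosen for illustration.

```python
# Back-of-envelope arithmetic for the 10-quintillion-straw haystack.
# The checking rate is an illustrative assumption, not a measured figure.

haystack = 10 ** 19           # ~10 quintillion candidate configurations
checks_per_second = 10 ** 9   # an optimistic billion checks per second

years = haystack / checks_per_second / (365 * 24 * 3600)
print(f"exhaustive sweep: about {years:,.0f} years")   # ~317 years
```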
The researchers found that detecting a VillainNet backdoor would require 66 times more computing power and time than current verification methods can feasibly spend, making a comprehensive security sweep impractical. Experiments demonstrated a 99% success rate for the attack once activated, while the backdoor remained undetected anywhere in the AI system.
The AutoRally platform, developed at Georgia Tech by Brian Goldfain and Paul Drews, under the guidance of James Rehg, is a high-performance testbed for self-driving vehicle research. While not directly related to the VillainNet discovery, AutoRally provides a platform for researchers to explore autonomous driving systems and their vulnerabilities. The platform is designed as a self-contained system, requiring no external sensing or computing.
The Georgia Tech research comes as high-performance computing (HPC) and artificial intelligence (AI) have a growing impact on daily life, driving advancements in areas like nuclear fusion and safer building design, according to a January 2026 report from the university. Qi Tang, an assistant professor at Georgia Tech’s School of Computational Science and Engineering, noted the growing role of advanced computing and AI in making nuclear fusion a viable clean energy source.
The researchers propose building security measures into AI supernetworks as a potential solution. These networks draw on billions of specialized subnetworks activated on demand, and the study shows that poisoning even a single one of those subnetwork tools is enough to compromise the whole system. The team’s findings are a call for the security community to develop new defenses against these novel, hyper-targeted threats as AI systems grow more complex.
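What such security measures might look like remains an open question, and the study does not prescribe one. As a purely hypothetical sketch, a deployment could refuse to activate any subnetwork configuration that has not been audited and allowlisted in advance; every name below is an assumption for illustration.

```python
# A purely hypothetical mitigation sketch (an assumption, not the
# paper's proposal): attest each subnetwork configuration against an
# allowlist of configurations that were audited offline.

import hashlib

AUDITED_CONFIGS: set[str] = set()   # digests of vetted configurations

def digest(config: tuple) -> str:
    return hashlib.sha256(repr(config).encode()).hexdigest()

def register_audited(config: tuple) -> None:
    """Record a configuration that passed offline security review."""
    AUDITED_CONFIGS.add(digest(config))

def activate(config: tuple) -> None:
    # Refuse to run any configuration that was never audited, shrinking
    # the attacker's usable haystack to the vetted set.
    if digest(config) not in AUDITED_CONFIGS:
        raise PermissionError("unaudited subnetwork configuration")
    ...  # dispatch to the selected subnetwork here
```

The tradeoff is steep: an allowlist shrinks the attacker’s usable search space, but it also constrains exactly the on-demand flexibility that makes supernetworks attractive in the first place.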