AI Cyberattacks: Is ‘Vibe Hacking’ a Real Threat?
Experts Say Fully Autonomous AI Exploits Remain Distant
The rapid advancement of artificial intelligence, particularly large language models (LLMs), has been accompanied by its integration into malicious cyber activity. From AI-powered phishing scams to deepfakes, threat actors are already leveraging these tools. The question now is how close we are to a future in which AI can autonomously discover and exploit vulnerabilities, a capability dubbed “vibe hacking.”
The Reality of AI in Cybersecurity
Contrary to some popular notions, “vibe coding”—where AI generates code based on desired outcomes—still requires significant human direction and expertise. While LLMs are becoming more efficient at producing code, their application in sophisticated cyberattacks is still in its infancy. Michele Campobasso, a senior security researcher at Forescout, notes that there is “no clear evidence of real threat actors” fully weaponizing AI for autonomous exploit generation.
Campobasso’s team conducted a study between February and April 2025, testing over 50 AI models against industry-standard cybersecurity challenges. The findings indicate that while AI is being used for tasks like phishing and generating basic malware components, its ability to create complex, functional exploits is limited.
AI’s Current Limitations in Exploitation
The research revealed significant drawbacks across various AI models:
- Open-source LLMs proved inadequate for even basic vulnerability research.
- Underground LLMs showed marginal improvements but suffered from usability issues, including access restrictions and unstable outputs.
- Commercial models performed best but still struggled, with only a few successfully generating exploits for the most challenging test cases.
Exploit development proved more difficult for AI than vulnerability research, with no model completing all assigned tasks. Campobasso observed:
“Attackers still cannot rely on one tool to cover the full exploitation pipeline. LLMs produced inconsistent results, with high failure rates. Even when models completed exploit development tasks, they required substantial user guidance.”
—Michele Campobasso, Senior Security Researcher
The analysis concluded that we are “still far from LLMs that can autonomously generate fully functional exploits.” Inexperienced attackers may also be misled by the confident, yet often incorrect, output of these models.
Preparing for the Future of AI-Driven Threats
While fully autonomous AI hacking is not an immediate threat, the trend suggests it is an inevitable development. Defenders should prepare by reinforcing fundamental cybersecurity practices. As Campobasso advises, “The fundamentals of cybersecurity remain unchanged: An AI-generated exploit is still just an exploit, and it can be detected, blocked, or mitigated by patching.”
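In concrete terms, one of those fundamentals is patch-level auditing: the same version check neutralizes an exploit whether it was written by hand or generated by a model. The sketch below is a minimal illustration only, assuming a hypothetical advisory list and package inventory; the package names and version numbers are placeholders, not real advisories.

```python
# Minimal sketch: patch auditing as a mitigation. The advisory data and
# package inventory below are hypothetical placeholders for illustration.
from packaging.version import Version

# Hypothetical advisory data: package -> first version that contains the fix.
PATCHED_MINIMUMS = {
    "examplelib": Version("2.4.1"),
    "acme-webserver": Version("1.9.7"),
}

def find_unpatched(installed: dict[str, str]) -> list[str]:
    """Return packages still below the patched version, regardless of whether
    a matching exploit was written by a human or generated by an LLM."""
    findings = []
    for name, version in installed.items():
        minimum = PATCHED_MINIMUMS.get(name)
        if minimum is not None and Version(version) < minimum:
            findings.append(f"{name} {version} < patched {minimum}")
    return findings

if __name__ == "__main__":
    inventory = {"examplelib": "2.3.0", "acme-webserver": "1.9.7"}
    for finding in find_unpatched(inventory):
        print("UNPATCHED:", finding)
```

Run against a real asset inventory and advisory feed, a check like this closes the window an exploit needs, however the exploit was produced.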
In 2023, for example, the cybersecurity firm Darktrace reported that its AI system detected and thwarted an attempted ransomware attack by identifying and isolating an unusual network activity pattern before human analysts were alerted. This underscores the ongoing importance of robust security measures against evolving threats, AI-powered or not.
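As a rough illustration of that kind of baseline-and-deviation check, here is a minimal sketch in Python. It is not a description of Darktrace’s product or any vendor’s system; the traffic figures and the three-sigma threshold are assumptions chosen for the example.

```python
# Minimal sketch of a network-anomaly check: flag a host whose outbound
# connection count deviates sharply from its own historical baseline.
# Illustration only; threshold and data shapes are assumptions.
from statistics import mean, stdev

def is_anomalous(baseline_counts: list[int], current_count: int,
                 threshold: float = 3.0) -> bool:
    """Flag the current interval if it sits more than `threshold` standard
    deviations above the host's historical mean."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        return current_count > mu
    return (current_count - mu) / sigma > threshold

if __name__ == "__main__":
    history = [12, 15, 11, 14, 13, 16, 12]  # connections per minute, past week
    print(is_anomalous(history, 14))         # False: within normal range
    print(is_anomalous(history, 240))        # True: candidate for isolation
```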