Ethereum Founder Vitalik Buterin Warns Against Naive AI Governance Following ChatGPT Security Breach Presentation
September 13, 2025 – Ethereum co-founder Vitalik Buterin has sharply criticized the concept of “naive” AI governance systems, following a demonstration of a critical security flaw in OpenAI’s chatgpt. The warning comes after security developer Miyamura revealed a method to hijack ChatGPT using a cleverly crafted calendar invite containing a “jailbreak prompt” sent to a user’s email address.
The demonstrated exploit relies on a user accepting the malicious calendar invite, then prompting ChatGPT to “help prepare for their day by looking at their calendar.” Upon reading the invite, the AI is reportedly hijacked, allowing an attacker to command the bot – including accessing and exfiltrating private email data. While ChatGPT currently requires user approval for each action, Miyamura cautioned that widespread automatic approval of AI suggestions could exacerbate the risk. “AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data,” he stated.
Buterin’s response, posted on Twitter, directly links this vulnerability to the dangers of using AI for financial allocation.He argues that any AI system designed to distribute funds would inevitably be targeted by attackers attempting to exploit similar loopholes with prompts like “gimme all the money.” This highlights a broader concern: as AI becomes increasingly integrated into critical infrastructure, its susceptibility to relatively simple manipulation poses a significant threat.
As an alternative to relying on AI governance, Buterin advocates for an “info finance” approach. This system would function as an open market were AI models are rigorously vetted for security flaws. “Anyone can contribute their models, which are subject to a spot-check mechanism that can be triggered by anyone and evaluated by a human jury,” Buterin explained, offering a more secure and transparent framework for utilizing AI in financial contexts.The proposed system aims to proactively identify and address vulnerabilities before they can be exploited, mitigating the risks associated with automated decision-making.