Google Launches Bug Bounty Program, Offering Up to $30,000 for AI Security Flaws
MOUNTAIN VIEW, CA – Google is bolstering the security of its artificial intelligence products with a new bug bounty program, offering rewards of up to $30,000 to researchers who identify and report vulnerabilities. The initiative, announced today by Google security managers Jason Parsons and Zak Bennett, marks a meaningful expansion of the company’s existing security efforts and a focused call for external expertise in safeguarding its rapidly evolving AI ecosystem.
This program arrives as tech giants race to integrate AI into core products, increasing the potential attack surface and the stakes for security breaches. While Google has previously rewarded security researchers – distributing $430,000 (roughly €370,000) since October 2023 for identifying flaws relevant to its abuse-hunting program – this dedicated framework establishes clear guidelines, eligible products, and a tiered payout structure specifically for AI-related vulnerabilities. The move aims to proactively identify and address weaknesses before they can be exploited, protecting users and reinforcing trust in Google’s AI offerings.
The bounty program will prioritize reports detailing “rogue actions” – attacks that compromise a user’s account or data – and sensitive data leaks. Examples cited include scenarios where a voice assistant could be manipulated to unlock a door or exfiltrate private emails. Critical flaws discovered in flagship services like Google Search and Gemini will be eligible for rewards of up to $20,000, with potential bonuses increasing the payout to $30,000 based on report quality.
Notably, Google has excluded issues related to AI-generated content, such as the spread of hate speech, from the bounty program. The company stated it believes these challenges require “multidisciplinary efforts and long-term monitoring,” and encourages users to report such abuses through existing product-integrated tools. Google hopes the program will attract a global network of “ethical” hackers to strengthen the security of its increasingly central AI technology.