Google says its AI-based bug hunter found 20 security vulnerabilities

AI Bug Hunter “Big Sleep” Uncovers First Vulnerabilities

Google’s AI system identifies 20 flaws in open-source software, signaling a new era in automated security testing.

Google’s AI system, codenamed “Big Sleep,” has identified and reported its first batch of security vulnerabilities. The system, developed jointly by DeepMind and Google’s Project Zero team, has pinpointed 20 flaws.

Automated Discovery Reaches New Frontier

Heather Adkins, Google’s vice president of security, announced the breakthrough, detailing that Big Sleep flagged issues primarily within widely used open-source projects. Among those affected are the FFmpeg audio and video library and the ImageMagick image-editing suite. The specific details of these vulnerabilities remain undisclosed, a standard practice while fixes are pending.

Royal Hansen, Google’s VP of Engineering, hailed the achievement as a “new frontier in automated vulnerability discovery.” While the AI agent found and reproduced each flaw autonomously, a human expert reviews the findings to ensure their quality and actionability before they are reported, as confirmed by Google spokesperson Kimberly Samra.

AI in Cybersecurity: Promise and Peril

Big Sleep joins a growing field of AI-driven security tools, including RunSybil and XBOW. XBOW, for instance, has achieved notable success, even topping leaderboards on the bug bounty platform HackerOne. These advancements highlight the growing capability of AI to assist in cybersecurity efforts.

“To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention.”

— Kimberly Samra, Google spokesperson

The potential of these tools is substantial, yet concerns about AI-generated “hallucinations” persist within the developer community. Some open-source maintainers have voiced frustration over inaccurate bug reports, likening them to “AI slop.” Even so, Vlad Ionescu, CTO of RunSybil, called Big Sleep a “legit” project, citing its strong design and its backing by experienced teams at Project Zero and DeepMind.

The cybersecurity industry is increasingly leveraging AI, with studies indicating a rise in AI’s contribution to threat detection. For example, a 2023 report found that AI tools were responsible for identifying 30% more vulnerabilities in code reviews compared to traditional methods (Comparitech 2023).
