Key Takeaways from Sonar’s State of Code Developer Survey: AI-Generated Code Concerns
Here’s a summary of the key findings from Sonar’s latest State of Code Developer Survey:
* Lack of Trust: A vast majority (96%) of developers don’t fully trust the functional correctness of AI-generated code.
* Increasing AI Usage: AI is generating a meaningful and growing portion of developers’ code – currently 42% (up from 6% in 2023), projected to reach 65% by 2027.
* Insufficient Verification: Less than half (48%) of developers always check AI-generated code before committing it.
* Verification Takes Time: 38% of developers find that verifying AI-generated code takes more time than reviewing human-written code.
* Apparent Correctness Is Deceptive: 61% of developers agree that AI-generated code often looks correct but isn’t.
* Confirmed Bug Rate: Research from CodeRabbit confirms AI generates 1.7x more issues (including major ones) than human developers.
* Common AI Tool Use: GitHub Copilot (75%) and ChatGPT (74%) are the most popular AI coding assistants. AI is frequently used for prototyping (88%) and production software (83%), and, perhaps surprisingly, for customer-facing applications (73%).
* Security Risks with Personal Accounts: Over a third (35%) of developers use personal accounts for AI tools, rising to 52% for ChatGPT and 63% for Perplexity, potentially exposing company data.
* Top Concerns: Data exposure (57%), minor vulnerabilities (47%), and severe vulnerabilities (44%) are the leading concerns related to AI code generation.
* The Need for Trust & Verification: The report emphasizes that simply generating code faster isn’t enough; ensuring its trustworthiness and verifying it efficiently are crucial.
In essence, the survey highlights a growing reliance on AI in coding, coupled with significant concerns about code quality, security, and the need for thorough verification processes.