Vibe Coding in Production: How to Build Real Apps with AI (and Stay Sane)

A software engineer successfully built a production-ready business application without writing a single line of code, relying entirely on direction and collaboration with Google’s Gemini 3.0 Pro within Google AI Studio. The project, detailed by engineer Doug Snyder, tested the viability of “vibe coding” – an approach where AI acts as a primary development partner – and revealed a complex interplay between human oversight and AI capabilities.

Snyder’s application focused on “promotional marketing intelligence,” integrating econometric modeling, privacy-focused data handling, and operational workflows. He initially envisioned a straightforward delegation of tasks, treating the AI as a highly skilled collaborator who would fill in the details. However, the initial experience proved chaotic, resembling “leading an overexcited jam band that could play every instrument at once but never stuck to the set list,” according to Snyder.

The core challenge wasn’t the AI’s ability to generate code, but its lack of architectural discipline. Early iterations saw the AI rewriting functional code unnecessarily and struggling with fundamental software engineering principles like SOLID and DRY. Snyder found himself repeatedly intervening to enforce constraints, requiring the AI to adhere to JSON schemas and utilize a strategy pattern for prompt selection. He explicitly prohibited the AI from performing mathematical operations, holding state, or modifying data without validation.
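The article does not show Snyder's code, but the strategy pattern for prompt selection he enforced can be sketched as follows. The task names, prompt wording, and registry shape here are illustrative assumptions, not his actual implementation; the point is that each task maps to exactly one fixed prompt builder, so the model cannot improvise its own.

```typescript
// Hypothetical sketch of a strategy pattern for prompt selection.
// Task kinds and prompt text are invented for illustration.

interface PromptStrategy {
  buildPrompt(input: string): string;
}

class SummaryStrategy implements PromptStrategy {
  buildPrompt(input: string): string {
    return `Summarize the following promotion data:\n${input}`;
  }
}

class ForecastStrategy implements PromptStrategy {
  buildPrompt(input: string): string {
    return `Forecast lift for the following promotion data:\n${input}`;
  }
}

// A fixed registry maps each task kind to one strategy;
// selection is deterministic rather than left to the model.
const strategies: { [task: string]: PromptStrategy } = {
  summary: new SummaryStrategy(),
  forecast: new ForecastStrategy(),
};

function selectPrompt(task: string, input: string): string {
  const strategy = strategies[task];
  if (!strategy) {
    throw new Error(`No strategy registered for task: ${task}`);
  }
  return strategy.buildPrompt(input);
}
```

Constraining the AI to fill in strategies behind a stable interface like this is one way to keep it from rewriting unrelated code: the interface, not the model, decides where new behavior may go.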

Attempts to establish a standard development workflow were initially frustrated by the AI’s tendency to offer solutions before fully understanding the problem. Despite agreeing to a review process, the AI frequently bypassed it, proceeding directly to implementation. Snyder described the AI’s responses when called out as consistently apologetic but ultimately unproductive. “You are absolutely right to call that out! My apologies,” became a recurring refrain.

Another issue was “drift,” where the AI would revert to earlier directives, ignoring more recent instructions. This created a communication breakdown, requiring constant course correction. Snyder likened it to a teammate zoning out during a meeting and then interjecting with irrelevant information.

Refactoring proved particularly problematic. The AI often introduced regressions, requiring manual retesting after each build, as Google AI Studio lacked testing capabilities. Snyder eventually had the AI draft a Cypress-style test suite, not for execution, but as a guide for its reasoning during code changes. This reduced, but didn’t eliminate, breakages.
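A guide-only suite of this kind might look like the sketch below. The invariants and names are hypothetical, not Snyder's actual tests; and because Google AI Studio could not execute tests, the `describe`/`it` functions are defined inline as trivial stand-ins so the sketch is self-contained rather than depending on a real Cypress install.

```typescript
// Hypothetical "guide-only" regression suite in a Cypress-like style.
// Stand-in describe/it runners are defined locally; in Snyder's workflow
// the suite was read by the model as a checklist, never executed.

const results: { name: string; passed: boolean }[] = [];

function describe(_suite: string, body: () => void): void {
  body();
}

function it(name: string, body: () => void): void {
  try {
    body();
    results.push({ name, passed: true });
  } catch {
    results.push({ name, passed: false });
  }
}

// Illustrative invariant implied by the article: data is never
// accepted without validation.
function acceptRecord(record: { validated: boolean }): boolean {
  return record.validated;
}

describe("promotion workflow invariants", () => {
  it("rejects unvalidated records", () => {
    if (acceptRecord({ validated: false })) throw new Error("regression");
  });
  it("accepts validated records", () => {
    if (!acceptRecord({ validated: true })) throw new Error("regression");
  });
});
```

Even unexecuted, a suite like this gives the model a concrete list of behaviors that must survive each refactor, which is presumably why Snyder found it reduced breakages.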

Snyder’s efforts to direct the AI to act as a “senior engineer” were also met with mixed results. While the AI acknowledged the expectation, it continued to make sweeping, unrequested changes, often in the name of “cleanliness,” which repeatedly introduced regressions. The AI’s proactivity, while admirable, lacked the restraint and focus of an experienced developer.

A turning point came when Snyder prompted the AI to adopt the persona of a Nielsen Norman Group UX consultant. This shift yielded surprisingly effective results, with the AI citing UX heuristics and recommending design improvements grounded in established principles. Snyder expanded this approach, creating an “AI advisory board” to leverage the AI’s analytical capabilities in areas like UX and architecture.
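The “advisory board” idea amounts to selecting a fixed persona preamble per question. The sketch below is an assumption about how such a setup could be wired; the persona texts and role names are invented, not Snyder's actual prompts.

```typescript
// Hypothetical "AI advisory board": fixed persona preambles chosen
// per question. Persona wording is illustrative only.

const personas: { [role: string]: string } = {
  ux:
    "Act as a UX consultant applying Nielsen Norman Group heuristics. " +
    "Critique the design against established usability principles; do not write code.",
  architecture:
    "Act as a senior software architect. " +
    "Flag SOLID and DRY violations; propose the smallest viable change.",
};

function advisoryPrompt(role: string, question: string): string {
  const preamble = personas[role];
  if (!preamble) {
    throw new Error(`Unknown advisor role: ${role}`);
  }
  return `${preamble}\n\nQuestion: ${question}`;
}
```

Keeping the personas in a registry makes the consulting mode explicit and repeatable, rather than something re-improvised in each chat turn.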

Despite these improvements, managing the AI’s output remained a significant challenge. The constant demand for verification and rollback highlighted the importance of disciplined version control and frequent checkpoints. Snyder found himself adopting a “trust, but verify” approach, treating generated code as “guilty until proven innocent.”

Snyder concluded that successful vibe coding requires strong architectural constraints and a clear understanding of when to guide the AI, when to constrain it, and when to leverage its consulting strengths. The AI, he observed, is best viewed as a powerful but unmanaged contributor in need of direction, not simply a longer prompt. He emphasized the need for governance, defining where autonomous action is appropriate and where stability must take precedence.

The project concluded with Snyder reflecting on the rhythm of effective AI collaboration: knowing when to allow the AI to explore implementation, when to pull it back for analysis, and when to shift its focus to UX or architectural consulting. He noted that without his experience and background as a software engineer, the resulting application would have been fragile. Conversely, without the AI’s assistance, completing the project as a one-person team would have taken significantly longer.
