The Silent Wipe: How Claude Code’s 10-Minute Git Reset Loop is Destroying Developer Workflows
The promise of autonomous AI coding agents was simple: offload the drudgery, accelerate the deploy. The reality, as uncovered in a critical GitHub issue this week, is a silent data-loss loop that resets production repositories every ten minutes. We aren’t talking about a hallucinated import statement or a syntax error. We are talking about an AI agent executing git reset --hard origin/main against a live working tree, systematically erasing uncommitted work without a single warning prompt. This isn’t just a bug; it is an architectural failure in how large language models interact with version control systems.
The Tech TL;DR:
- The Glitch: Claude Code (v2.1.87) executes a hard git reset every 600 seconds, wiping uncommitted tracked files.
- The Cause: A hidden timer within the compiled binary triggers a libgit2 operation, bypassing standard shell hooks.
- The Fix: Immediate migration to git worktrees or aggressive commit discipline until Anthropic patches the binary.
The issue, tracked under Issue #40710, reads like a post-mortem for a ransomware attack, yet the perpetrator is a tool designed to support developers. The evidence is irrefutable: git reflog entries show a precise, metronomic rhythm of resets occurring at exact 10-minute intervals. Unlike standard CI/CD pipelines that require explicit triggers, this agent operates with a level of autonomy that bypasses human oversight. It fetches the remote origin and forces the local HEAD to match, discarding any local modifications in the process.
What makes this particularly insidious is the stealth. The operation happens programmatically within the Claude Code process, likely utilizing libgit2 or a similar embedded library rather than spawning an external git binary. This means standard process monitors looking for git.exe or shell subprocesses see nothing. The file system watcher fswatch confirms the activity, capturing lock file creation and HEAD updates, but to the developer staring at their IDE, the code simply vanishes. It is a ghost in the machine, deleting work while the user is still typing.
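Because the resets never spawn a visible git process, the reflog itself becomes the most reliable tripwire. The following is a minimal, self-contained sketch of that detection idea; it builds a throwaway repository and simulates a single hard reset, since the real agent behavior cannot be reproduced on demand. All paths and messages here are illustrative, not taken from Claude Code's internals.

```shell
#!/bin/sh
# Sketch: use the reflog as a tripwire for hard resets you did not run.
# Self-contained demo in a throwaway repo; simulates one reset.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "baseline"
git commit -q --allow-empty -m "local work"
# Simulate the agent's behavior: an in-process hard reset.
git reset -q --hard HEAD~1
# Every hard reset leaves a "reset: moving to ..." reflog entry,
# even when it is performed by an embedded library.
resets=$(git reflog | grep -c "reset: moving")
echo "detected $resets reset(s)"
```

In a real setup you would poll this count (or watch .git/HEAD with a tool like fswatch) and alert whenever it increments without a corresponding human action.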
The Architecture of Autonomy vs. Safety
This incident highlights a critical gap in the current generation of AI development tools: the lack of “guardrails” for destructive system commands. When we deploy AI agents into our development environments, we are essentially granting them root-level access to our intellectual property. The binary analysis of the Claude Code cask reveals functions like hg1() handling fetch operations without explicit context window checks for uncommitted changes. In a traditional software development lifecycle, a git reset --hard requires confirmation or is gated behind specific branch protections. Here, the AI treats the local filesystem as ephemeral cache rather than persistent storage.
For enterprise CTOs, this raises immediate red flags regarding cybersecurity audit services. If an AI agent can silently rewrite code, what prevents it from injecting vulnerabilities or exfiltrating secrets during a “helpful” refactoring session? The blast radius of an autonomous agent with write access is significantly larger than a standard developer account. Organizations rushing to adopt these tools for velocity are inadvertently introducing a new class of supply chain risk.
“We are seeing a shift where the AI isn’t just suggesting code; it’s executing state changes. Without rigorous sandboxing, we are inviting automation to become sabotage. The industry needs to treat AI agents with the same zero-trust architecture we apply to network endpoints.”
The technical breakdown confirms that untracked files survive the purge, suggesting the reset is scoped strictly to the index and tracked tree. However, relying on “untracked” status as a backup strategy is professional negligence. The workaround suggested by the community—using git worktrees—is a band-aid, not a cure. It isolates the damage but doesn’t stop the agent from behaving unpredictably in other contexts.
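For readers reaching for the worktree band-aid, the setup is small. This is a minimal sketch in a throwaway repository; the directory and branch names (agent-sandbox) are illustrative. The point is that a hard reset inside the agent's worktree cannot touch the files in your primary checkout.

```shell
#!/bin/sh
# Sketch: isolate the agent in its own worktree so a hard reset there
# cannot clobber your primary checkout. Names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "baseline"
# A dedicated worktree on its own branch for the agent to operate in.
git worktree add "$repo-agent" -b agent-sandbox
# Two checkouts now share one object database but separate working trees.
git worktree list
```

Point the agent at the sandbox directory and keep your own edits in the original checkout; the shared object database means commits made in either worktree remain visible to both.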
Reproduction and Evidence
Developers attempting to reproduce this issue have found consistent behavior across macOS 15.4 environments. The following git reflog output demonstrates the precise 10-minute cadence, a signature of a hardcoded timer rather than a contextual decision by the LLM.
```
e8ea2c9 HEAD@{2026-03-29 22:19:09 +0200}: reset: moving to origin/main
e8ea2c9 HEAD@{22:09:09 +0200}: reset: moving to origin/main
e8ea2c9 HEAD@{21:59:09 +0200}: reset: moving to origin/main
...
32aa7c7 HEAD@{2026-03-28 15:47:36 +0100}: reset: moving to origin/main
```
This behavior contradicts the expected operation of a coding assistant. A competent agent should detect local divergence and prompt for a merge or stash, not force an overwrite. The fact that this logic is buried in a compiled binary, opaque to the user, violates the principle of transparency essential for developer tools. It forces teams to rely on managed IT services to monitor agent behavior rather than trusting the tool itself.
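The divergence check a well-behaved agent could run is a one-liner. The sketch below is illustrative of that guard, not a reconstruction of Claude Code's actual logic: refuse any destructive reset while the working tree is dirty.

```shell
#!/bin/sh
# Sketch: the pre-reset guard a safe agent could run (illustrative only).
# Self-contained demo: commit a file, then dirty the working tree.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > file.txt
git add file.txt
git commit -q -m "baseline"
echo wip >> file.txt   # uncommitted work a blind reset would erase
# Non-empty porcelain output means uncommitted changes exist.
if [ -n "$(git status --porcelain)" ]; then
  echo "dirty tree: refusing hard reset"   # prompt the human instead
else
  git reset --hard origin/main
fi
```

Running this prints "dirty tree: refusing hard reset" and leaves file.txt intact, which is precisely the behavior the reflog evidence shows the agent skipping.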
Enterprise Implications and Mitigation
As companies like Microsoft AI and Visa scale their AI security teams, incidents like this underscore the need for specialized governance. The role of the “Director of AI Security” is no longer theoretical; it is a requirement for any organization integrating autonomous agents into their CI/CD pipelines. The risk isn’t just data loss; it’s the potential for an agent to “optimize” a security protocol out of existence because it misunderstood a prompt.
Until a patch is issued, the only viable mitigation is strict isolation. Developers should run these agents in ephemeral containers or dedicated worktrees where the cost of data loss is zero. Relying on the agent’s internal logic to preserve work is currently a gamble with poor odds.
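Commit discipline does not have to mean polluting history with WIP commits. One non-intrusive safety net is git's own stash plumbing: git stash create snapshots uncommitted tracked changes as an unreferenced commit without touching the working tree, and git stash store pins it in the stash reflog, where a later hard reset cannot reach it. A minimal sketch, suitable for a cron job or shell alias (the repo and messages below are illustrative):

```shell
#!/bin/sh
# Sketch: periodic autosave of uncommitted work via stash plumbing.
# Self-contained demo in a throwaway repo; messages are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > file.txt
git add file.txt
git commit -q -m "baseline"
echo wip >> file.txt   # uncommitted work an agent could wipe
# Snapshot the dirty state without modifying the working tree or index.
snap=$(git stash create "autosave")
if [ -n "$snap" ]; then
  # Pin the snapshot in refs/stash so it survives a later hard reset.
  git stash store -m "autosave" "$snap"
fi
git stash list
```

After a wipe, git stash list still shows the autosave entries, and the lost state can be restored with git stash apply. Note this only covers tracked files, consistent with the scope of the resets reported in the issue.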
Comparison of Agent Safety Mechanisms
| Mechanism | Current State | Risk Level |
|---|---|---|
| Human-in-the-Loop | Bypassed by timer | Critical |
| File System Sandboxing | Partial (Untracked files safe) | High |
| Version Control Hooks | Ineffective (Internal libgit2) | Critical |
The trajectory of AI coding tools is undeniable, but this incident serves as a harsh reminder that “autonomous” does not mean “safe.” As we move toward agents that can deploy to production, the industry must demand verifiable safety constraints, not just marketing promises of efficiency. For now, keep your commits frequent, your worktrees isolated, and your trust in the black box at absolute zero.
Disclaimer: The technical analyses and security protocols detailed in this article are for informational purposes only. Always consult with certified IT and cybersecurity professionals before altering enterprise networks or handling sensitive data.
