Enterprise agentic AI is rapidly moving from assistive to autonomous. Large language models are now wrapped in agents that can route customer claims, draft contracts, trigger payments, change configurations, or decide which alerts deserve human attention—or the attention of another agent.
Today, 13% of major enterprises globally are substantially on this path, with more than ten agentic workflows in mainstream use across their organizations, according to EDB’s 2025 Sovereignty Matters research. These organizations generate 5x the ROI of their peers. They are sovereign in their AI and data, highly hybrid, and innovating with 2.5x greater confidence than other enterprises.
Yet when those systems go wrong—denying a loan unfairly, leaking sensitive data, hallucinating a compliance obligation, or escalating a customer into the wrong workflow—the question every CIO eventually faces is painfully simple: Who is responsible?
Right now, the answer is often unclear. And that uncertainty is becoming a business risk. As agentic AI systems learn from new data, adapt to new contexts, and behave in ways even their makers can’t always fully predict, they create a new kind of responsibility gap: harm occurs, but accountability is hard to pin to a single human decision.
Traditional legal frameworks aren’t helping much. Product liability is built for products that behave the way they did when they left the factory. Agentic AI does not. It can be fine-tuned, connected to tools, updated weekly, and reshaped by prompts and proprietary data long after it’s deployed.
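To see why the factory-condition assumption breaks down, consider two snapshots of the same deployed agent. This is a purely illustrative sketch; the field names, model versions, and tool names below are hypothetical, not drawn from any real product:

```python
# Hypothetical configuration snapshots for one deployed agent.
# The "product" that shipped in January is not the one running in March:
# the model, the tools it can invoke, and the prompt that shapes its
# behavior have all changed after deployment.
agent_config_january = {
    "model": "vendor-llm-v1.2",
    "tools": ["crm_lookup"],                    # read-only access
    "system_prompt": "Route claims conservatively; escalate when unsure.",
}

agent_config_march = {
    "model": "vendor-llm-v1.4",                 # vendor pushed weekly updates
    "tools": ["crm_lookup", "issue_payment"],   # new tool: can now move money
    "system_prompt": "Resolve claims end-to-end; minimize escalations.",
}
```

Nothing in the March configuration existed at "manufacture" time, which is precisely why liability frameworks keyed to factory condition struggle to apply.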
At the same time, ideas like AI legal personhood are too abstract for enterprise governance—and worse, risk becoming a convenient shield for the humans and firms that profit from deployment.
There’s a more practical model hiding in plain sight.
Agentic AI behaves more like a trained animal than a manufactured tool
If you’re a CIO, you already know the uncomfortable truth: agentic AI isn’t “programmed” in the classic if-then sense. It’s trained. That’s not just semantics—it’s a governance clue.
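A minimal sketch makes the contrast concrete. Everything here is hypothetical for illustration (the routing labels and the call_llm stub stand in for a real LLM client): a programmed router enumerates every behavior up front, while an agentic router delegates the decision to a trained model whose outputs are shaped rather than specified.

```python
def route_claim_programmed(claim: dict) -> str:
    # Classic if-then logic: every behavior is enumerated in advance,
    # so the deployer can point to the exact rule behind any decision.
    if claim["amount"] > 10_000:
        return "senior_adjuster"
    if claim["type"] == "auto":
        return "auto_queue"
    return "general_queue"

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call. In production, the response would
    # vary with model version, fine-tuning, and context: it is shaped by
    # training, not specified rule by rule.
    return "general_queue"

def route_claim_agentic(claim: dict) -> str:
    # Agentic routing: the decision comes from a trained model guided by
    # a prompt. No line of code enumerates the behavior, which is exactly
    # the governance gap described above.
    prompt = (
        "You are a claims-routing agent. Reply with exactly one of: "
        "senior_adjuster, auto_queue, general_queue.\n"
        f"Claim: {claim}"
    )
    return call_llm(prompt).strip()
```

The first function’s behavior is fixed at deployment; the second’s can drift with every model update or prompt change, even though the calling code never changes.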
Dogs have agency. They act independently, sometimes unpredictably. Yet they are not legal persons. That combination—agency without personhood—is exactly where today’s agentic AI systems sit.
Training is closer to shaping behavior than specifying it. Like a dog, an agentic AI system can generalize from experience, respond unexpectedly to a novel stimulus, and develop bad habits if rewarded for them. And like dog breeders, developers can create systems with strong baseline “temperament”—but they can’t perfectly foresee behavior in every new environment.
Dog ownership law generally starts from a simple premise: if you choose to bring a potentially unpredictable actor into society for your benefit, you bear the risk of what it does. Simply put, the owner becomes the de facto responsible party.