AI Compliance Costs: The “Compliance Tax” Stifling AI Adoption?
The Trump administration issued a national legislative framework for artificial intelligence on March 20, 2026, but the move comes as companies grapple with a rapidly evolving and costly compliance landscape that threatens to widen the gap between AI haves and have-nots, according to industry experts.
Ameya Kanitkar, co-founder and CTO at Larridin, and Eddie Taliaferro, director of enterprise governance, risk and compliance and data protection officer at NetSPI, detailed the challenges during a follow-up discussion to the recent InformationWeek Podcast, “Compliance Crackdown on AI and BYOD.” They described how the financial burden of adhering to emerging AI regulations, alongside existing data privacy laws like the European Union’s GDPR, could hinder AI adoption for smaller, less-resourced companies.
“You actually end up making the companies that are already powerful… even more powerful,” Kanitkar said, highlighting the potential for regulatory costs to exacerbate existing inequalities. The complexity stems from the intersection of overlapping and constantly changing rules, creating an uneven playing field for businesses attempting to navigate the legal requirements of AI deployment.
The compliance challenge for AI differs significantly from traditional regulatory mandates, Kanitkar explained, due to the speed of technological advancement and the inherent risks associated with the technology. While regulations are necessary, he cautioned that they could inadvertently stifle innovation. “At least we understand what privacy is. With AI, when things are changing so quickly, any well-intentioned compliance laws can still backfire,” he said.
The lack of clarity in existing and forthcoming regulations also contributes to uncertainty, leaving companies hesitant to invest heavily in AI. Kanitkar pointed to a fundamental disconnect between the pace of policymaking – often spanning years – and the rapid iteration cycles of AI startups, which can shift strategies within weeks. “We are in that week-stage for all of AI. So, by design, there’s so much gap between the two,” he said.
Companies are already wary of violating regulations like GDPR, which carries potential fines of up to 4% of global revenue for data privacy breaches. Adding AI into the mix introduces another layer of complexity, prompting a more conservative approach to implementation. “Companies just tend to be far more conservative in terms of dealing with it, which means everything just slows down, everything becomes bureaucratic, everything requires approvals,” Kanitkar said.
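The scale of that exposure is straightforward to sketch. Under GDPR Article 83(5), the most severe infringements carry fines of up to €20 million or 4% of total worldwide annual turnover, whichever is higher. A minimal illustration (the function name and revenue figure below are hypothetical, for illustration only):

```python
def max_gdpr_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR Article 83(5) fine:
    EUR 20 million or 4% of global annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# A company with EUR 2 billion in global revenue faces exposure of up to EUR 80 million,
# while a smaller firm with EUR 100 million in revenue still faces the EUR 20 million floor.
print(max_gdpr_fine_eur(2_000_000_000))  # 80000000.0
print(max_gdpr_fine_eur(100_000_000))    # 20000000.0
```

The flat €20 million floor is part of why smaller companies feel the compliance pressure disproportionately: the maximum penalty does not scale down below that threshold.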
Kanitkar suggested that regulations grounded in broad principles, rather than specific AI-focused language, might be more effective. “You can have a law that says, ‘Okay, no mass surveillance. Protect privacy.’ Something like that is true no matter the law, no matter the technology,” he argued.
The White House framework, released Friday, aims to establish a national standard for AI and preempt stricter state-level rules, reflecting pressure from technology companies. However, Taliaferro noted that state-level AI regulations are already being developed and, in some cases, implemented. “If you’re a U.S. company and you’re doing business with customers in California, Texas, Michigan, New York, they’re going to have their own set of AI governance regulations. And you’re going to have to learn how to adapt to that,” he said.
Similar regulatory efforts are underway internationally, with Brazil, China, and the United Arab Emirates also developing their own AI regulations. Beyond the technological adjustments, Taliaferro emphasized that compliance extends to administrative and personnel costs. “Let’s say that from an administrative perspective, you don’t have the management in place. Or maybe you don’t have a particular person in charge of information security. Those are additional costs that you would have to incur to comply with those regulations.”
Taliaferro also noted that updates to GDPR and other regulations are already addressing AI-specific risks, such as the potential for “hallucinations” and concerns about the data used to train AI models. While the intent of these policies may be familiar, companies may still balk at the additional expenses associated with exploring and implementing AI tools.
“They don’t quite know what direction they want to move in. They know that they have to. They know that AI is hot. It’s here… but they lack the proper direction on how to proceed,” Taliaferro said.
