Anthropic, a leading U.S. artificial intelligence company, accused three Chinese AI labs – DeepSeek, Moonshot AI, and MiniMax – of orchestrating a large-scale, coordinated campaign to extract proprietary capabilities from its Claude chatbot, an effort it characterized as intellectual property theft. The accusations, made public Monday, mirror similar claims leveled against Chinese firms by OpenAI, the creator of ChatGPT, earlier this month.
According to Anthropic, the three labs employed a technique known as “distillation,” which involves using the outputs of a more advanced AI model – in this case, Claude – to train and improve the performance of less capable systems. The company stated that the Chinese firms generated over 16 million interactions with Claude through approximately 24,000 fraudulent accounts, violating Anthropic’s terms of service and regional access restrictions.
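In technical terms, distillation means fitting a "student" model to the outputs of a stronger "teacher" model rather than to original training data. A minimal, hypothetical toy sketch (the models and data below are illustrative stand-ins, not anything from the actual systems involved):

```python
# Toy illustration of distillation: a simple "student" model is fit to
# imitate a stronger "teacher" model's outputs, recovering the teacher's
# behavior without access to its internals or original training data.
import math

def teacher(x):
    # Stand-in for a capable proprietary model: returns a soft score in (0, 1).
    # Internally its logit is 2.0 * x - 1.0, but the student never sees this.
    return 1 / (1 + math.exp(-(2.0 * x - 1.0)))

# Step 1: query the teacher at scale to build a synthetic training set.
inputs = [i / 10 for i in range(-20, 21)]
soft_labels = [teacher(x) for x in inputs]

# Step 2: fit a student (here, logistic regression) to the teacher's soft outputs.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    for x, y in zip(inputs, soft_labels):
        p = 1 / (1 + math.exp(-(w * x + b)))
        grad = p - y          # gradient of cross-entropy w.r.t. the logit
        w -= lr * grad * x
        b -= lr * grad

# The student's learned (w, b) approach the teacher's hidden (2.0, -1.0).
print(round(w, 2), round(b, 2))
```

The same principle scales up: the "queries" become millions of chatbot exchanges, and the "student" becomes a large language model trained on the responses, which is why Anthropic treats the volume of interactions as the key evidence.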
“These campaigns are growing in intensity and sophistication,” Anthropic said in a statement. “The window to act is narrow.”
While distillation is a common practice within the AI industry, often used to create smaller, more efficient versions of existing models, Anthropic argues that the scale and intent of these operations constitute a deliberate attempt to circumvent the substantial investment and expertise required to independently develop comparable AI capabilities. The accusations come as U.S. officials debate stricter export controls on advanced AI chips, a policy aimed at slowing China’s progress in the field.
DeepSeek, in particular, has emerged as a significant player in the AI landscape, releasing an open-source reasoning model last year that reportedly rivaled the performance of leading American chatbots at a fraction of the cost. The company is expected to release its latest model, DeepSeek V4, which reports suggest may surpass Claude and ChatGPT in coding tasks. Anthropic says it tracked more than 150,000 exchanges from DeepSeek focused on improving foundational logic and alignment, including attempts to elicit censorship-safe alternatives to policy-sensitive queries.
Moonshot AI generated more than 3.4 million exchanges targeting agentic reasoning, tool use, coding, data analysis, and computer vision. The firm recently released a new open-source model, Kimi K2.5, and a coding agent. MiniMax, according to Anthropic, ran the largest operation, generating over 13 million exchanges. Each campaign heavily focused on areas where Claude is considered a leader: coding, agentic reasoning, and tool use.
To bypass Anthropic’s restrictions on commercial access from China, the labs allegedly utilized proxy services to manage the networks of fraudulent accounts. Anthropic warned that models built through illicit distillation are unlikely to retain the safety guardrails designed to prevent misuse, potentially enabling the development of dangerous applications like bioweapons or malicious cyber tools.
OpenAI previously accused DeepSeek of using distillation to mimic its products, raising concerns about the broader implications for intellectual property protection and national security within the rapidly evolving AI sector. Anthropic echoed this sentiment, calling for coordinated action from industry players and policymakers to address a threat it believes no single company can tackle alone.
As of Tuesday, February 24, 2026, none of the accused firms had publicly responded to Anthropic’s allegations.