Anthropic Says Chinese AI Labs Used 24,000 Fake Accounts to Copy Claude Capabilities
Anthropic alleges that three China‑based AI labs — DeepSeek, Moonshot AI and MiniMax — created roughly 24,000 fraudulent user accounts and ran more than 16 million interactions against its Claude chatbot in what it calls coordinated "distillation" attacks to siphon advanced model capabilities.

In a report and interview with Fox News Digital, Anthropic's head of threat intelligence, Jacob Klein, says the traffic, traced via IP correlations and other metadata, focused on Claude's highest‑end reasoning, coding and tool‑use features rather than casual chat, and amounted to "meaningful" and "substantial" capability theft.

Distillation is a standard technique for training smaller models on a stronger model's outputs, but Anthropic says these campaigns were unauthorized and likely strip away safety guardrails, making it easier to embed U.S.‑derived AI behavior into foreign military, intelligence, cyber and surveillance systems.

The company, whose models reportedly supported the U.S. operation to capture Nicolás Maduro, says it blocked the specific operations and shared its findings with U.S. officials, but warns there is "no silver bullet" to stop similar attacks and argues that current export controls, which focus on chips and model weights, ignore the reinforcement‑learning know‑how now being targeted.

The disclosure feeds a broader security and policy debate in Washington over how to protect frontier AI advantages from systematic scraping by Chinese institutions even when they cannot obtain U.S. hardware or source code directly.
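To make the mechanism concrete, the "distillation" described here boils down to harvesting a stronger model's responses at scale and reusing them as supervised training data for a smaller model. The sketch below is a minimal, hypothetical illustration of that data-collection step — `teacher_answer` is a stand-in for calls to a frontier chatbot, not any real API, and the prompts and helper names are invented for this example.

```python
# Hypothetical sketch of output-based distillation data collection.
# A "teacher" (stand-in for a frontier chatbot) answers prompts; the
# prompt/response pairs become fine-tuning data for a smaller "student" model.

def teacher_answer(prompt: str) -> str:
    # Stand-in for a strong model's response. In the attacks described,
    # this would be millions of automated queries from fake accounts.
    canned = {
        "2+2": "4",
        "capital of France": "Paris",
    }
    return canned.get(prompt, "unknown")

def harvest_pairs(prompts: list[str]) -> list[dict]:
    # Each pair is one supervised training example for the student model.
    return [{"prompt": p, "completion": teacher_answer(p)} for p in prompts]

dataset = harvest_pairs(["2+2", "capital of France"])
print(len(dataset))  # prints 2
```

In practice the harvested completions would target the teacher's hardest capabilities (reasoning traces, code, tool-use transcripts), which is why Anthropic says the observed traffic avoided casual chat.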
AI Security and Export Controls
China–U.S. Technology Competition