United States-based artificial intelligence company Anthropic has accused three Chinese unicorns, DeepSeek, Minimax, and Moonshot AI, of unlawfully extracting capabilities from its Claude model to enhance their own systems, according to a report by CNN Business. The US firm framed the alleged theft, carried out through a technique known as distillation, as a national security risk. The scheme reportedly involved roughly 24,000 fake accounts used to train Chinese models on more than 16 million exchanges with Claude.
The company cautioned that models developed this way may lack the safety measures that firms like Anthropic build in, leaving them vulnerable to exploitation in cyberattacks and to potential misuse in developing biological weapons. It also warned that such models could give authoritarian governments advanced AI for offensive cyber operations, disinformation campaigns, and mass surveillance, and stressed the urgency of a response. CNN has sought comment from DeepSeek, Minimax, and Moonshot AI on the accusations.
The rapid ascent of DeepSeek and its peers in China, where leading AI startups have earned the moniker "AI tigers," has raised concerns about the effectiveness of US export controls. All three unicorns currently rank among the top 15 models on the prominent Artificial Analysis leaderboard, the report noted. Anthropic said the distillation attempts underscore the importance of export controls, arguing that cutting-edge model development depends on access to advanced chips.
OpenAI has previously made similar allegations, accusing DeepSeek of leveraging capabilities developed by OpenAI and other US frontier labs without authorization. Separately, Anthropic PBC was recently designated a "Supply Chain Risk" (SCR) by the US government, with the company's CEO apologizing for criticizing President Donald Trump. The firm clarified that the designation applies only to the use of Anthropic's Claude models within Department of War contracts, not to all customers holding such contracts.
