
US-based AI developer Anthropic has reported extensive misuse of its Claude model by firms operating out of China. According to The Wall Street Journal, three Chinese companies created more than 24,000 fake user accounts and sent the system over 16 million queries to extract material for training their own neural networks.
Anthropic alleges that DeepSeek queried Claude approximately 150,000 times, Moonshot over 3.4 million times, and MiniMax in excess of 13 million times. The company believes these interactions were used for "distillation"—a technique in which the outputs of a stronger model are used to rapidly improve a weaker, proprietary one.
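For readers unfamiliar with the mechanics, a minimal sketch of how distillation works in principle follows. This is an illustrative toy example, not a description of any company's actual pipeline: the "teacher" and "student" here are placeholder logit vectors, and the student would be trained to minimize the divergence between its softened output distribution and the teacher's.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student output distributions."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits for a single prediction over three classes.
teacher = np.array([3.0, 1.0, 0.2])
student = np.array([2.5, 1.2, 0.3])
loss = distillation_loss(teacher, student)
```

In practice, a training loop would repeatedly minimize this loss over many teacher outputs—which is why large-scale querying of a stronger model, as alleged here, can substitute for expensive original training data.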
Anthropic stresses that such activity goes beyond ordinary commercial rivalry and touches on matters of US national security. The firm argues that replicating American models could put unsecured AI capabilities into foreign military, intelligence, and surveillance systems.
OpenAI previously accused DeepSeek of using a comparable method to replicate the performance of its own models. Experts note, however, that distillation itself has legitimate applications.
The dispute comes amid rapid progress by Chinese AI companies. Moonshot and MiniMax recently unveiled updated models with stronger logical reasoning and coding abilities, while DeepSeek is preparing to launch the next iteration of its flagship system.