The US administration has announced plans to increase cooperation with American artificial intelligence companies to combat large-scale campaigns by foreign entities, primarily based in China, aimed at stealing technological advancements. Michael Kratsios, director of the White House Office of Science and Technology Policy, revealed that foreign entities were exploiting US firms through a process called “distillation,” as reported by the BBC. Kratsios highlighted that the Chinese strategy seeks to systematically undermine American research and development while gaining access to proprietary information.
To address these challenges, the White House intends to provide US AI companies with more information regarding the tactics and actors involved in distillation campaigns. Additionally, there are plans to enhance coordination with companies to counter these attacks and establish a set of best practices for identifying, mitigating, and addressing such incidents. The administration also aims to explore methods to hold these foreign actors accountable for engaging in distillation practices.
In a distillation campaign, the actors typically operate numerous individual accounts on a specific AI chatbot or tool, allowing them to blend in as regular users. These accounts then coordinate their queries to extract information about the underlying AI model that is not meant to be public, which is in turn used to build and train their own models. As techniques for detecting and mitigating industrial-scale distillation become more advanced, foreign entities that rely on such methods to develop AI capabilities may struggle to ensure the integrity and reliability of their models.
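The pipeline described above, querying a model at scale and then training a replica on its outputs, can be sketched in miniature. This is a toy illustration only: the stand-in teacher function and the memorizing "student" below substitute for a real chatbot API and an actual training loop, and all names are hypothetical.

```python
def teacher_model(prompt: str) -> str:
    # Stand-in for a proprietary model's response (hypothetical;
    # a real campaign would call a commercial chatbot API here).
    return prompt.upper()

def collect_distillation_data(prompts, teacher):
    # Step 1: query the teacher at scale, recording prompt/response
    # pairs -- the "not meant to be public" behavioral data.
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs):
    # Step 2: fit a cheap student that imitates the teacher.
    # Here a lookup table stands in for gradient-based training.
    lookup = dict(pairs)
    def student(prompt: str) -> str:
        return lookup.get(prompt, "")
    return student

prompts = ["hello", "tell me a fact"]
pairs = collect_distillation_data(prompts, teacher_model)
student = train_student(pairs)
```

The point of the sketch is the shape of the attack: the valuable asset is the teacher's input/output behavior, so mass querying alone, with no access to weights or code, is enough to transfer much of it.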
A spokesperson for China's embassy in Washington DC dismissed the accusations, asserting that China's technological progress is the result of its dedication, effort, and international collaboration. In a separate incident, US-based artificial intelligence company Anthropic accused three Chinese unicorns (DeepSeek, Minimax, and Moonshot AI) of unlawfully extracting features from its Claude model to enhance their own systems.
