New Delhi: Leading AI firms Anthropic, Google, and OpenAI have reportedly joined forces to counter attempts by Chinese rivals to copy their advanced AI models, a practice known as adversarial model distillation.
The companies are sharing information through the Frontier Model Forum, a nonprofit founded with Microsoft in 2023, to detect and prevent unauthorized replication of their proprietary large language models (LLMs).
Model distillation involves training a smaller “student” model to mimic a larger “teacher” AI, often at lower cost. While legitimate uses exist, the US firms allege that some Chinese companies are extracting outputs from their models to train competitive products, undercutting pricing and threatening billions in revenue.
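The student-mimics-teacher objective described above can be sketched in a few lines. This is a minimal illustration of the standard distillation loss (temperature-scaled soft targets and KL divergence), not a description of any company's actual training pipeline; the function names and temperature value are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing more of the teacher's relative preferences.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's and student's soft outputs:
    # the core objective a "student" model minimizes to mimic a "teacher".
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
# A student matching the teacher exactly incurs zero loss;
# a diverging student incurs a positive loss it would train to reduce.
print(round(distillation_loss(teacher, teacher), 6))    # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0)  # True
```

In practice the "teacher" signal alleged here would be API responses from a proprietary model rather than raw logits, but the training principle is the same.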
OpenAI first raised concerns in 2025 when Chinese firm DeepSeek released its R1 model, allegedly trained in part on outputs from US AI models.
The collaboration highlights growing concerns over national security risks, as improperly distilled models may lack safety safeguards. US officials have warned that adversarial distillation could allow foreign actors to develop AI tools with potentially harmful applications.
This effort mirrors practices in cybersecurity, where companies share intelligence to prevent attacks. While information-sharing among AI firms is currently limited by antitrust uncertainties, coordination through the Frontier Model Forum aims to strengthen detection and response.
Anthropic, Google, and OpenAI have declined to provide detailed evidence but say that model extraction attempts are increasing. The initiative underscores the high stakes in the global AI race, as US companies seek to protect their technology, investments, and market share from international competitors.
