News

OpenAI, Anthropic, and Google join forces through Frontier Model Forum to fight Chinese model distillation

Apr 7, 2026

Key Points

  • OpenAI, Anthropic, and Google are sharing intelligence through the Frontier Model Forum to detect when Chinese competitors distill their AI models, in violation of their terms of service, by querying them at scale.
  • Distillation remains nearly undetectable in model weights, making coordinated API spending thresholds the most realistic defense against large-scale model copying.
  • Distillers can evade individual labs' spending alerts by splitting queries across all three companies simultaneously, undermining the collaboration's ability to catch violations.

Summary

OpenAI, Anthropic, and Google are collaborating through the Frontier Model Forum, an industry nonprofit they founded with Microsoft in 2023, to detect and combat what they call "adversarial distillation": competitors creating knockoff versions of their AI models by querying them at scale and harvesting the responses as training data. The three companies are sharing information to identify terms-of-service violations, particularly by Chinese competitors seeking to close the capability gap without bearing the enormous training costs.

The collaboration underscores a genuine tension in the AI market. Frontier models are extraordinarily expensive to train, with industry training and infrastructure spending running into the hundreds of billions of dollars according to recent reporting, and companies hope to amortize those costs over years. But a model's shelf life shrinks dramatically if it is being commoditized and copied. Distillation accelerates that timeline even further, which is why the companies see it as an existential threat to their unit economics.
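To make the amortization math concrete, here is a back-of-the-envelope sketch in Python; the $1 billion training cost and both shelf-life windows are illustrative assumptions, not reported figures:

    # Back-of-the-envelope unit economics; all figures are hypothetical.
    training_cost_usd = 1_000_000_000  # assumed $1B frontier training run

    # Monthly revenue needed to recover the run over its competitive life:
    for months in (24, 6):
        monthly_millions = training_cost_usd / months / 1e6
        print(f"{months}-month shelf life: ~${monthly_millions:,.1f}M/month to break even")

Cutting the shelf life from 24 months to 6 quadruples the monthly bar, which is the sense in which distillation attacks the labs' unit economics.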

Yet the hard part is proving distillation happened at all. There's no forensic signature in model weights. The only smoking gun is when a distilled model accidentally preserves training data or identifying language—like saying "I'm Claude"—but a disciplined actor can scrub those references or blend tokens from multiple sources to obscure the origin. This is where the Frontier Model Forum's shared intelligence becomes valuable: coordinated detection across three labs makes it harder for distillers to hide.
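Neither the article nor the labs disclose an actual detection pipeline, but the one signal named above, self-identifying language surviving in a distilled model, lends itself to a simple scan. A minimal sketch follows; the phrase list and function are illustrative only:

    import re

    # The one forensic signal the article names: a distilled model that
    # accidentally preserves its source's self-identifying language.
    # Phrases and matching are illustrative; real monitoring would be broader.
    IDENTIFYING_PHRASES = [
        r"\bI(?:'m| am) Claude\b",
        r"\bI(?:'m| am) ChatGPT\b",
        r"\btrained by (?:OpenAI|Anthropic|Google)\b",
    ]

    def flag_identifying_language(text: str) -> list[str]:
        """Return self-identifying phrases found in a suspect model's output."""
        hits = []
        for pattern in IDENTIFYING_PHRASES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                hits.append(match.group(0))
        return hits

    print(flag_identifying_language("As a large language model, I'm Claude."))  # ["I'm Claude"]

As the article notes, a disciplined distiller scrubs exactly these strings, which is why this signal alone is weak and shared intelligence across labs matters.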

The detection problem remains unsolved. The transcript suggests the most realistic mitigation is stricter access control on the API itself. Rather than blocking countries or companies outright, labs could implement tiered know-your-customer checks tied to token spend. A $200 consumer plan delivers too little volume to distill a model. At $5,000 per month, a KYC review triggers. At $10 million in monthly spend, vetting tightens further. The spend required to distill a frontier model, likely $10 million to $100 million depending on the model, becomes the regulatory trigger.
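A minimal sketch of that tiering, using the article's dollar figures; the function and tier names are hypothetical, not any lab's actual policy engine:

    # Tiered KYC policy sketch; tier boundaries come from the figures above,
    # everything else is hypothetical.
    def kyc_tier(monthly_spend_usd: float) -> str:
        if monthly_spend_usd < 5_000:
            return "none"             # consumer-scale: too little volume to distill
        if monthly_spend_usd < 10_000_000:
            return "standard_review"  # KYC review triggers at $5,000/month
        return "enhanced_review"      # $10M+/month: distillation-scale spend

    for spend in (200, 5_000, 250_000, 10_000_000, 60_000_000):
        print(f"${spend:>12,}: {kyc_tier(spend)}")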

The catch: if distillation happens across all three labs simultaneously, and across smaller organizations that are harder to vet, any single lab's spending threshold gets diluted by a factor of three or more. A would-be distiller splits queries across OpenAI, Anthropic, and Google to stay below each lab's alert threshold individually while accumulating enough tokens collectively to do the job.
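The arithmetic is easy to sketch. Assuming a hypothetical $10 million per-lab monthly alert threshold, a distiller spending $8 million at each of the three labs trips no individual alarm, while a pooled view across labs would:

    # Why per-lab thresholds dilute; figures and the pooling mechanism are hypothetical.
    PER_LAB_ALERT_USD = 10_000_000  # assumed single-lab monthly alert threshold

    spend_by_lab = {"OpenAI": 8_000_000, "Anthropic": 8_000_000, "Google": 8_000_000}

    # Each lab in isolation sees nothing:
    for lab, spend in spend_by_lab.items():
        print(f"{lab}: {'alert' if spend >= PER_LAB_ALERT_USD else 'no alert'}")

    # Pooled through a shared clearinghouse, the same customer crosses the line:
    total = sum(spend_by_lab.values())
    print(f"pooled (${total:,}): {'alert' if total >= PER_LAB_ALERT_USD else 'no alert'}")

The catch with pooling is that it only works if the labs can link the same customer identity across providers, which is precisely what smaller, harder-to-vet organizations frustrate.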

This collaboration also sits in tension with the broader geopolitical argument around AI safety. If the U.S. were to pause or slow model releases, as some have proposed, the pause would accomplish little so long as China can simply distill existing frontier models and trail by roughly three months. The delay buys almost nothing.