The Distillation War: How Anthropic's Disclosure Revealed the Structural Anatomy of US-China AI Theft
An analysis of why Chinese AI companies could secretly train on Claude, and why that access existed in the first place. We examine the structural vulnerabilities of the open-API model, the link to AI chip export controls, the enforcement vacuum in which these attacks operated, and the strategic significance of Anthropic and OpenAI going public simultaneously.
AI-assisted draft · Editorially reviewed. This blog content may use AI tools for drafting and structuring, and is published after editorial review by the Trensee Editorial Team.
Prologue: The Door Was Open
When Anthropic disclosed that three Chinese AI companies had secretly trained on Claude, the first question many people asked was: "How was this even possible?" The answer cannot be found in the moral judgments of any single company. The structure allowed it.
For years, the AI industry built an open ecosystem in which cutting-edge models were accessible from anywhere in the world via API. That openness drove innovation and revenue — and simultaneously created a structural vulnerability. This article examines the "why was it possible" question. Not as a question of technology, but as a question of structure.
1. Why the Door Was Open: The Open-API Dilemma
When Openness Becomes a Weapon
Frontier AI models like Claude, GPT-4, and Gemini are served globally through APIs. The business logic is simple: the more usage, the more revenue, and the more data and feedback accumulates.
But that openness contains an inherent paradox. Anyone with API access is a potential distillation attacker. This incident was not a sophisticated hack. The attackers simply accessed a public API using fraudulent accounts, disguised as legitimate users, and sent queries at scale. It was like walking through an unlocked door.
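For readers who want the mechanics made concrete, the attack pattern the disclosure describes can be reduced to a short sketch: harvest (prompt, response) pairs from the open API, then use them as supervised fine-tuning data for a smaller "student" model. Everything below is an illustrative stub; `query_teacher` and `fine_tune_student` are hypothetical names invented for this sketch, not anything from the disclosure.

```python
# Conceptual sketch of API-based distillation (illustrative stubs only).
# `query_teacher` stands in for a call to a hosted frontier-model API.

def query_teacher(prompt: str) -> str:
    """Stub for a remote model API call; returns a canned response."""
    return f"teacher answer for: {prompt}"

def collect_training_pairs(prompts):
    """Step 1: harvest (prompt, response) pairs at scale via the open API."""
    return [(p, query_teacher(p)) for p in prompts]

def fine_tune_student(pairs):
    """Step 2 (stubbed): the pairs become supervised fine-tuning data for
    a smaller 'student' model. Here we only report the dataset size."""
    return {"examples": len(pairs)}

prompts = [f"task {i}" for i in range(1000)]
dataset = collect_training_pairs(prompts)
stats = fine_tune_student(dataset)
```

The point of the sketch is how little machinery is needed: nothing here exploits a software flaw, which is why the article calls it walking through an unlocked door.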
The Limits of Geographic Blocking
Anthropic restricts direct Claude access from mainland China. But circumventing that restriction through proxy services and VPNs is not technically difficult. The "Hydra Cluster" infrastructure was precisely the routing layer designed for that circumvention. Geographic blocking was never a complete defense.
2. AI Chip Export Controls and Distillation: The Connected Threads
What Anthropic Explicitly Linked
One of the most significant aspects of the disclosure was Anthropic's direct and explicit connection of distillation attacks to chip export controls. In its official statement, Anthropic argued:
"Distillation attacks of this scale require access to advanced chips. Distillation attacks therefore reinforce the necessity of export controls. Restricting chip access limits not only direct model training but also the scale of illegal distillation."
The implication is clear. Without access to high-performance AI chips, a model trained through large-scale distillation cannot actually be deployed and run at competitive scale.
The Blackwell Chip Irony
On the same day as Anthropic's disclosure, Reuters reported evidence that DeepSeek trained its models on Nvidia Blackwell chips — chips covered by US export restrictions. This has not been officially confirmed, but if accurate, the structure is as follows:
```
[US Export Controls] ──blocked──> [Official Blackwell Sales] ──bypassed?──> DeepSeek
                                                                               │
                                                                               ▼
[Distillation Attack] ──unauthorized──> Claude API ──> Capability absorption
```
In other words, there is potential evidence of regulatory circumvention through both the hardware (chips) and software (model capabilities) channels simultaneously.
3. Asymmetric Cost Structure: Why Distillation Was a Rational Choice
The Cost of Training Frontier Models
Training a model at the level of GPT-4 or Claude 3 from scratch is estimated to require hundreds of millions to billions of dollars. Thousands of high-performance AI chips must run for months. For Chinese companies whose access to the latest chips is restricted by US export controls, this path is an even steeper wall.
The Economics of Distillation Attacks
By contrast, the marginal cost of a distillation attack is dramatically lower.
| Factor | From-Scratch Training | Distillation Attack |
|---|---|---|
| Compute cost | Hundreds of millions to billions of dollars | API query cost only |
| High-end chips required | Thousands of GPUs | Only for running the resulting model |
| Development timeline | Months to 1+ years | Within the campaign window |
| Training data | Must be independently collected | Replaced by Claude's responses |
The capabilities gained through distillation do not fully replicate the original model. But for rapidly improving specific high-value capabilities, such as agentic reasoning and complex coding, the effect is sufficient. Cost asymmetry is what makes distillation a rational choice.
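The asymmetry in the table can be made concrete with rough arithmetic. The 16-million-query volume echoes the figure discussed in Part 1 of this series; the tokens-per-query, per-token price, and from-scratch cost below are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope cost asymmetry (all inputs are illustrative assumptions).
FROM_SCRATCH_COST_USD = 500_000_000   # assumed mid-range frontier training cost

QUERIES = 16_000_000                  # campaign-scale query volume (see Part 1)
TOKENS_PER_QUERY = 2_000              # assumed average prompt + response tokens
PRICE_PER_MTOK_USD = 15.0             # assumed blended API price per million tokens

total_mtok = QUERIES * TOKENS_PER_QUERY / 1_000_000
distillation_cost = total_mtok * PRICE_PER_MTOK_USD
ratio = FROM_SCRATCH_COST_USD / distillation_cost

print(f"API cost: ${distillation_cost:,.0f} (~1/{ratio:,.0f} of from-scratch)")
```

Under these assumptions the API bill lands in the hundreds of thousands of dollars, roughly three orders of magnitude below from-scratch training, which is the asymmetry the table describes.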
4. The Enforcement Vacuum
Can Terms of Service Actually Be Enforced?
Anthropic's API Terms of Service prohibit using Claude's responses to train competing models. But actually enforcing that clause against Chinese companies is an entirely different matter.
Structural barriers to enforcement:
- Jurisdiction: a US court judgment has no direct force in China and cannot be compelled through Chinese courts
- Evidentiary burden: Legally proving that AI outputs were actually used to train a competing model is extremely difficult
- Copyright uncertainty: Whether AI model outputs themselves are protectable under copyright law remains legally unsettled
- Chinese government posture: Beijing has no incentive to constrain its domestic AI companies from gaining competitive advantage
OpenAI's Congressional Letter: Regulation Over Litigation
Apparently recognizing this structural enforcement vacuum, OpenAI submitted a letter to the US House Select Committee on Strategic Competition on February 12, 2026 — weeks before Anthropic's public disclosure. Rather than pursuing legal action, OpenAI chose the legislative and regulatory path. Read in combination with Anthropic's public announcement, these two actions suggest that major AI industry players have pivoted from judicial enforcement toward political pressure.
5. The Strategic Logic of Simultaneous Disclosure
Coincidence or Coordination?
The concentration of Anthropic's and OpenAI's announcements in the same week is difficult to read as coincidence. The timeline:
- 2026-02-12: OpenAI submits letter to US House Select Committee (warning of DeepSeek distillation)
- 2026-02-23: Anthropic publishes disclosure of distillation attacks by DeepSeek, Moonshot AI, and MiniMax
- 2026-02-23: Reuters reports evidence of DeepSeek training on Blackwell chips
- 2026-02-24: Major US media outlets publish concurrent coverage
This timing coincided with the peak of US government debate over strengthening AI chip export controls. It is worth considering whether Anthropic's public disclosure was designed in part to supply concrete grounds for that policy argument — a convergence of corporate self-interest and national security framing.
The National Security Frame
Anthropic's disclosure went beyond describing technical damage, explicitly invoking security risks. The argument: "Models distilled without safety guardrails could be weaponized for cyberattacks, bioweapon development, and mass surveillance." This framing elevates a corporate competition dispute into a geopolitical security agenda — language that carries significantly more weight in Congressional deliberations than commercial grievance alone.
6. Who Is Destabilized by This Structure?
High Risk: All Frontier AI Companies Running Public APIs
The situation: The structure of serving Claude via API globally is itself the attack surface. If this is not an isolated incident but a structural condition, then every operator of a public frontier-model API, OpenAI and Google included, faces analogous exposure.
Defensibility: Detection technology and API security hardening can raise the cost of attack, but complete prevention is difficult. As long as API openness is maintained, the structural vulnerability remains.
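As a hedged illustration of what "raising the cost of attack" might look like, a provider could score per-account traffic for distillation-like patterns, such as sustained high volume combined with low prompt diversity. The features and thresholds below are invented for this sketch and are not Anthropic's actual detection logic.

```python
from collections import Counter

def distillation_risk_score(prompts, volume_threshold=10_000, diversity_floor=0.5):
    """Toy heuristic: high query volume plus low prompt diversity (many
    near-duplicate templates) is treated as distillation-like traffic.
    Returns a score in [0, 2]; features and thresholds are illustrative."""
    volume_flag = 1.0 if len(prompts) >= volume_threshold else 0.0
    # Diversity proxy: fraction of distinct prompt "templates"
    # (crudely, the first two words of each prompt).
    templates = Counter(" ".join(p.split()[:2]) for p in prompts)
    diversity = len(templates) / max(len(prompts), 1)
    diversity_flag = 1.0 if diversity < diversity_floor else 0.0
    return volume_flag + diversity_flag

# Templated bulk harvesting vs. small, varied organic usage.
bulk = [f"explain step {i} of the algorithm" for i in range(20_000)]
organic = [f"question {i} about topic {i % 7}" for i in range(50)]
```

A real system would look at many more signals (session timing, cross-account correlation, embedding-space similarity), but the shape is the same: score, threshold, escalate.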
Moderate Risk: Companies That Build on or Integrate US AI Models
The situation: Businesses built on Claude or GPT-4 APIs face the risk of shifting API access conditions and rising costs as a result of regulatory escalation.
Defensibility: Multi-vendor strategy to distribute dependency. In-house fine-tuning capability mitigates risk.
Lower Risk: Open-Source-Based Companies
The situation: Open-source models like Llama and Mistral have publicly available weights, so they are not distillation attack targets in the same sense. Ironically, tightening regulation on closed models could create relative advantage for the open-source ecosystem.
Defensibility: Direct risk is low, but if open-source models become part of broader AI security regulation discussions, there is potential indirect exposure.
7. Outlook: Six-to-Twelve-Month Scenarios
Scenario 1: Chip Export Control Tightening + Anti-Distillation Legislation (Probability: 70%)
If Anthropic's and OpenAI's lobbying achieves legislative traction, strengthened AI chip export controls may be paired with explicit trade and industrial law provisions prohibiting distillation attacks. In this scenario, API access conditions from US AI companies are likely to become increasingly regionalized.
Scenario 2: Rapid Advancement in API Defense Technology (Probability: 80%)
Now that this has been publicly confirmed at scale, investment in watermarking, response perturbation, and real-time anomaly detection is expected to accelerate sharply. Medium-term, the technology competition is likely to trend toward raising the cost and difficulty of distillation attacks — even if it cannot eliminate them.
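One of the defense directions named above, watermarking, can be sketched as a simple canary scheme: deterministically tag a small fraction of responses with an account-specific rare marker, so that the marker's later appearance in a competitor model's outputs would be evidence of training on harvested responses. The scheme below is a toy illustration under stated assumptions, not any provider's actual method.

```python
import hashlib

def canary_token(account_id: str) -> str:
    """Stable, account-specific rare marker string (illustrative)."""
    return "zq-" + hashlib.sha256(account_id.encode()).hexdigest()[:10]

def maybe_watermark(response: str, account_id: str, rate: float = 0.05) -> str:
    """Deterministically tag roughly `rate` of responses with the account's
    canary. Hashing (account, response) makes the decision reproducible,
    so the provider can later verify which responses carried the marker."""
    h = int(hashlib.sha256((account_id + "|" + response).encode()).hexdigest(), 16)
    if (h % 1_000_000) < rate * 1_000_000:
        return response + " " + canary_token(account_id)
    return response

corpus = [maybe_watermark(f"answer {i}", "acct-42") for i in range(5_000)]
hits = sum(canary_token("acct-42") in r for r in corpus)
```

A deployed version would need markers that survive paraphrasing and fine-tuning noise, which is precisely where the research investment the scenario anticipates would go.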
Scenario 3: Chinese AI Accelerates Toward Independent Ecosystems (Probability: 60%)
As access to US AI models becomes more restricted, Chinese AI companies face growing pressure to develop independent training data generation, synthetic data pipelines, and proprietary models. Short-term, evasion techniques may be refined further. Medium-to-long-term, the strategic direction is likely to shift toward reducing dependence on US frontier models.
8. Decision-Making Guide
If You Provide an AI API Service
| Check Question | If Yes: Priority Action |
|---|---|
| Do you have regional API access restriction policies? | Strengthen proxy/VPN circumvention detection |
| Is identity verification strong enough at account creation? | Tighten authentication for research/educational accounts |
| Is multi-account anomalous traffic detection active? | Introduce Hydra Cluster pattern detection logic |
| Are you participating in industry intelligence-sharing channels? | Join an AI security intelligence sharing network |
If You Lead AI Policy or Strategy
| Check Question | If Yes: Priority Action |
|---|---|
| Do you know the regional restriction policies of your AI API providers? | Audit AI service access conditions across your supply chain |
| Do you have a procurement plan if chip export controls tighten further? | Evaluate multi-vendor and open-source parallel strategies |
| Are you monitoring ToS changes from your current AI providers? | Set up automated alerts for major AI provider policy changes |
9. What Not to Overestimate
Risk 1: "This disclosure will end distillation attacks"
Structural vulnerabilities will remain unless the underlying conditions change. As Anthropic's detection capabilities improve, attackers will also refine evasion techniques. This disclosure is better understood as the opening of a public arms race than as a ceasefire declaration.
Risk 2: "Anthropic's and OpenAI's motives are purely protective"
Multiple outlets have noted the political dimension of this disclosure. Reading announcements from companies whose interests align directly with the policy outcome — chip export controls — as pure victim statements is to miss important context. A balanced reading requires holding both dimensions simultaneously.
Risk 3: "Chip restrictions alone will solve this"
Chip export controls can limit the scale of distillation attacks, but they cannot eliminate the underlying incentive. As long as APIs remain open and cost asymmetry persists, attacks will continue — potentially at reduced scale, but not stopped.
Epilogue: The Dilemma of Openness and Protection
The open API ecosystem the AI industry spent years building enabled both innovation and global reach. That same openness is the structural cause of this incident. Technical solutions alone cannot resolve this dilemma.
Regulation, technical defense, diplomacy, and industry self-governance — all four layers must move simultaneously. When one layer is reinforced, pressure migrates to the others.
Part 3 of this series examines how AI model protection will evolve within this structure, and what options companies and policymakers actually have.
Key Action Summary
| Role | Check Now | Review Within 6 Months |
|---|---|---|
| AI API Provider | Regional access restrictions + account authentication strength | Develop Hydra Cluster detection logic |
| Enterprise AI Strategy Lead | Review current AI provider ToS and regional policies | Evaluate multi-vendor strategy and open-source alternatives |
| AI Policy Lead | Track US AI chip export control legislative developments | Analyze regulatory scenario impacts on your organization |
| Legal / Compliance | Identify ToS violation risk clauses in AI service contracts | Establish AI supply chain due diligence framework |
Frequently Asked Questions
Q1. Would closing the API entirely solve the problem?
Fully closing the API would reduce the distillation attack surface, but it would collapse the business model itself. The realistic direction is simultaneously strengthening access conditions (regional restrictions, identity verification, usage-purpose confirmation) and raising the bar on monitoring technology. Anthropic itself has already chosen "detection and defense hardening" — not "full closure" — as its official direction.
Q2. Are open-source AI models irrelevant to this debate?
They are not direct distillation attack targets in the same sense. But if this debate drives broader AI regulatory tightening, whether open-source models get pulled into that scope remains an open question. For now, the open-source ecosystem is positioned to receive relative benefit from the scrutiny on closed models.
Q3. Does the Chinese government know about these practices?
Nothing has been officially confirmed. But structurally, the Chinese government has no incentive to constrain its domestic AI companies from gaining competitive advantage by any means available. Analysts increasingly view this incident not as a single company's aberration, but as a structural phenomenon arising from the broader framework of US–China technology competition.
Q4. Doesn't simultaneous disclosure by two companies constitute collusion?
For information-sharing or coordinated lobbying to meet the legal bar for anticompetitive collusion, there must be conduct like price-fixing or market allocation. Public information disclosure and policy lobbying are standard corporate activities. Whether intentional coordination occurred is unconfirmed, but even if it did, that would not make it anticompetitive collusion under standard antitrust frameworks.
Related Reading
- Part 1: 16 Million Queries — How China's AI Labs Used Claude as a Textbook
- Open vs. Closed AI Stacks: A Deep Dive
- Enterprise AI Governance: Pre-Adoption Checklist
- DeepSeek V4 Release: What the Signals Mean
Series Guide
- Part 1 (2026-02-25): Methods & Tech — How it was done
- Part 2 (this article): Structure & Competition — Why it was possible (the gray zone of US–China AI rivalry)
- Part 3 (scheduled 2026-02-27): Regulation & Future — How AI model protection will change
Update Notes
- Content reference date: 2026-02-26 (KST)
- Update cadence: as significant legislative or regulatory developments occur
- Next scheduled review: 2026-03-15
References
- Anthropic official disclosure: https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks
- TechCrunch: https://techcrunch.com/2026/02/23/anthropic-accuses-chinese-ai-labs-of-mining-claude-as-us-debates-ai-chip-exports/
- CNBC: https://www.cnbc.com/2026/02/24/anthropic-openai-china-firms-distillation-deepseek.html
- Rest of World: https://restofworld.org/2026/openai-deepseek-distillation-dispute-us-china/
- Vision Times (OpenAI letter): https://www.visiontimes.com/2026/02/13/openai-warns-congress-that-deepseek-is-illegally-distilling-us-ai-models.html
Data Basis
- Scope: Anthropic official disclosure (2026-02-23), OpenAI Congressional letter (2026-02-12), cross-verified against 7+ major outlets including TechCrunch, CNBC, Fortune, Rest of World, and Reuters
- Evaluation axes: four dimensions of analysis: structural vulnerabilities (API model), cost asymmetry, enforceability, and geopolitical context
- Verification standard: only claims consistent across multiple sources stated as fact; analytical interpretations explicitly labeled as such
Key Claims and Sources
- Claim: Anthropic explicitly linked distillation attacks to the necessity of AI chip export controls in its official disclosure (Source: Anthropic official disclosure)
- Claim: Reuters reported evidence that DeepSeek trained models on Nvidia Blackwell chips despite export restrictions (Source: TechCrunch, citing Reuters)
- Claim: OpenAI submitted a letter to the US House Select Committee on Strategic Competition on February 12, 2026, warning of DeepSeek's distillation practices (Source: Vision Times / OpenAI letter)