News

Deepseek's next model reportedly trained on banned Nvidia Blackwell GPUs despite US export controls

Feb 24, 2026

Key Points

  • A senior US official told Reuters that Deepseek's next model was trained using Nvidia Blackwell GPUs, which are subject to US export bans on advanced chips to China.
  • Deepseek's previous major model release underperformed, and analysts argue Chinese labs have closed the gap by copying US model outputs rather than achieving independent breakthroughs.
  • Scaling Deepseek's models to competitive economic impact requires massive inference infrastructure and distribution networks, which the company may lack the operational depth to build and run.

Summary

A senior US official told Reuters that Deepseek's next model, set for imminent release, was trained using Nvidia Blackwell GPUs despite the US export ban on advanced chips to China. How the chips arrived remains unclear: smuggling, shipment diversion, cloud-provider fronting, and other workarounds are all possibilities.

Deepseek's competitive threat is widely questioned. The company's last major model release underperformed expectations, possibly due to a failed pretraining run. More fundamentally, Chinese labs have only kept pace with US labs by training on US model outputs rather than achieving independent breakthroughs. Moving from ten years behind to three months behind reflects copying speed, not innovation speed.

Infrastructure constraints matter more than raw model capability. Even if Chinese labs can access commoditized training data, scaling Deepseek V5 to economic impact requires a massive inference cluster and a distribution network. Whether Deepseek has the operational depth to execute at that scale is unclear, especially since Anthropic and OpenAI can tighten API security to slow knowledge transfer.

Deepseek's smaller models remain impressively capable. At the frontier, however, where inference clusters and distribution determine sustained competitive advantage, doubt persists about whether Deepseek can keep pace.