Commentary

Chip export controls are failing — Sebastian Mallaby argues the US should negotiate an AI non-proliferation treaty with China

Apr 13, 2026

Key Points

  • US chip export controls on China are failing because Chinese developers rent AI compute capacity in Southeast Asia and use distillation to reverse-engineer American models, making containment impossible.
  • China and the US are now roughly level in AI capability, with Chinese companies leading in industrial applications like autonomous vehicles and infrastructure monitoring.
  • Sebastian Mallaby argues the US should negotiate a bilateral AI non-proliferation treaty with China modeled on the 1968 nuclear accord, noting Chinese labs care about AI safety despite skepticism about geopolitical feasibility.

Summary

Chip Controls Are Failing. The US Should Negotiate an AI Non-Proliferation Treaty Instead

The US strategy of blocking China's access to advanced AI chips is collapsing, and the Biden administration missed a more promising path: negotiating a bilateral AI safety accord modeled on the 1968 nuclear non-proliferation treaty. That's Sebastian Mallaby's argument in a New York Times op-ed based on recent reporting trips to Chinese tech hubs.

How China Is Circumventing Controls

China's tech sector is too sophisticated to be stopped by semiconductor export restrictions. The controls have failed because Chinese developers don't need chips on Chinese soil: they simply rent capacity in AI data centers in Southeast Asian countries such as Singapore and Malaysia, and concealing a model's Chinese origin is straightforward. Huawei is also manufacturing domestic chips like the Ascend series, which are less powerful individually but can be stacked to marshal comparable computing power. China can absorb the energy costs of that approach: nuclear plants, solar capacity, and massive infrastructure investment back its data centers.

Additionally, Chinese rivals leverage distillation—reverse-engineering cutting-edge US models and building copycat versions quickly. Every time an American lab releases a model, Chinese competitors have a fast-follow playbook ready.

The Singularity Promise Never Materialized

American AI scientists once argued that a follower nation being months behind didn't matter because recursive self-improvement would soon kick in: better AI creating better AI, driving performance skyward to a winner-take-all singularity. Three and a half years into the Biden chip controls, that feedback loop hasn't happened. The accelerating power of leading models won't determine the race. Deployment will.

China and the US are now roughly level in the AI contest. Top Chinese models lag by only a few months; military applications are hard to assess; and in industrial applications, China appears to be leading. Huawei and Hikvision are rolling out AI systems that manage mining operations, check high-speed train maintenance, and scan water samples for pollution. Mallaby rode in an autonomous car at Huawei's Shenzhen campus and found the steering "immaculate."

The Case for Negotiation

Mallaby argues the cost of failed controls is too high. Instead of doubling down on containment, the US should propose an AI equivalent of the nuclear non-proliferation treaty—a global pact requiring all countries to adopt universal safety safeguards on AI technology. The pitch would be: "You are a tech superpower. We are a tech superpower. Let's work together to make sure AI doesn't fall into the hands of rogue states and terrorists."

His reporting in Beijing, Shanghai, Shenzhen, and Guangzhou revealed something unexpected: China's AI elite does care about AI safety. Conversations with over a dozen lab leaders and executives suggested the Chinese government and Communist Party are less accelerationist than their US counterparts on the issue. (The dynamic is worth scrutinizing—countries express caution when they're behind, and acceleration when they're ahead.)

The counterargument Mallaby addresses is whether China would actually collaborate. He points to precedent: the nuclear non-proliferation treaty came just six years after the Cuban Missile Crisis. The US has switched from confrontation to détente before.

An Open-Source Liability

Mallaby visited a prominent Chinese tech company building an open-source foundation model. The CEO made a striking admission: as AI becomes more powerful, continuing to open-source it would be reckless. "You wouldn't open-source a nuclear weapon," the executive said. The OpenClaw model illustrated the concern—ordinary Chinese citizens downloaded it eagerly, but researchers and industry leaders were alarmed. A business school professor told Mallaby it "makes your computer naked." China's government soon discouraged OpenClaw use in government systems and warned citizens of data risks.

The Skepticism

Pushback in the segment centers on whether Chinese labs are actually as close as Mallaby suggests. One analyst argues that Chinese researchers are further behind, and that they appear close only because of distillation. By this reading, the chip controls are working: they are slowing China down, not failing. Enforcement could also be stricter; Super Micro's recent stock surge reflects concerns about smuggled chips, and tighter policing could strengthen the controls' effect.

The bigger skepticism is geopolitical: effective multinational collaboration on AI seems implausible given current tensions, particularly in the Middle East. The "good ending" scenario—where all parties agree to slow down—requires something to crack that hasn't cracked yet.


TBPN Digest delivers summaries of the latest fundraises, interviews and tech news from TBPN, every weekday.