Tae Kim on Nvidia's culture, the TPU threat, and why Blackwell demand is 'bonkers'
Dec 2, 2025 with Tae Kim
Key Points
- Nvidia's networking segment surged 162% year-over-year, signaling Blackwell GPU demand will be 'bonkers' over the next two to three quarters as data centers typically buy networking six months ahead of GPU deployments.
- Google's TPU threat is overstated; Morgan Stanley estimates TPU shipments actually declined in 2025, while Nvidia's backwards-compatible architecture and CUDA developer lock-in keep enterprises and hyperscalers from switching.
- Nvidia's NVL72 server, a 72-GPU unit priced at $3 to $4 million, marks an inflection point; revenue accelerated to 62% year-over-year growth despite China restrictions as the server reached volume availability.
Summary
Tae Kim, author of The Nvidia Way and a senior writer at Barron's, makes a straightforward case: Nvidia's competitive position is stronger than the current media narrative suggests, and the numbers back it up.
Culture as competitive advantage
Kim describes Nvidia's internal culture as blunt to the point of being unusual. Jensen Huang will publicly dress down executives when things go wrong, which Kim argues creates an agility that large competitors like Intel and Google structurally can't replicate. Where those companies require sign-offs from five executives before moving, Jensen makes a call and the company moves. The meritocracy runs deep — for 30 years, Huang has asked employees who the smartest person they know is, then gone and recruited that person. Dwight Diercks, still one of Nvidia's most senior figures, was recruited exactly that way. The result is a core leadership team where many executives have stayed 25 to 30 years — people who could have retired a decade ago but still work 80 to 100 hours a week.
The TPU threat is overstated
The TPU narrative, Kim argues, has the same shape as every prior Nvidia bear case — the H100 transition, China restrictions, ASIC competition, DeepSeek — all of which played out during the stock's 10x run. Google's TPU has existed for a decade and has been commercially available since 2018. Morgan Stanley estimates TPU shipments actually declined in 2025, and Nvidia GPUs took more share than TPUs at Google Cloud this year.
The Ironwood specs, announced in April, looked impressive on paper. But SemiAnalysis followed up with a less bullish read: TPU v8 is unlikely to be a step-change improvement, partly because Google has lost significant chip-team talent. Meanwhile, Nvidia's Vera Rubin arrives at the end of 2026 and is expected to be dramatically more capable. Kim's read on the SemiAnalysis report is that most of the chip industry missed its actual implication: it was bullish for Nvidia, not bearish.
The customer analysis is blunt. Amazon won't buy TPUs; it has its own Trainium and isn't going to support a direct rival. Microsoft, the second-largest cloud provider, won't either. CoreWeave almost certainly won't. Enterprises and sovereign AI buyers default to Nvidia because millions of developers already know CUDA, and switching to JAX and TPU workflows requires elite software engineers whom most organizations don't have. Nvidia's architecture is backwards and forwards compatible across two to three decades of tooling. A startup CEO Kim spoke to recently tried training on AWS because the sticker price looked cheaper, hit crashes and reliability bugs, and gave up.
The Meta-TPU story — that Meta might spend a few billion dollars on TPUs in 2027 — amounts to less than 1% of Nvidia's expected revenue.
The 30% discount claim
Kim pushes back on the idea, attributed to Dylan Patel, that Sam Altman is effectively getting a 30% discount on Nvidia purchases by using TPU as a negotiating threat. Kim's view is that this conflates two separate deals. AMD gave OpenAI free warrants in exchange for chip purchases — a signed agreement. The Nvidia investment in OpenAI hasn't closed, and language in OpenAI's filings acknowledges it might not. If OpenAI doesn't follow through on AMD chip purchases, it doesn't receive the warrants. When Nvidia does invest, it receives equity in OpenAI — that's not a discount, it's an ownership stake. Kim is skeptical the math works the way the discount framing implies.
China: 50/50
Kim's preferred policy is keeping China on the Nvidia stack one generation behind the current state of the art — enough access to generate roughly $50 billion a year in revenue that funds R&D, not enough to close the capability gap. Nvidia already has 95% market share globally; a full export ban hands $50 billion in oxygen to Huawei and domestic Chinese chip companies. Kim was more optimistic six months ago but now puts the odds of the Chinese market reopening in the next 12 months at roughly 50/50, citing the whipsaw of the Trump administration's H20 ban reversal and China's subsequent refusal to purchase anyway.
The striking data point is that Nvidia's revenue actually accelerated without China. Revenue growth went from 56% to 62% year-over-year on $57 billion in the most recent quarter — the first acceleration in two years — driven by the NVL72 AI server reaching volume availability. The NVL72 is a meaningful step up from the prior generation's 8-GPU servers: 72 GPUs, 144 dies, one and a half tons, 5,000 cables, priced at $3 to $4 million per unit. Kim calls it Nvidia's iPhone 3G moment.
Demand signal
Nvidia's networking segment was up 162% year-over-year. Kim flags this as a leading indicator: data centers and neoclouds typically buy networking infrastructure six months ahead of GPU deployments, which implies Blackwell GPU numbers will be, in his word, "bonkers" over the next two to three quarters.
Corroborating signals are stacking up. In the September quarter, more data center capacity was leased than in all of 2024. Amazon and Microsoft raised capex every single quarter this year. Kim points to real enterprise productivity numbers as the demand driver: Cursor reporting 40% developer productivity gains, C.H. Robinson citing a 40% improvement in shipment processing, Rocket Mortgage achieving an 80% reduction in paperwork processing costs. The reasoning-model cycle is generating exponential compute demand, which is why hyperscaler capex has only gone in one direction.
OpenAI's code red
On OpenAI's internal alarm over model competitiveness, Kim's read is that the company over-rotated toward reasoning models — o1 and o3 — at the expense of pre-training. Reasoning has been the single biggest accelerant of AI compute demand in the past year, so the focus wasn't wrong, but Gemini 2.0 and Claude Opus demonstrated that pre-training still yields large gains. OpenAI is now returning to it. The broader concern Kim raises is strategic distraction: OpenAI is simultaneously trying to build consumer apps, compete in AI infrastructure against Microsoft and Oracle, and develop its own chips to compete with Nvidia — all while its core partners are the very companies it's now competing with. Kim's view is that this is premature, and that OpenAI's leverage comes from building the best model, not from fighting on infrastructure.