Interview

NVIDIA beats earnings estimates as Doug O'Laughlin flags GPU shortage and power bottleneck as the next big constraint

Feb 25, 2026 with Doug O'Laughlin

Key Points

  • Nvidia beat Q4 earnings with $67.4B in revenue and guided to ~$75B next quarter, but TSMC's full allocation across GPU nodes means supply, not demand, now constrains AI infrastructure growth.
  • A CPU shortage — driven by reinforcement learning workloads, new applications, and a five-year replacement cycle all hitting simultaneously — is disrupting major cloud platforms and making Intel the swing supplier as TSMC maxes out.
  • If AI deployment reaches expected scale, productivity gains could trigger deflation faster than policy can respond, while CEO reluctance to announce AI-driven layoffs masks real labor market pressure building beneath the surface.

Summary

Nvidia beat fourth-quarter earnings with $67.4B in revenue against estimates of $66.2B and guided to roughly $75B for next quarter. Growth will not be constrained by demand but by the physical infrastructure needed to deploy AI at scale.

Doug O'Laughlin identifies two interrelated bottlenecks. TSMC is completely allocated across Nvidia's product lines. All CoWoS capacity, all N3, and all N2 nodes are accounted for, leaving Nvidia dependent on TSMC's ability to expand, which takes time.

The second, and potentially more acute, problem is a CPU shortage. Three demand drivers are colliding at once. Reinforcement learning workloads require massive CPU capacity to simulate environments at scale for agentic systems. New applications coming online require infrastructure to run them. And most enterprise CPU purchases happened in 2020 and 2021 on a five-year depreciation cycle; data centers have deliberately deferred new CPU purchases to redirect spend toward GPUs, and that replacement cycle is now hitting the market at exactly the wrong time.

The result shows up in outages. GitHub's uptime has dropped below 90%, YouTube went offline recently, and public cloud platforms are experiencing unusual instability. O'Laughlin attributes this partly to rushed deployments but notes the CPU squeeze is real enough to disrupt production systems.

This reshapes the foundry picture. TSMC cannot absorb CPU demand on top of GPU demand, leaving Intel as the swing supplier. O'Laughlin argues Intel emerges as the real winner: with TSMC locked up, CPU orders that cannot be filled there migrate to Intel. Nvidia will secure its own GPU capacity from TSMC, but the broader ecosystem—Amazon Graviton, Google's Axion project, custom ARM processors—competes for Intel and Samsung foundry space.

O'Laughlin raises a deeper concern about deflation. If AI deployment reaches the scale the market expects, productivity gains will arrive so fast and so cheaply that they transmit deflationary pressure across the economy before policy can respond. Software engineer job postings have actually risen year over year even as AI coding tools mature, suggesting the near-term effect is productivity acceleration rather than unemployment. New entrants and career changers face a much tougher environment, however, as existing engineers use AI to generate more output rather than adding headcount.

O'Laughlin notes a political dimension. No CEO wants to publicly attribute layoffs to AI, both for reputational reasons and because it invites backlash. Layoffs also lag decisions by weeks or months, so the gap between AI capability and visible labor market impact obscures the real trajectory.