Interview

SemiAnalysis on the AI power crisis: gas turbines, Meta's tent datacenters, and why training is still growing as fast as inference

Jan 9, 2026 with Jeremie Eliahou

Key Points

  • AI labs have flooded the US grid with roughly one terawatt in interconnection requests, but on-site gas turbines are the only near-term solution as nuclear and solar remain years away from meaningful scale.
  • Meta is deploying temporary tent-based data centers targeting six-month build cycles, abandoning its legacy design for speed as liquid-cooled GPUs force a shift to dedicated fluid cooling infrastructure.
  • Training workloads still represent the majority of AI power consumption and are growing at roughly the same rate as inference, contradicting assumptions that demand is shifting toward inference-heavy deployment.

Summary

The AI Power Crisis: Gas, Scale, and the Race for Megawatts

Jeremie Eliahou of SemiAnalysis frames the core problem simply: AI labs are overwhelming the US grid with interconnection requests totaling roughly one terawatt, creating a prisoner's dilemma in which speculative requests from every player further clog a queue that already cannot keep pace with demand. The mechanics are slow by design: connecting a one-gigawatt data center to the grid requires a system-level study to ensure demand and supply stay synchronized, the same failure mode behind Spain's nationwide blackout roughly a year ago.

On-site gas is the only near-term solution. Nuclear, solar, and battery storage are years away from meaningful scale. SemiAnalysis counts at least 12 manufacturers that have secured orders exceeding 400 megawatts for US data center gas power, well beyond the names most investors track. GE Vernova and Siemens Energy dominate the conversation, but the more instructive story is Doosan, the Korean industrial giant, which spent over a decade developing its turbine and is now supplying roughly two gigawatts to xAI, with timing that proved fortuitous. ProEnergy followed a similar seven-year R&D path before receiving approvals in late 2023.

Developing a new turbine takes seven to ten years, which means manufacturing capacity cannot simply be conjured. Bloom Energy fuel cells carry a higher per-unit cost than gas turbines, but the revenue per megawatt from AI customers is high enough that Bloom's factory payback period is actually shorter, reducing the financial risk of capacity expansion.
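The payback logic can be made concrete with a back-of-envelope sketch. The capex and revenue figures below are illustrative assumptions, not reported numbers for Bloom or any turbine maker:

```python
# Hypothetical factory payback comparison. All dollar figures are
# illustrative assumptions, not reported vendor economics.

def payback_years(capex_per_mw: float, annual_revenue_per_mw: float) -> float:
    """Simple payback: years of revenue needed to recover upfront cost per MW."""
    return capex_per_mw / annual_revenue_per_mw

# Assume a gas turbine costs less per MW but earns utility-scale revenue,
# while a fuel cell costs more per MW but AI customers pay a premium.
gas_turbine = payback_years(capex_per_mw=1.0e6, annual_revenue_per_mw=0.25e6)
fuel_cell = payback_years(capex_per_mw=3.0e6, annual_revenue_per_mw=1.5e6)

# Higher per-unit cost, yet the factory pays back sooner.
assert fuel_cell < gas_turbine
```

Under these assumed numbers the fuel cell recovers its cost in two years versus four for the turbine, which is the mechanism that lowers the risk of capacity expansion despite the higher unit price.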

Bitcoin miners are among the unexpected beneficiaries. Because they already hold approved grid interconnections, substations, and on-site transformers, they sidestep the queue entirely. Retrofitting an existing crypto facility for AI workloads is straightforward compared to waiting years for a fresh interconnection approval.

How SemiAnalysis Tracks Capacity Before Earnings

SemiAnalysis builds its six-quarter forward forecasts primarily through systematic satellite imagery analysis across hundreds of data center sites, triangulating construction start dates with each operator's typical build cadence to project when capacity comes online. The firm also tracks third-party leasing commitments from operators including QTS, Digital Realty, and Equinix, and mines permitting filings across counties and states, automating portions of that process with AI agents where portal quality allows.

The methodology produced at least one notable early call: Amazon was widely perceived to be losing AI momentum to Azure and Google, but construction data showed AWS accelerating sharply in Q4 2024, a leading indicator that revenue growth would mechanically follow roughly four quarters later in 2025.

A computer vision model trained on satellite imagery is now being developed to detect construction status in real time, adding a further layer to the forecasting process.
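The triangulation step reduces to a small date calculation: pair a satellite-observed construction start with the operator's typical build cadence and read off the delivery quarter. The cadence figures below are illustrative assumptions loosely keyed to the design generations discussed in the next section, not SemiAnalysis's actual model inputs:

```python
from datetime import date

# Illustrative build cadences in months by facility design generation.
# These durations are assumptions for the sketch, not SemiAnalysis figures.
BUILD_MONTHS = {"legacy_h_design": 27, "rectangular": 13, "tent": 6}

def add_months(d: date, months: int) -> date:
    """Advance a date by a whole number of months (day snapped to the 1st)."""
    y, m = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return date(y, m + 1, 1)

def online_quarter(construction_start: date, design: str) -> str:
    """Project the quarter capacity comes online from a satellite-observed
    construction start date and the operator's typical build cadence."""
    completion = add_months(construction_start, BUILD_MONTHS[design])
    return f"{completion.year}Q{(completion.month - 1) // 3 + 1}"

print(online_quarter(date(2025, 3, 15), "tent"))         # ~6-month build
print(online_quarter(date(2025, 3, 15), "rectangular"))  # ~13-month build
```

A start observed in March 2025 projects to 2025Q3 under a six-month tent build versus 2026Q2 under a thirteen-month rectangular build, which is why construction cadence matters as much as the start date itself.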

Construction Speed Has Become a Strategic Variable

Meta's evolution illustrates the shift. Its original H-shaped data center design, optimized for energy efficiency and a best-in-class PUE, took roughly two to two and a half years to build. A subsequent rectangular design compressed that to twelve to fifteen months. Meta's current approach uses temporary structures, effectively large tents, targeting a six-month deployment window. The trade-off is a return to air-to-air cooling with liquid-to-air sidecar units, a less elegant solution but one that prioritizes speed.

The pivot away from Meta's legacy design was partly forced by liquid-cooled GPU hardware. The original open-air cooling architecture cannot efficiently remove heat from cold plates attached directly to chips. The new tent-based facilities address this with dedicated fluid cooling infrastructure, though at higher cost.

Meta's Louisiana campus is targeting a first phase of 2.1 gigawatts, with individual buildings at 400 megawatts each. xAI's Colossus facility, frequently cited as a benchmark for speed, came online in roughly 122 days.

Google's Distributed Training Advantage

Google stands apart on data center strategy. Rather than concentrating everything on a single massive campus, Google has built the networking infrastructure, including proprietary long-haul fiber, to run training jobs distributed across multiple sites within a metro area or even across a 100-mile radius. Jeff Dean has publicly confirmed that multi-data-center training is operational at scale; no other hyperscaler has replicated it at comparable scale.

The practical advantage is site selection flexibility. Instead of hunting for a single 1.5-gigawatt parcel, Google can aggregate multiple 200-to-500-megawatt sites, connect them, and achieve the same effective compute cluster. On the sustainability side, Google's Intersect Power deal in Texas involves on-site solar and battery assets, though the facilities remain grid-connected rather than fully off-grid.

Training Is Not Giving Way to Inference

SemiAnalysis's data contests the common assumption that AI workloads are shifting from training to inference. Training still represents the majority of AI power consumption and is growing at roughly the same rate as inference. The incentive structure is straightforward: every lab has a financial motive to train the next model that unlocks next year's revenue, and no ceiling on useful compute has been reached.

For inference, OpenAI is the instructive case. Its most power-intensive products, deep research and thinking models, take minutes to respond, which eliminates the need for latency-optimized, metro-adjacent deployment. Large remote campuses are viable, and maximizing GPU utilization across a large pool remains the primary infrastructure objective.

Infrastructure Is Already Moving GDP

NVIDIA's annualized revenue run rate is approaching $250 billion. Year-over-year additions to US AI investment, spanning chips, data centers, and supporting infrastructure, are estimated to exceed $300 billion against a US GDP base of over $40 trillion. Data center spending is already a measurable component of the computer-investment subcomponent of GDP. Construction starts for hyperscaler self-builds have surged dramatically from 2022 and 2023 baselines: 2024 was a large year, 2025 exceptional, and 2026 and 2027 are expected to keep accelerating based on current leasing commitments. OpenAI has tripled revenue; Anthropic has grown approximately 10x.
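As a rough scale check on the figures above (a simple ratio, nothing more):

```python
# Back-of-envelope share of GDP from the incremental AI investment cited above.
ai_investment_delta = 300e9  # year-over-year AI investment additions, USD
us_gdp = 40e12               # approximate US GDP base, USD

share = ai_investment_delta / us_gdp
print(f"{share:.2%}")  # -> 0.75%
```

Even at these headline numbers, incremental AI investment is under one percent of GDP, which is consistent with it being "measurable" in a GDP subcomponent rather than dominant in the aggregate.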