SemiAnalysis's Doug O'Laughlin: Amazon's execution edge is why OpenAI chose AWS, space data centers are hype
Dec 17, 2025 with Doug O'Laughlin
Key Points
- OpenAI chose AWS over competitors because Amazon can deliver 5 to 6 gigawatts of data center capacity in 2026 while Oracle, CoreWeave, and Flexential face construction and power delays.
- Amazon will not share commerce revenue with OpenAI despite its $60 billion-plus advertising business; unlike smaller platforms such as Etsy, it has a direct-search moat to protect.
- Space data centers are a fundraising narrative rather than a viable business, since launching hardware to orbit costs roughly 10 times more than ground deployment while facing the same operational complexity.
Summary
Amazon's execution advantage in data center delivery is the core reason OpenAI chose AWS as a strategic partner, according to Doug O'Laughlin of SemiAnalysis. While competitors including Oracle, CoreWeave, and Flexential have faced construction and power delays, Amazon is on track to bring approximately 5 to 6 gigawatts of capacity online in 2026, a figure O'Laughlin describes as a decisive competitive gap. OpenAI's chronic need to secure more compute, combined with Amazon's unmatched infrastructure execution, made the pairing logical on first principles.
The Amazon-OpenAI Deal
The deal involves a $10 billion investment from Amazon into OpenAI alongside approximately $38 billion in AWS compute commitments over multiple years. O'Laughlin had flagged the likelihood of an OpenAI announcement at AWS re:Invent in premium SemiAnalysis research, citing Amazon's power availability as the primary draw. OpenAI's appetite for fresh capital from any available source, including Disney, SoftBank, and now Amazon, is a secondary but reinforcing factor.
AWS's infrastructure edge over Azure is not a recent development. O'Laughlin characterizes the historical gap between the two as junior varsity versus a different league entirely, built on AWS being a fully vertically integrated, battle-tested operation with a multi-decade head start. Microsoft pulled back on data center investment during its 2024 pause while Amazon pushed forward, and those long-term commitments are expected to bear fruit meaningfully in 2026 and 2027.
Trainium and Multi-Cloud Strategy
Trainium 3 is expected to represent a significant step up from Trainium 2, partly because Amazon has incorporated direct input from the major labs during development. A notable talent signal: a significant portion of the engineers who built Google's TPUs have moved to Amazon's Trainium team.
How OpenAI will actually deploy Trainium remains unclear, but Anthropic offers a working blueprint, running workloads across GPUs, TPUs, and Trainium simultaneously. O'Laughlin sees two viable approaches: concentrate entirely on one chip for maximum optimization, or diversify across hardware to improve negotiating leverage and drive down compute costs. The multi-hardware path is operationally complex but strategically valuable, particularly as different model architectures show better total cost of ownership on different hardware depending on whether they are memory-bound or FLOPS-bound.
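The memory-bound versus FLOPS-bound distinction can be made concrete with a roofline-style sketch. All chip specs, prices, and workload intensities below are invented placeholders, not real GPU, TPU, or Trainium figures; the point is only that the cheaper chip flips depending on a workload's arithmetic intensity.

```python
# Roofline-style sketch: which accelerator delivers cheaper throughput
# depends on whether the workload is memory-bound or FLOPS-bound.
# All specs and prices are hypothetical placeholders.

def attainable_tflops(peak_tflops, mem_bw_tbs, arithmetic_intensity):
    """Roofline model: achieved performance is capped either by peak
    compute or by memory bandwidth times arithmetic intensity
    (FLOPs per byte moved)."""
    return min(peak_tflops, mem_bw_tbs * arithmetic_intensity)

# Two made-up chips: one compute-heavy, one bandwidth-heavy.
chips = {
    "flops_heavy":     {"peak_tflops": 2000, "mem_bw_tbs": 4.0, "usd_per_hr": 6.0},
    "bandwidth_heavy": {"peak_tflops": 1000, "mem_bw_tbs": 8.0, "usd_per_hr": 5.0},
}

# Decode-style inference tends to be memory-bound (low intensity);
# prefill/training tends to be FLOPS-bound (high intensity).
for workload, intensity in [("memory-bound", 50), ("FLOPS-bound", 1000)]:
    print(workload)
    for name, spec in chips.items():
        perf = attainable_tflops(spec["peak_tflops"], spec["mem_bw_tbs"], intensity)
        exaflops_per_hr = perf * 3600 / 1e6  # 1 EFLOP = 1e6 TFLOP-seconds
        cost_per_eflop = spec["usd_per_hr"] / exaflops_per_hr
        print(f"  {name}: {perf:.0f} TFLOPS attainable, ${cost_per_eflop:.2f}/EFLOP")
```

Under these placeholder numbers the bandwidth-heavy chip wins the memory-bound workload on cost per delivered FLOP while the compute-heavy chip wins the FLOPS-bound one, which is the TCO asymmetry the multi-hardware strategy exploits.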
The recent AWS-GCP direct interconnect reflects a broader shift. Historically, cloud providers weaponized egress fees to lock data inside their ecosystems. As hyperscaler customers have grown large enough to demand flexibility, providers are now opening the door rather than charging a toll at it.
Amazon Will Not Share Commerce Revenue with OpenAI
O'Laughlin is unequivocal that Amazon will not extend a commerce revenue-sharing arrangement to OpenAI the way smaller platforms like Etsy have engaged with AI search. Amazon's $60 billion-plus advertising business is built on users landing directly on Amazon to search for products, and that moat will not be voluntarily handed to a partner. Weaker platforms have more to gain from such arrangements; Amazon does not.
Space Data Centers Are a Narrative, Not a Business
O'Laughlin is a self-described hater of the space data center thesis. The argument is simple: data center construction is already plagued by delays and power failures on Earth, and launching hardware into orbit costs roughly 10 times the per-unit price of deploying it on the ground. A 1-ton GB200 cluster that is expensive and complex to operate terrestrially does not become more viable in low Earth orbit.
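The 10x claim reduces to back-of-envelope arithmetic. The dollar figures below are invented placeholders, not sourced estimates; only the 10x multiplier comes from O'Laughlin's framing.

```python
# Back-of-envelope version of the space data center cost argument.
# All dollar amounts are hypothetical illustrations.

def orbital_cost(ground_cost_usd, launch_multiplier=10):
    """Cost to field the same hardware in orbit under the rough
    '10x ground deployment' assumption."""
    return ground_cost_usd * launch_multiplier

# A hypothetical ~1-ton GB200-class rack: assume $3.0M of hardware
# plus $0.5M of terrestrial installation, power, and cooling.
ground = 3.0e6 + 0.5e6
orbit = orbital_cost(ground)
print(f"ground: ${ground / 1e6:.1f}M, orbit: ${orbit / 1e6:.1f}M")
```

Same hardware, same operational complexity, roughly an order of magnitude more to deploy, which is why the thesis reads as narrative rather than business.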
The timing of the space data center narrative is not coincidental. O'Laughlin connects its emergence directly to SpaceX's reported $800 billion valuation round, with discussion of a subsequent target valuation of $1.5 trillion. Investors excited about the round have an incentive to promote a use case where SpaceX is the only credible launch provider, giving it a monopoly on any future space-based compute TAM. If space data centers ever become real, O'Laughlin concedes SpaceX would own that market entirely, but he sees the current narrative as primarily fundraising infrastructure.
Meta's AI Distribution Advantage
Meta's 3 billion daily active users give it a distribution on-ramp that no other AI company can replicate, particularly across Southeast Asia and other international markets where WhatsApp functions as a primary commerce and communication layer. Meta recently removed ChatGPT from the WhatsApp ecosystem, a move O'Laughlin flags as a meaningful signal of how aggressively the company intends to consolidate AI engagement within its own products.
The bottleneck for Meta is not reach but retention. O'Laughlin argues Meta can mechanically expose tens of millions of users to its AI products quickly, leveraging its proven ability to engineer highly engaging feed content. The harder execution challenge is converting those trial users into daily active AI users who embed the product into their routines, which is what the personal superintelligence positioning ultimately requires.