Meta in talks to build $200B AI data center campus — 20x larger than its Louisiana project
Feb 26, 2025
Key Points
- Meta is in talks to build a $200 billion AI data center campus requiring 5-7 gigawatts of power, dwarfing its $10 billion Louisiana facility and positioning Zuckerberg to acquire distressed AI startups with guaranteed compute access.
- Meta's 2025 capital expenditure guidance of $60-65 billion, up 70 percent year-over-year, reflects Zuckerberg's willingness to build excess infrastructure before demand materializes, a strategy that paid off when the LLM boom arrived.
- Adam Mosseri told staff Meta is waiting on prior Nvidia orders while uncertain whether it needs drastically more or less capacity post-DeepSeek, signaling genuine confusion about AI compute requirements going forward.
Summary
Meta is in talks to build a $200 billion AI data center campus—20 times larger than its planned Louisiana facility—as Mark Zuckerberg prepares the company for a multiyear surge in demand for generative AI. The proposed campus, first reported by The Information, would dwarf Meta's existing data center plans and ranks among the largest infrastructure projects of its kind.
The Louisiana facility, which Zuckerberg discussed last month, costs about $10 billion and spans roughly four miles. The new campus would require 5 to 7 gigawatts of power. For context, power draw has become the standard yardstick for these projects: OpenAI plans to acquire 8 gigawatts for its Stargate project by 2030, while Microsoft's entire global Azure cloud business consumed around 5 gigawatts at the end of 2023. A single gigawatt powers approximately 750,000 homes.
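Taken at face value, the article's rule of thumb translates these figures into household equivalents. A rough back-of-envelope sketch (the 750,000-homes-per-gigawatt figure is the article's approximation, not an engineering estimate):

```python
# Convert reported power figures to household equivalents,
# using the article's rule of thumb of ~750,000 homes per gigawatt.
HOMES_PER_GW = 750_000

projects = [
    ("Proposed Meta campus (low end)", 5),
    ("Proposed Meta campus (high end)", 7),
    ("OpenAI Stargate target by 2030", 8),
]

for label, gigawatts in projects:
    homes = gigawatts * HOMES_PER_GW
    print(f"{label}: {gigawatts} GW \u2248 {homes:,} homes")
```

At the high end, the proposed campus would draw as much power as roughly 5.25 million homes.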
Meta's capital expenditure guidance for 2025 stands at $60 to $65 billion, up nearly 70 percent year-over-year. The scale of spending reflects Zuckerberg's willingness to make aggressive long-term infrastructure bets without waiting for demand to materialize fully. When Meta needed to match TikTok's AI-driven feed personalization several years ago, Zuckerberg famously ordered not one but two Reels-sized data centers so the company wouldn't be caught flat-footed again. That surplus capacity proved valuable when the large language model boom arrived, allowing Meta to train and deploy Llama independently.
The proposed $200 billion campus appears designed with multiple revenue scenarios in mind. It could serve demand from chatbots deployed across Meta's apps (Instagram, Facebook, WhatsApp), power video generation, or enable entirely new AI products Meta hasn't yet launched. The point is flexibility: by spending on raw infrastructure rather than guessing which specific product will drive demand, Zuckerberg de-risks the bet.
Zuckerberg's speed benchmark is Elon Musk's xAI. Leaders at both Meta and OpenAI reportedly grew concerned about how quickly xAI stood up a data center in Memphis, Tennessee last year. That pace is now the competitive bar: data center execution speed has become the metric by which AI infrastructure ambitions are measured.
There is material uncertainty baked into the plan. Adam Mosseri, Instagram's head, told staff this month that Meta is waiting on Nvidia chip orders placed last year. More significantly, he flagged that Meta may need "drastically more or drastically less capacity than we thought to build frontier models." That language signals genuine uncertainty about post-DeepSeek AI requirements. DeepSeek demonstrated algorithmic efficiency gains that could reduce the absolute compute required to train competitive models, but it's unclear whether those savings are one-time improvements or will compound with scale.
The risk to Nvidia from statements like Mosseri's is real. If companies over-commit to chip orders and then realize they need far less capacity, those chips flood secondary markets at depressed prices, and Nvidia's direct revenue suffers. If companies under-commit and then need to scramble, the shortage works in Nvidia's favor. Meta is essentially saying it doesn't yet know which scenario is unfolding.
Zuckerberg is also positioning Meta as an independent AI power capable of acquiring distressed AI companies. If M&A returns at scale, owning a $200 billion data center campus becomes a weapon: he can tell founders of money-losing AI startups with product-market fit that they'll gain access to compute and infrastructure they could never build alone while retaining operational autonomy. Character.AI, among others, could fit naturally within such a structure.
The ambition underscores a strategic pivot. Meta's rebrand away from the Facebook name now reads as intentional: Zuckerberg is signaling that he's building a parent company across products and modalities, not betting everything on a single app. That hedge becomes more valuable as each product—Instagram, WhatsApp, Threads—faces generational turnover. A massive compute infrastructure can outlast any individual platform.