Interview

Will Brown breaks down Meta's AI talent raid and what Zuckerberg's Superintelligence team actually means

Jun 30, 2025 with Will Brown

Key Points

  • Meta is recruiting roughly 20 senior researchers who have shipped models end-to-end, filling a gap distinct from Yann LeCun's FAIR lab, which produces foundational research insulated from product timelines.
  • Meta stock hits all-time highs as Wall Street views AI as more monetizable than the metaverse's $20 billion annual burn, with advertising and embedded video tools as the core opportunity.
  • OpenAI's enterprise strategy locks in customers through bespoke fine-tuning for $10 million spenders and self-serve options on o4-mini, defending against open-source competitors before rivals establish footholds.

Summary

Meta's talent offensive is not about headcount — it's about a specific capability gap. The roughly 20 researchers Zuckerberg is targeting are senior figures who have navigated full research-to-product cycles, the kind of people who can take a pre-training run through to a deployable model. That profile is distinct from what Meta already has in abundance: the academic researchers inside Yann LeCun's FAIR org, who write important papers but are explicitly not tasked with shipping competitive models. LeCun's group functions more like an industry research lab making 10-year bets — the same framing Zuckerberg applied to the metaverse — insulated from product timelines and quarterly model releases.

LeCun's position in the AI landscape is more nuanced than the 'non-believer' meme suggests. He is directionally correct that full AGI capable of replacing a human worker indefinitely has not arrived, and that more fundamental science remains. But his org's separation from product execution is a structural reality, not a strategic choice he controls. Llama was built by an entirely different group at Meta, and LeCun had no involvement.

The incoming hires — including Alex and reportedly Nat and/or Daniel — appear to be taking on roles that bridge research and the commercial layer, handling what one description called 'producty business stuff.' Zuckerberg's reported recruiting target was somewhere between 100 and 250 people, with the confirmed list sitting at around 20 names.

Meta shares hit an all-time high during the segment, up roughly 5% over the prior week. From a Wall Street framing, even if the reported $100 million individual offers are accurate, the total outlay — perhaps $1 billion across all hires — is about 5% of the metaverse buildout's annual burn, which at its peak weighed on the stock at approximately $20 billion per year. AI is seen as structurally more monetizable upfront, which changes the risk calculus.

Meta's core product opportunity in AI is not B2B software — it is advertising and entertainment. The bull case is that Facebook is already arguably the most effective advertising platform ever built, and generative AI could make it two to three times more effective. On the entertainment side, the platform's potential to absorb a meaningful share of the video gaming and social storytelling market — particularly as tools like Runway's V3 become embeddable at the platform level — remains underappreciated. Current AI integrations on Instagram and Facebook have been awkward, and Meta has not shipped a homegrown product innovation that stuck in several years.

OpenAI's position remains structurally strong despite the researcher departures to Anthropic, SSI, Thinking Machines, and xAI. The average ChatGPT user is not tracking researcher attrition. OpenAI holds the center of gravity in AI in roughly the same way Apple holds it in mobile — something would have to go seriously wrong on both the product and research fronts for that default status to shift. The o3 release inside ChatGPT is identified as the clearest recent inflection, with the model demonstrating genuinely generalist agent behavior, including the widely circulated GeoGuessr performance.

OpenAI is also pushing an enterprise customization strategy: for customers spending over $10 million, the company offers bespoke model fine-tuning, with Palantir cited as a client at that tier. A more self-serve version enables fine-tuning on o4-mini for customers spending in the thousands of dollars, supported by forward-deployed consulting. This is partly a defense against open-source alternatives like Llama and partly a move to lock in enterprises before competitors like Thinking Machines establish footholds.

On the data and training market, the view is that raw token volume is hitting diminishing returns. The more valuable work — and the more compute-scalable work — is curating goals and objectives for reinforcement learning, not generating more pre-training text. This reframes the competitive landscape for players like Scale AI and challengers including Mercor, Labelbox, and Handshake. Scale is seen by some as having timed its exit well, though near-term gross revenue in the space remains substantial.

Llama's open-source commitment is not guaranteed. A plausible path has Meta following Google's playbook — maintaining the Llama brand and some open releases while shifting flagship model development to closed APIs or product-embedded deployments. A partnership with CoreWeave, or with Nvidia as it moves into inference serving, is floated as an infrastructure route given Meta's current reliance on AWS.

On Stargate, the most likely near-term experiment is taking a large model — below the scale of GPT-4.5 but substantial — and applying significantly more reinforcement learning than has been publicly attempted, testing what an o5-level RL regime unlocks. RL is more distributable than pre-training but still benefits from colocation due to the need to continuously sync updated weights to inference workers. Much of Stargate's capacity will likely serve inference demand, not frontier training — rate limits across Google, OpenAI's o3 Pro, and others reflect how severely current infrastructure is constrained relative to user demand.
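The weight-syncing pattern described above can be sketched in miniature. This is an illustrative toy, not any real training framework: `WeightStore`, the thread setup, and the stand-in "gradient update" are all assumptions made for the example. It shows why RL rollouts parallelize easily while the trainer-to-worker weight broadcast still rewards colocation.

```python
import queue
import threading

class WeightStore:
    """Holds the latest policy weights behind a lock, with a version counter."""
    def __init__(self, weights):
        self._lock = threading.Lock()
        self._weights = weights
        self._version = 0

    def publish(self, weights):
        # Trainer pushes a new snapshot; every publish is a broadcast cost
        # that grows with model size and trainer-worker distance.
        with self._lock:
            self._weights = weights
            self._version += 1

    def latest(self):
        with self._lock:
            return self._version, list(self._weights)

def inference_worker(store, rollouts, n_rollouts):
    # Each rollout begins by fetching the newest weights (the sync step),
    # then generates data; here generation is faked by summing the weights.
    for _ in range(n_rollouts):
        version, weights = store.latest()
        rollouts.put(("rollout", version, sum(weights)))

def trainer(store, n_updates):
    # Stand-in for an RL update: nudge every weight and re-publish.
    for _ in range(n_updates):
        _, weights = store.latest()
        store.publish([w + 0.1 for w in weights])

store = WeightStore([0.0, 0.0])
rollouts = queue.Queue()
t = threading.Thread(target=trainer, args=(store, 5))
w = threading.Thread(target=inference_worker, args=(store, rollouts, 3))
t.start(); t.join()   # run all updates first so the demo is deterministic
w.start(); w.join()
print(store.latest()[0])  # 5 published weight versions after training
```

In a real system the store would be replaced by a collective broadcast or a parameter server, and trainer and workers would run concurrently; the serialized joins here just keep the toy deterministic.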