Interview

Khosla Ventures' Jon Chu: Meta doesn't need to beat OpenAI — being number two is worth billions

Jun 23, 2025 with Jon Chu

Key Points

  • Meta doesn't need to beat OpenAI to generate billions in value; finishing second in AI with a billion-plus user base and superior execution speed makes the company a credible contender.
  • A 1% improvement in Meta's ad-ranking model translates to roughly 1% of revenue, worth billions today and potentially tens of billions as the business scales.
  • Foundation model startups face brutal economics at current valuations; outside acquisition outcomes, competing with OpenAI's distribution, brand, and GPU capital advantage generates minimal VC returns.

Summary

Jon Chu, partner at Khosla Ventures, argues that Meta doesn't need to beat OpenAI to make its AI push worthwhile: finishing second in a market this large still generates billions in value. That framing shapes his read on the reported move of Daniel Gross from SSI to Meta. At Meta's scale, a $5 billion talent bet is a rounding error if the prize is dominance in the next computing platform.

Chu points to Meta's credibility as a contender. It executes quickly as a fast-follower, copying Snapchat with Instagram Stories and outexecuting the original. Zuckerberg has a track record of taking asymmetric bets even when some fail. The AI bet carries meaningfully higher conviction than the metaverse did because AI already produces measurable commercial outcomes. A Windsurf field engineering contact told Chu about an enterprise planning to buy 2,500 Windsurf licenses because the tool would let it operate with 2,500 engineers instead of 5,000.

Meta's AI upside layers

Meta's AI opportunity breaks into three levels. First, new hardware products. The reported Jony Ive and OpenAI collaboration signals that wearable or ambient AI hardware is coming, and Meta already has the supply chain and research depth to compete. Second, messaging engagement. Embedding a capable model inside WhatsApp and Messenger drives session time and ad views. Third, and most structurally significant, ad ranking. A 1% improvement in Meta's ad-ranking model translates to roughly 1% of revenue, worth billions today and potentially tens of billions as the business scales. Chu has a portfolio company whose founder, a former distinguished engineer at Meta, is building a foundation model for ranking that eliminates feature engineering entirely. He saw a second company that same morning claiming similar results.

The SSI departure

Chu says he genuinely doesn't know Dan Gross's reasoning for the rumored move to Meta. His honest read is that SSI is a pure research lab with no product surface, and it's not obvious that's where Gross's interests or strengths lie. Meta offers the research scale of a top lab plus distribution to a billion-plus users, a combination that lets you feed product data back into model training at a speed no standalone lab can match. Reading the move as a signal about AGI timelines may be overinterpreting it.

Memory as moat

Chu is skeptical of the "Plaid for memory" thesis. The idea that a neutral layer could port user memory across model providers assumes memory is a retention moat for whoever holds it, modeled on how Google's search data flywheel made its product self-reinforcing. But he sees no incentive for OpenAI or Anthropic to allow portability, just as Google never let users export search history to Bing. The idea appeals to users, but there is no supplier-side logic to support it.

Defensibility in the app layer

On whether AI application companies get commoditized by model providers or hyperscalers, Chu is impatient with the framing. The moats are workflow lock-in, community, open-source leverage, and data network effects. These are the same ones that have always mattered in software. The AI label doesn't change the underlying logic. Shallow GPT wrappers are already dying, and Chu is watching some technically deeper products get absorbed by Anthropic and OpenAI. The question is whether a startup can reach a defensible position before the model layer expands into its space.

Underwriting AI at today's prices

Chu has been priced out of nearly every AI research round he has tried to do this year, often by 2x. The only underwriting framework that holds at these levels is team quality plus opportunity size, the same one Khosla Ventures used when backing Sakana AI. KV's concentration in OpenAI, rather than spreading bets across Anthropic, xAI, and Cohere, was not a formally debated decision; it emerged organically from conviction in the team and compounding ownership. Outside of acquisition outcomes, it is very hard to generate a standalone VC return competing with OpenAI, given its distribution, brand, and capital advantage in GPUs.

LLM monetization

Chu expects experimentation across every monetization model for consumer LLMs. Aravind Srinivas at Perplexity has already said publicly he plans to test ads. Whether subscriptions persist or the product drifts toward free with ad support is an open question. Chu says he genuinely cannot answer it. The cost curve is still moving and the viable model will depend on which product surfaces attract enough sustained usage to support an ad inventory.