Mercor's 21-year-old founder: $1M to $100M revenue in 11 months placing AI training talent at top labs
Mar 20, 2025 with Brendan Foody
Key Points
- Mercor grew from $1M to $100M revenue in 11 months by placing domain experts in software engineering, finance, medicine, law, and consulting at the five largest US AI labs to generate training data.
- The startup pivoted from placing Indian software engineers when it recognized that AI training shifted from low-skill crowdsourcing to high-skill data generation requiring specialized expertise across industries.
- Founder Brendan Foody, 21, raised Series A and B without a pitch deck, betting that value in AI accrues at the product layer rather than at the model API level, where switching costs are near-zero.
Summary
Brendan, co-founder and CEO of Mercor, is 21 years old and has built a business that grew from $1M to $100M in revenue in 11 months. He started the company at 19 with two co-founders he met at a high school speech and debate program in the Bay Area — one was his roommate at Georgetown, the other attended Harvard. Mercor uses LLMs to automate resume screening, interviews, and hiring decisions, with the stated goal of predicting job performance better than a human recruiter.
The growth story is anchored in a specific market shift. Mercor started by placing Indian software engineers with US companies, but pivoted when it spotted a structural change in the AI training data market. What was once a low-skill crowdsourcing problem — getting people to write barely grammatical sentences to train early versions of ChatGPT — has become a high-skill vetting problem. The frontier labs now need domain experts in software engineering, finance, medicine, law, and consulting to generate the data that pushes model capabilities forward. Mercor places workers across all of those domains and says it works with all five of the top US AI labs.
Revenue volatility
The business model carries real concentration risk. A single lab running a math-focused training run can bring in a surge of mathematicians, then wind it down. Brendan's answer is to track leading indicators rather than revenue — specifically, what the most sophisticated labs are investing in next — and to stay away from legacy systems that get phased out as the market moves. His broader thesis is that demand for human data will keep growing because reinforcement learning can now solve almost any eval, which means the bottleneck has shifted to creating evals across the entire economy, a process that inherently requires humans.
Where the customer base is heading
Beyond the frontier labs, Brendan sees application-layer companies beginning to fine-tune models using RL environments rather than traditional fine-tuning data, and doing so cheaply enough to move without major capex. Fortune 500 companies haven't caught up yet; he estimates that will take another one to two years.
On the question of why agents still can't reliably book a flight, Brendan argues there are two gaps: researcher attention has been concentrated on hard reasoning benchmarks like PhD-level GPQA and IMO-level math, leaving eval creation for mundane task automation underfunded; and model tool use remains immature relative to reasoning. He expects both gaps to close within the year, putting reliable consumer-grade agents within reach.
Model commoditization
Brendan is direct about where value accrues in the AI stack: the product layer, not the API. Switching costs at the API level are essentially zero — changing models is a single line of code. He believes labs raising tens of billions to build foundation models are driven more by AGI ideology than by unit economics, and acknowledges the implicit bet: if ASI arrives, the product-layer advantage may not matter anyway.
Fundraising
Mercor raised its Series A and Series B without a pitch deck; both times investors asked for one and were told no. Benchmark, the only investor Brendan names in the conversation, led at least one round, and the competitive process involved a private jet to Las Vegas and Ferrari racing on a private track.