Mercor's 21-year-old founder: $1M to $100M revenue in 11 months placing AI training talent at top labs

Mar 20, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Brendan Foody

can have an opinion on Liquid Death. Yeah. Or Dollar Shave Club. You can kind of understand that. It's very tractable. Anyway, we have our first guest. There we go. Welcome to the Temple of Technology, Brendan from Mercor. How you doing? What's going on? I'm doing great. Thanks so much for having me, guys.

I'm excited to be on the most profitable podcast. Yes. Here we go. Well, you're live. Can you introduce yourself? Break down a little bit about your history, your story, and what you're building, what your company does. Yeah. Usually when somebody gets to your stage, they've been in market for five, six years.

So people have more of an opportunity to get to know you. So yeah, we'd love the full intro. Yeah, absolutely. So, I grew up in the Bay Area, met my co-founders Adarsh and Surya when we were 14, in high school. And so we were all on the speech and debate team together.

They were the winningest speech and debate team of all time in high school debate, but I was always building companies in one form or another. So I didn't want to go to college, but my parents insisted that I had to.

And so I went to Georgetown for two years, where Surya was my roommate and Adarsh was at Harvard, and we started Mercor when we were 19 to train models that predict how well someone will perform on a job better than a human can, similar to how a human would review a resume, conduct an interview, and decide who to hire.

We automate all of those with LLMs. And then fast forward two years, we scaled from $1 million to $100 million in revenue in 11 months. And so we're only 21, running this really exciting company that's working with most of the most prominent companies in Silicon Valley.

It's more of a line than a curve over there. It's not really just a hockey stick. You're very smiley right now. You look like you're having fun, but your LinkedIn profile is dead serious.

You know, I'm sure internally as a team you guys are focused on not letting the hype and the crazy initial traction get to your head. How do you think about running the team when the team has only experienced massive, immediate success? Totally.

And I think one thing that compounds this is that so much of our team is college dropouts and new grads, right? Because it was the extension of our initial network at Harvard, Stanford, MIT, etc. And so a lot of them haven't seen what normal companies look like. This is all that they're used to.

And it's easy to get caught up and lost in so much that's happening. But I think the most important thing to stay grounded is just focusing everyone on the numbers that matter long term, right? The network effects of the business. How do we build up the marketplace?

How do we learn from all the performance reviews that we're getting to build our usage data flywheel, rather than focusing on a lot of the lagging indicators like revenue or more of the customer signals? Can you talk about some of the jobs that you're actually placing?

Like, you go to Mercor, you get placed. Are you placing people in white-collar jobs, blue-collar jobs, everything? Any specific niches? Yeah.

So when we started out, we were amazed by the caliber of talent in India, and so we would hire these software engineers in India and use LLMs to assess them, hire them for our friends.

But what we realized was that there was this really large shift happening in the human data market, where large AI labs are hiring thousands of people to train the next generation of LLMs. And it used to be this crowdsourcing problem that was super low-skilled, right?

Of how do you get a bunch of people writing barely grammatically correct sentences for the early versions of ChatGPT. That was moving towards this vetting problem of how do we find the most exceptional people in the world, in high volumes, that want to work directly with researchers to push the frontier of model capabilities.

And so to your question, we now hire roles across almost all domains, or the vast majority of popular domains, ranging from software engineers to consultants, people in finance, medicine, law, etc., even podcasters, to help more traditional customers as well as these AI labs. Yeah. What about robotics?

I mean, there's obviously this question of a data wall in robotics. Google had that famous arm farm where they were trying to generate reinforcement learning data with like 17 robotic arms just working on grasping.

Are we going to see a future where people are getting placed into jobs wearing motion tracking suits to generate robotics data? How do we solve that problem? That's a phenomenal question. I'm not sure. I think so.

There are a lot of questions around what kinds of data production are most efficient for robotics, and I will say we don't provide people to create robotics data as much yet, but it's certainly something that's top of mind as these companies start to mature and really scale up the kind of data that they're interested in.

What about, yeah, what about revenue volatility?

I feel like a lot of these, if you're placing individuals and, you know, some big foundation model company or some big hyperscaler is doing a massive training run, they need to really nail down math, so they're going to bring in a ton of mathematicians, generate a ton of training data, and then they're like, hey, we're actually good, we're moving on to the next thing. That can kind of create a massive oscillation in your revenue. How are you thinking about scaling out and making sure that the growth curve is smooth? Because obviously it's very fast.

But you know, the iron law of the universe is that what goes up quickly comes down quickly. But then maybe there's a second act and you go back up again, and over time it looks smooth, but it can be very jostling in the next few years.

Yeah, I think the most important thing to think about is the leading indicators in fast-moving markets, right? In that if you look at the most sophisticated labs and what they're really investing in, it's this super high-caliber expert data that is far beyond the model frontier of capabilities.

And so, so long as we focus on the leading indicators, the things that all the big tech budgets are starting to move towards, that positions us really well. And obviously the leading indicators evolve over time, and we need to position ourselves there.

I think where companies get themselves into trouble in these fast-changing markets is when they aren't looking at the leading indicators and they're sort of sitting with the large incumbents that are doing a lot of the legacy systems that get phased out. And so that's how I think about it.

But my broader take on the human data market is that it's going to grow dramatically, because we're getting to the point where RL is so effective that you can create almost any eval and it will be able to solve that eval.

And so the barrier to applying AI throughout the entire economy is just creating evals for everything, right? Which is a process that inherently requires humans in the loop. And so I'm very excited about building that world.
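To make the "evals for everything" point concrete: in the RL sense he's describing, an eval can be as small as a programmatic grader that scores a model's output and doubles as a reward signal. A minimal sketch, with a hypothetical answer-line convention that is not any lab's actual format:

```python
# Minimal sketch of a programmatic eval usable as an RL reward signal.
# The "Answer: <value>" convention below is a hypothetical illustration.

def grade_math_answer(model_output: str, expected: str) -> float:
    """Return 1.0 if the model's final answer matches, else 0.0."""
    # Scan from the end for the model's stated final answer.
    for line in reversed(model_output.strip().splitlines()):
        if line.lower().startswith("answer:"):
            given = line.split(":", 1)[1].strip()
            return 1.0 if given == expected else 0.0
    return 0.0  # no parseable answer -> zero reward

# In an RL loop, this score is what the policy is optimized against:
#   reward = grade_math_answer(model.generate(prompt), label)
```

The human-in-the-loop part he mentions is everything around this function: choosing tasks, writing the expected answers, and deciding what counts as correct.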

What are some customers that you can talk about, or maybe allude to, that have been kind of surprising? We talked yesterday about Yum Brands, which is a $44 billion public company that owns a bunch of fast casual restaurants.

They're partnering with Nvidia, buying chips, and effectively building their own AI in-house.

You're obviously working with all the big foundation labs, or most of them would be my guess, but have there been any customers you're working with that you've been surprised by? Not too surprising yet. Like, most of our customer base...

We work with all of the top five labs in the US. And actually, one interesting thing is we're starting to see a lot more customization at the application layer, and I think a big reason for this is that it's now much more data-efficient to customize models with RL environments and a lot of this new kind of data, versus the fine-tuning data that people would use historically.

And so it's now becoming this gold rush for all these application layer companies: without too much capex, they're able to get these incredible results. And so we're seeing some of that, but for the most part, the legacy Fortune 500s haven't really caught on to this yet.

And I imagine that might take a year or two. Well, that'll be a nice rush of revenue once they realize what's happening. I have a question.

Yeah, go for it. Yeah, we've been talking a lot about agents and kind of wondering, obviously agents are breaking through in the enterprise and the coding sector, but we've been tracking when you can actually use an agent to just book a flight reliably. And I'm wondering if we're in this weird thing where you see a foundation model company do a press release and it's like, we're working on fundamental math innovation, or we're going to solve, I forget, that really, really hard problem in math.

And it seems like maybe there's a gap in the human RL training loop just around, like, a really good executive assistant. Is that the gap? Is it that we need more data there to actually break through, or is that just a product issue? Like, why can't Siri reliably book me a flight? Yeah.

So, I think there are two challenges. One is that the interest of researchers has historically been in these super hard reasoning problems, right? They're interested in GPQA: how do we solve PhD-level reasoning?

They're interested in IMO-level math: how do we have models that are beating all of these incredible mathematicians? And they have historically just been less interested in how do we automate a McKinsey consultant or an executive assistant. I think that shift is starting to happen, and so that's a big part of it.

And that ties into the data that they create, because of course, if you want to automate a McKinsey consultant, you need evals for everything a McKinsey consultant does. If you want to automate an EA, you need evals for everything that an EA does.

I think the other part of it is that the models are just starting to get really good at tool use. Tool use is still relatively immature compared to all the reasoning capabilities.

And so as that starts to happen, I would really expect that this year you're able to use Operator, or whatever the equivalent agent is, to start booking flights and doing a lot of these more mundane tasks. Have you ever been approached by any of the labs around, help us make our models funny?

Like, is there a world where you guys show up to the Comedy Store here in LA and you're trying to, you know... We're going to test how funny you are. We've got some opportunities for you if you can. The Joe Rogan mothership.

We just pull them right off stage. You know, the Kill Tony bucket pulls. Hey, you got the joke book. Come on.

But it seems like, when every model's pretty good at some of these foundational problems and thinking, then the way to differentiate is, can a model be more entertaining for consumers? And there's value in that itself. Yeah.

A lot of our customers at the frontier, as you've seen in recent releases, are starting to think a lot more about humor and these exciting things.

We have been hiring a bunch of people out of the Harvard Lampoon and equivalent places that have these comedic skills to help teach models how to get there. And so it's really like every capability you want, you need evals for.

John has been doing the Coogan eval, which is just, he asks various models to tell a joke, and then they end up taking you on this winding thing that sounds like a joke but doesn't actually have a punchline.

I mean, it's hard because there's not a quantitative eval for humor. It's so subjective and so qualitative. Another question, just related broadly to the future and how these labs, so, you know, the labs are your customers.

The critique of the labs broadly, and the model companies as they raise tens of billions of dollars, has been: if the models are commoditizing and intelligence just becomes too cheap to meter, where does the value come from?

John's point of view has been that there are many commodities in the world that have a ton of value, and there are a bunch of great businesses that drill and produce oil and then sell it, even though oil is a commodity that just trades on the open market.

How do you see the model market long term, and do you see a world where there are constantly new companies being spun up to create these sort of bespoke, specialized models for different use cases? Yeah, I think the value will accrue to the product layer.

Like, I don't think about OpenAI as an API business as much, and it seems like most people are really placing a bet on the product side of things, which will start to get very exciting, and spread out across many companies, considering that customizing models is so much more affordable than it used to be.

So that's sort of how I see it playing out. I think one of the things that a lot of investors don't quite realize, when they're not in the code building these products, is how low the switching costs are for APIs, right? Like, it's a line of code to switch back and forth and see how new models are doing.

And so it's hard to build a business when that's the switching cost. And I think it's really going to be in the product layer.
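The "line of code" claim is close to literal in practice: most model APIs share a chat-completion request shape, so the model is effectively one configuration value. A minimal sketch of that idea; `call_model` and the model identifiers here are placeholders, not any vendor's actual SDK:

```python
# Sketch of why API switching costs are low: the request shape is the
# same across providers, so the model is effectively one config string.
# `call_model` is a stand-in for an HTTP call to a provider endpoint.

def call_model(model: str, messages: list[dict]) -> dict:
    # Placeholder for e.g. a POST to a provider's chat endpoint.
    return {"model": model, "reply": f"[{model}] echo: {messages[-1]['content']}"}

def ask(model: str, prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    return call_model(model, messages)["reply"]

# Switching providers is just a different string here:
a = ask("provider-a/frontier-model", "Book me a flight")
b = ask("provider-b/frontier-model", "Book me a flight")
```

Because every call site goes through `ask`, comparing a new model against the incumbent is a one-argument change, which is the low-lock-in dynamic he's describing.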

Well, I mean, given that, do you have a view on why every single ex-OpenAI employee seems to start the exact same foundation model company instead of doing something else?

I was really hoping for, like, a supplements company from Ilya, or, you know, a hair loss thing or something fun, something different, a t-shirt company. But everyone just seems to be, hey, I'm going to do what I know. I've been building foundation models. I'm going to stick with it. Yeah, that's a good question.

I think a lot of it ties to the ideology around AGI, right? This is the most important problem in the world, and so how do we all race as fast as possible to get there, rather than a lot of the unit economics and competitive dynamics that investors would be thinking about. Yeah. Yeah.

So they're basically just disregarding the fact that, sure, the value might accrue to the application layer for 10 years, and then it's completely irrelevant once ASI arrives. Yeah. Yeah. That's an interesting strategy.

If value in the short to medium term accrues to the product layer, but then you actually can build artificial superintelligence, then none of it potentially matters. Yeah. In the end. So it's an interesting strategy. Last question for you. I know we have a minute left.

This is the most champagne problem that founders ever have. And when you were fundraising, you probably couldn't even drink champagne, but do you have any funny stories? I'm sure you raised a series of very competitive rounds.

Do you have any funny stories of, like, offers investors made to just try to get you? You know, the classic is, "Didn't you buy a helicopter?" Like the classic helicopter, but a private jet to Vegas to race Ferraris. There you go. There we go. I knew there was going to be something.

You're like, I never actually got my driver's license. They're like, it's fine. It's a private track. Are those effective? Do they win you over? It sounds like you went with Benchmark after the helicopter thing, so it's effective. I know.

They can be very tempting. For our Series A and our Series B, we never actually created a slide deck; both times people asked for it and we just weren't really willing to create one. But the sales processes can be a lot of fun.

It's sort of like Christmas, where you're getting all of these gifts and fun experiences, but also trying to balance that with it not being too much of a distraction from building the company. Well, that's fantastic. Congrats to you. Good luck, and we'll have to check in soon when you have new fundraising news.

Absolutely. Every week. Every other week. Every other week. Let's get on a schedule. Yeah. Same time next week for the Series F. Right. Okay. Cool. We'll make it happen, John. There we go. All right. Give our best to the team. Thanks for coming on. Bye. Thanks. See you guys. This is great. That was a fun one.

Oh, to be 21 again. Yeah, I know. 21, getting flown out to Vegas to fly helicopters and race Ferraris. That's very TBPN-coded. Well, we've got David Center coming on the show. Gonna ask him about Nvidia. Gonna ask him