Sequoia's David Cahn: the AI talent arms race is just getting started — we're only in inning two
Jun 19, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring David Cahn
guys. Thanks for having me. We'll talk to you soon. Cheers. Bye. Next up we have David Cahn from Sequoia Capital coming in. David Cahn, first time on the show. Exciting. He wrote a fantastic piece, and I want him to break it down.
Would you mind kicking us off with a little bit of an introduction on yourself? Hey guys, good to see you. Yeah, my name is David Cahn, one of the partners at Sequoia. Excited to chat with you guys. Yeah, thanks so much for hopping on. Kick us off with the new blog post.
What was the thesis? What inspired it? And then I'm sure we'll tie it to a bunch of news. So the new blog post was about AI companies, or AI labs, being more like sports teams. We all probably saw the news around the Scale AI acquisition, and some inspiration came from that.
And then there are these rumors that we've been starting to hear over the last few weeks, which have finally bubbled out into the public conversation over the last couple of days, around $100 million signing bonuses. Huge amounts of money being spent on top AI talent.
For me, I write these pieces as I think about and learn about AI, and what an exciting time we're living through. I'm pretty fascinated by the human dynamics of it all. There are like seven to ten people at the top of these big tech companies.
They control, you know, the Magnificent Seven are now a third of public market cap. They're extremely powerful and important. And sometimes in AI there's this notion that AI is super abstract or these things are inevitable, but actually it's human dynamics.
It's sort of this game of 3D chess being played by these really fascinating individuals. And so, as observers on the sidelines, we all get to watch and see how this stuff plays out. And I like to write about it as I think about it.
I posted on February 2nd: companies should do NBA-style trade deals. I want to see OpenAI trade their COO and CFO to Anthropic in exchange for their CMO, a cracked PM, and a couple of Waterloo class of 2026 new grads. Well, there's kind of this new draft dynamic, right?
Like every year there's kind of a new draft, and as people see these big packages, probably all the Stanford kids these days want to be AI researchers, so there is this notion of the pool getting refreshed. What is driving this?
Is it true AGI-pilling at the top of these organizations, where they think it's going to be winner-take-all, or a $10 trillion market, so there's no amount of money that counts as overinvesting? Or is it just, hey, it's a more competitive dynamic?
And sure, we're a trillion-dollar company, so spending $10 billion to move our market cap 1% is totally rational economically. What do you think's driving this? And I want to get into the different cultures of the different Mag 7, because some of them don't seem to be doing this yet.
Meta Platforms had $63 billion of net income last year. So is spending a quarter of that to be a major player in the next wave worth it? They could have bought the Lakers six times over with that net income anyway. Well, yeah.
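The back-of-envelope numbers here are easy to sanity-check. A minimal sketch, assuming the transcript's rough $63 billion net income figure and a roughly $10 billion Lakers sale price (the Lakers valuation is an assumption, not stated in the conversation):

```python
# Figures from the conversation plus one assumption (the Lakers price).
net_income = 63e9      # Meta's approximate net income last year, per the transcript
lakers_price = 10e9    # assumed reported Lakers sale valuation

quarter_spend = net_income / 4               # "spending a quarter of that"
lakers_multiple = net_income / lakers_price  # "bought the Lakers six times over"

print(f"Quarter of net income: ${quarter_spend / 1e9:.2f}B")  # $15.75B
print(f"Lakers multiple: {lakers_multiple:.1f}x")             # 6.3x
```

At roughly six times, the "bought the Lakers six times over" line checks out under these assumptions.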
What is your take on the ethos that's driving these bigger packages? Yeah, when I write these posts, my frame of mind is that I almost put myself in the shoes of these people, and I try to imagine what I would do, how I would think about it, what the game theory of it is. Yeah.
And I think there are two things. One is that the revealed preference seems to be that they're AGI-pilled. People can tell you a lot of things; I think you learn a lot more by watching the decisions people make. And the evidence suggests that they believe AGI is coming.
It's extremely important for these companies. It's sort of must-win. And I think for Meta, with these decisions, it's almost all-in: we have to win.
And I think there's a second dynamic, which is that you can believe these things, but we're all human, and again I'm fascinated by these human dynamics: you can get caught up in an arms race. As humans, we see evidence through a lens we already have, and oftentimes we overemphasize reinforcing evidence and underweight evidence that disagrees with our point of view.
So you can imagine that, three years into this AI moment that started with ChatGPT, people are really caught up in this. The arms race dynamics are something I wrote about in the piece, and I've commented on in the past with AI's $600 Billion Question on the compute arms race. And I guess it's now interesting to see two arms races.
First there was a compute arms race, and everyone kind of got a lot of arms, right? Everyone has a lot of GPUs now. And now there's the talent arms race, and everyone does not have equal talent, right?
So now you're going to see this arms race in talent, and everyone's talking about it, but I think we're still probably in inning two of this talent arms race, because in any arms race, when I up the ante, you have to respond, and I think it would be a fiction to assume that nobody's going to respond to this.
Who can respond, at least from a dollar standpoint? Well, I want to talk about Apple, because it seems like Apple has the money, but they seem like the least AGI-pilled of any organization, and their poor CEO doesn't even crack $75 million a year. You could make more.
He should become an AI researcher and go to Meta if he wants, because these numbers are so big that they're kind of hard to grapple with. After publishing the piece, I was actually wondering how much Fortune 100 CEOs make. Yeah.
And I think an AI researcher is going to make four times what the CEO of Coca-Cola makes. It is kind of wild when you think about the economics. This is a totally new phenomenon in the scale of business. Yeah. Yeah.
And it kind of raises the question: the numbers are huge, but the market caps of the companies are huge too. So the question is maybe not whether the AI researchers should be paid less.
It's whether Apple should be set up to pay Tim Cook a billion dollars a year, so that he can confidently go out and hire a couple of people at a hundred million, or fifty million, or two hundred million, and not feel like the organization has flipped from a pyramidal standpoint; you're still at the top.
There's always a weird dynamic with a founder CEO who's taking a low salary and wants to hire a big shot. Can you really have a healthy reporting dynamic if you're making half as much as your direct report?
Well, the question is the marginal benefit. With any salary, if you think about it in pure economic terms: what is the marginal benefit you get from hiring this person? On a sports team, with a pivotal position, you can very clearly understand the economic rationale. You understand sports licensing and the way these businesses make money; hiring a star player actually does make economic sense for some of these franchises. And then the other element is that sports teams are owned by mega-rich individuals for whom ownership of the team is more than an economic investment, right?
Maybe they really care about the city. Maybe it's cool to own a sports team. And so I wonder if some of those actual sports dynamics play out here, where question one, and I don't think we know this yet, is: what is the marginal benefit of an AI researcher?
And again, the revealed preference these organizations are showing us is that if you're one of the 50 AI researchers who's going to get us to AGI, the marginal benefit is incredibly high, right? So that's the revealed preference. And then second, if you have a team of all-stars, what does that do for your company?
What does that do for your market cap? What does that do for the innovation inside your company? So I don't think we know the economics of it yet. I think you can make the argument in favor and say, hey, it actually is economically rational.
This is the only thing that's going to matter if you increase the probability that we get to AGI by X%. That is impactful. I also think you could make the counterargument and say, hey, everyone just wants to have the team of all-stars; it's not actually economically rational.
CEO pay, by the way, is linked, you know, there's a lot of criticism of CEO pay historically, right? But CEO pay is functionally: what is the replacement cost of this individual? What is the marginal benefit to the corporation?
And there's a lot of brain damage that's gone into comp committees at public companies on how much CEOs should get paid, right? They're not arbitrary numbers. And this is more out of thin air; this is more of a new experiment. So we're going to see whether it is economically rational or not.
But regardless of whether it's economically rational, it is self-perpetuating. If one company is offering everybody this amount of money and you're in an arms race, everybody's gonna have to respond. Yeah.
Have you, or anyone on the team, compared this to what's happening in high-frequency trading or on Wall Street?
Because there's an interesting dynamic there where if a high-frequency trader comes in and sets up some trading strategy that could produce $100 million in profit basically in perpetuity, then if they leave, they can't take that code or strategy with them.
And there's intense scrutiny on whether or not they are trying to exfiltrate that strategy. With AGI research, it feels like even if I go develop a transformer at Google, like it's open source immediately with the paper and then even the secrets about oh reinforcement learning with human feedback is important.
Like that just kind of leaks out immediately, and DeepSeek can clone it. It just feels like a much more porous environment over in tech. And I don't know if that's just the legacy of the open-source community, but can you walk us through the comparison between the two worlds?
It is such an interesting dynamic. We just had Mike on from ARC Prize, and he was saying we need new ideas.
The issue is, if you pay somebody a hundred-million-dollar signing bonus, they come into your organization and generate a new idea that gets us one step closer to superintelligence, or whatever you want to define as what people are aiming for.
And then immediately, it's actually not really IP. You can't patent it, and then everybody benefits, right? So, but yeah, what's your take? It does seem pretty porous. I mean, people are moving back and forth. I don't think this was true before.
I mean, when you think back four or five years ago in AI, people were very loyal to these institutions. It does seem like that's changing. I mean, it is really hard to say no to these types of big numbers.
And so I totally understand why people are saying, "Hey, this is a life-changing amount of money for my family. Of course I'm going to do it." And then, to your point, the question is: in the high-frequency trading world, there are non-competes.
I mean, extremely complex contracts when they sign people, garden leave, all this stuff to prevent the secrets from leaking out. What we've seen in AI now, with people moving fluidly between these organizations, is that it's basically impossible to keep anything within one organization.
I roughly like to think of the AI ecosystem as an ecosystem: all of these players are contributing to this body of ideas. There's no proprietary IP. Maybe you're going to have compute scale, and maybe there are moats there. But yeah, it's unclear how that evolves and what you can keep in-house.
I do think maybe one dynamic is at play here. I remember reading in the Steve Jobs bio a story of Steve Jobs recruiting 50 people. He had 50 people working with him on the sort of groundbreaking product that was going to make Apple, and it actually worked.
And then you read about Elon and the 50 people working on Tesla Autopilot. There's sort of this magic number, 50.
I don't know where it comes from, but it does seem to repeat throughout tech history of 50 people is kind of the largest organization that you can get where everybody is talking to everybody and you're achieving incredible results.
So imagine you take that as an artificial constraint, and I think that is what's happening with this lab that Meta is organizing; at least from what I read in Bloomberg, it's going to be about 50 people. If you impose that constraint, then suddenly all of the math also changes, because you're like, okay, well, 50 times $100 million is actually only $5 billion.
Sure, you spend $5 billion on talent. Yes, if you believe that you're going to get to AGI. So I also think the artificial constraint matters, and interestingly, there's some rationality to that artificial constraint.
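The arithmetic behind that $5 billion figure is straightforward; a minimal sketch using the conversation's own numbers (a roughly 50-person lab at roughly $100 million per researcher, both reported figures rather than confirmed ones):

```python
# Back-of-envelope: a 50-person lab at $100M per researcher.
team_size = 50
package = 100e6  # reported per-person package scale

total = team_size * package
print(f"${total / 1e9:.0f}B")  # $5B
```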
What we've seen as these research organizations get bigger and bigger is that you're not producing more results as you add headcount. There's sort of a Pareto: the top 20% of people produce 80% of the results. We need a new coinage for that. The two-pizza team is well defined. This is like the 10.
People call it Cahn's law. Oh yes. Yeah. Okay. Yeah, I'll take that. A Cahn-sized team. One Cahn team. Yeah, that's a Cahn. It's just a Cahn. Yes. Yeah, that's fascinating. Jordi, do you have anything else? I was interested if you had a reaction to The Gentle Singularity.
It's published on Sam's blog, which means it's not directly content marketing; it's not directly from OpenAI, but obviously you should read into it in multiple ways. Did you have any specific reactions to that?
The question about that is always disruptive innovation versus sustaining innovation, and that ties to meta strategy.
But I'd love to know: the question I've been asking today is, how many unprofitable multi-billion-dollar AI labs can the capital markets support over the long run, over a five-year period?
If we stall out for a few years in terms of really meaningful progress, which Mike has said people aren't making right now, at least against the ARC Prize, then it matters where you sit. OpenAI is actually in a great position: they have a subscription business, they have a consumer tech company with a lot of revenue. But there's this tension at the labs, where you have billions of dollars on your balance sheet, so in theory you could have a lot of runway, but at the same time, to make progress you have to spend a lot of money, both on talent and on training runs and data centers, et cetera.
So I just have this question around the next three years as a very interesting period. Yeah, I think there are two pieces to that. One, and I think about this a lot, is the long run in AI. What does that actually mean?
And there were all these essays being published last year, right? Like, AGI is coming in 2026. It is interesting how the narrative has changed in the last 12 months. A year ago, you had all these people saying, "Hey, I'm one of the hundred people who knows."
I really am resistant to these types of arguments; I find them frustrating. But, you know: "I'm one of the hundred people in the social circle where all my friends are building AGI, and AGI is coming next year, and you guys are all crazy if you don't see it, so just be aware:
life is going to change dramatically." And now we're at the gentle singularity, right? It's sort of interesting, this contrast. That's what I'm saying.
It's a huge contrast, and it's very convenient if you have a consumer app that billions of people are going to use in the next few years, with a bunch of different ways to monetize it. And for me, I would tie it back: I did this math last year, the $600 billion question.
It was initially a $200 billion question, but it was basically: hey, if you look at Nvidia revenue, you can use that as a proxy for total data center spending. We're spending $300 billion on data centers. We need to make $600 billion of revenue off of those data centers to get a 50% gross margin. Yeah.
And so I had done this math, and then I basically said, hey, total revenue in the AI ecosystem: at the time, OpenAI had about $3 billion of revenue. And I did some rounding and said, okay, give everyone else a ton of credit,
and maybe there's $50 billion of revenue, but we're like 10% there, right, in terms of actually generating the revenue the ecosystem needs. And now, 12 months later, OpenAI is at $10 billion, the coding AI ecosystem is at $3 billion, but we're still dramatically undermonetizing this technology.
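The $600 billion framing above reduces to two simple steps; a minimal sketch using the figures quoted in the conversation (all of them his estimates, not audited data):

```python
# "$600B question" arithmetic from the conversation (all figures are estimates).
datacenter_spend = 300e9                 # implied annual data-center spend
required_revenue = datacenter_spend * 2  # revenue needed for a 50% gross margin
ecosystem_revenue = 50e9                 # generous estimate of AI revenue a year ago

print(f"Required revenue: ${required_revenue / 1e9:.0f}B")  # $600B
print(f"Fraction achieved: {ecosystem_revenue / required_revenue:.0%}")  # 8%
```

The 8% result is the "we're like 10% there" figure he rounds to in the conversation.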
And to your point, in the long run, the question becomes, how long does that sustain? And I have this sort of mental model now of AI as it's sort of being carried by its own momentum. I think of it almost like this slingshot you're swinging around. And it's like it's sustaining itself by its own momentum.
And there's this arms race and there's this sort of microeconomic game theory of how each player is reacting to each other. Yeah. But at the end of the day, it's momentum that's carrying it. And at some point, maybe we get this AGI thing and then it's like all worth it.
And in the long run, I am very confident it's all going to be worth it; when I'm 80 years old, AI is going to be everywhere. But what do you do in the medium term? And I think nobody's talking about this right now, which is this about-face, this U-turn from one year ago:
"You guys are all crazy if you don't see AGI coming immediately," to now. I was listening to the podcast about the hundred-million-dollar signing bonuses, and it's like, well, you know, AI actually hasn't changed people's lives that much; it's going to change people's lives later.
I just think it's interesting and these narratives change quietly, right? People don't talk about them and then they sort of quietly change. Well, there there there are big labs that directly benefit from the narrative that AGI is a year away.
And then there are labs that will benefit greatly from a gentle singularity, where their competitors will struggle to raise additional capital in the long run, struggle to compete, struggle to retain talent. Yeah, I know exactly what you're saying. Makes sense.
Also, and I don't think this is about one company, the whole ecosystem has to deal with this, but there were a lot of promises made a year ago. Yeah.
And I think a lot of people would like to ignore those. Like, what's going to happen when we pass all these deadlines where we've been told, that's AGI? I just think that's interesting. And clearly that's not changing; we're upping the ante right now.
It's millions of dollars to people. But I guess this is part of why I think things get taken to such extremes: everyone believes the prize is so big, and now you have to up the ante.
So I think for a while we're just going to keep being in this phase of everyone upping the ante to say, "Okay, we're not there yet, but we're going to get there. We're going to get there. We're going to get there." What does that look like?
Well, this was a fantastic conversation. I want to have you back on as soon as possible to go way deeper into what this means for the early-stage and mid-stage markets, because I'm sure you have a lot of visibility there. But we'll let you go and get back to the rest of your day.
But thank you so much. Glad we coined a new term: a Cahn. It's a talented group of technologists building the future. One Cahn. Get yourself a Cahn and make it happen. Thank you so much; this was fantastic. We'll be right back. Talk to you soon.
Uh next up we have Walden from Cognition coming in keeping the AI chat going uh talking to him about