Poolside's Eiso Kant on securing 40,000 GB300s from CoreWeave and building a vertically integrated AI lab

Oct 16, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Eiso Kant

All right, we've got our next guest coming in. Okay, let's bring him in to the TV show. Welcome to the show. How are you doing? Hey guys, good to see you. How are you doing? We're pumped to have you.

If, for people that were watching yesterday, we were reading through the article about your guys' new project in West Texas. And I was like, Poolside? Like, Poolside's going this hard? Yes. And when we realized that it was you, we were pretty excited. Yeah. So, please tell us the story of the company.

Can you start kind of at the beginning, a little bit on your background, and then the first act of the company? Because it does feel like you're at a very unique moment in the company's history. Yeah. So my story into this space actually started, I was listening a couple of minutes before to your interview.

You were talking about Andrej Karpathy and him writing code. I've actually never said this. My story into this space started because Andrej Karpathy wrote an article in 2015 called "The Unreasonable Effectiveness of Recurrent Neural Networks."

It's one of the early kind of blog posts about language models, and it captured me so much that I ended up pivoting my entire company, called source{d}, towards it, and we spent the next four or five years building language models that were capable of writing code. Mhm. Sounds great today.

Back then, 50 people in the world cared. Yeah. Uh, but 2015 was an interesting moment in time because it was followed the year after by AlphaGo coming out.

And at that point I built probably an unreasonable conviction that the combination of language models and reinforcement learning should be able to generalize to frankly anything and everything that you can approximate, including human intelligence. And that was kind of my origin story.

Uh, I met my co-founder in 2017. He was the CTO of GitHub. He made an acquisition offer for that company. Wait, you made an acquisition offer for what company? So, GitHub made an acquisition offer for my company in 2017, because we had the world's first code completion models that were working back then.

Makes sense. Uh, turned down the offer, but nonetheless became really good friends. Uh, and this company ultimately ended up not succeeding. We were too early. And so, you can kind of figure out what happens when, in November 2022, you see ChatGPT come out. Yeah. Right.

It's everything you've been saying for years, kind of into the void, and somebody else just did it, and did it in an incredible manner. And so it gave me this really deep realization at that time that everything was about to change.

It was kind of like a pre-post electricity moment, like we're going to truly, fully reach human level intelligence and go beyond. But the narrative at the beginning of '23 was: all we have to do is scale language modeling. Let's just size them up, predict more tokens on the web. And we just fundamentally disagreed.

So, you know, our view, and it's why we started Poolside two and a half years ago, was that reinforcement learning was going to become the most important scaling axis for model capabilities. Uh, for the first 18 months of the life of our company, that felt like one of the most contrarian opinions you could hold.

I think today it's clearly not anymore. But that's kind of our origin story and why we decided to build a foundation model. And what was the go-to-market that you had in mind?

I mean, we've seen the market play out where there are kind of like synchronous, IDE-based code completion, tab completion models. There's a lot of custom models in that world, almost like, I call them, prosumer use cases, where I hit GPT-5 and it winds up writing code and I didn't even ask it to.

And then there's Codex, there's Claude Code, there's agents, there's Windsurf, there's Cursor, and there's so many different positions. Like, how did you see the market map develop from your world? So we bring it back to what ultimately our mission is, right?

We're here to reach human level intelligence and go beyond. And in that world, intelligence, in our view, is a commodity. Mhm. It's actually a commodity that gets created by only a small number of companies, because of the sheer amount of resources and kind of compounding efforts that go into it.

But I think at the limit, we're not going to find very large differences between one foundation model and the other. And so I think as a foundation model company, you're in two businesses. You're, on one hand, in what I often refer to internally as the barrels of oil business, right?

You're just selling your tokens behind an API, and that's your commodity business. And in a commodity business, you care about cost and scale. And I'll come back a little bit to why the infra project is so important. Yeah. The second, though, is: what do you choose to do with that commodity? Right.
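The "barrels of oil" framing comes down to unit economics: the cost of a served token falls straight out of GPU cost and throughput. A minimal sketch of that arithmetic, where every input below is an illustrative assumption rather than a figure from the conversation:

```python
# Illustrative unit economics of selling tokens behind an API.
# All inputs are assumptions for the sake of the arithmetic, not Poolside figures.
gpu_cost_per_hour = 2.00        # assumed all-in $/GPU-hour (hardware, power, opex)
tokens_per_gpu_per_sec = 3_000  # assumed aggregate decode throughput with batching

tokens_per_gpu_per_hour = tokens_per_gpu_per_sec * 3600
cost_per_million_tokens = gpu_cost_per_hour / tokens_per_gpu_per_hour * 1_000_000

print(f"${cost_per_million_tokens:.3f} per million output tokens")
```

In a commodity business like this, halving cost per token (better hardware utilization, cheaper power) flows directly to margin, which is why cost and scale dominate.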

Once you have intelligence available to you, who do you want to be for? And from day zero, we wanted to be for, frankly, the world's knowledge workforce. We wanted to be for the enterprise. We wanted to be for the world's most high-consequence environments. So our models are now rapidly progressing in capabilities.

It's definitely been nonlinear. And now we see a path to be at the frontier. But when we weren't at the frontier, we kind of decided to cut our teeth in a go-to-market area where really no one else was, which was defense and government. Oh, interesting.

And so for kind of our first customers, we were not just building the model but also building all the crazy enterprise stack to be able to deploy it anywhere, like, literally, on workstations and Humvees, all the way to the larger models in air-gapped environments or gov clouds or places where you needed ATOs.

Uh, and now we're kind of on track to start expanding out of that defense sector and going into the wider enterprise, not just with coding agents, by the way; that's really been our starting point.

It was our view that that's where the market was first going to go and adopt, from frankly not a very intelligent insight: just the fact that developers, we've always been the first adopters of technology. Yeah. So it was kind of clear that that was going to happen, and it's really fun to build tools for yourself.

It's less fun to build tools for a role that you've never done. Right. Yeah. Look, we treat intelligence as this one north star, but in reality, it's actually really multi-dimensional, right?

Like, how good you are at writing poetry is very different than how good you are at writing code, versus how good you are, you know, at being a researcher in biology.

And so the thing that was the unlock, I think, in our space is that, you know, the first generation of models had only been trained on the output of humanity's work, right, like the web, but it wasn't trained on the thought process and actions that led to the creation of that work.

And it was clear that coding was going to be the first domain where we could do that through reinforcement learning, because we could simulate it, right? So we spent the last two and a half years building what I believe is the largest RL environment in the world.

It's a million real-world code bases where, you know, our agents can do hundreds and hundreds of billions of tasks. And so that was partially close to our heart, but it was also frankly very close to where we saw the path to more capabilities in models was going to run through.
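The shape of a code RL environment like the one described — a real repository plus an executable check that yields a reward — can be sketched minimally as below. This is a purely illustrative toy, not Poolside's actual stack; the class and function names are invented for the example:

```python
import shutil
import subprocess
import tempfile
from dataclasses import dataclass

@dataclass
class CodeTask:
    repo_path: str       # a real-world code base checked out on disk
    test_command: list   # an executable check, e.g. the repo's test suite

def run_episode(task: CodeTask, agent_patch: str) -> float:
    """Apply an agent's proposed change in a scratch copy of the repo and
    score it by whether the repo's checks pass (reward 1.0) or fail (0.0)."""
    workdir = tempfile.mkdtemp()
    try:
        shutil.copytree(task.repo_path, workdir, dirs_exist_ok=True)
        # In a real system the agent would edit many files; here we just
        # write its output to one illustrative file.
        with open(f"{workdir}/patch.py", "w") as f:
            f.write(agent_patch)
        result = subprocess.run(task.test_command, cwd=workdir,
                                capture_output=True, timeout=300)
        return 1.0 if result.returncode == 0 else 0.0
    finally:
        shutil.rmtree(workdir)
```

The key property that makes coding the first RL-friendly domain is in that last step: the environment can verify the agent's work automatically by executing it, which is far harder for poetry or biology research.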

Let's get into verticalization. Yeah. Talk about the CoreWeave deal. So the CoreWeave deal we announced has two components to it. Um, we've been able to do what we did in the last two and a half years with 10,000 H200s. Think of that as, like, annually 100 to 150 million worth of compute. Yeah.
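The annual dollar value of a 10,000-GPU fleet can be sanity-checked with simple rental-rate arithmetic; the hourly rate below is an assumed market figure, not one stated in the conversation:

```python
# Rough annual rental value of a 10,000-GPU H200 fleet.
num_gpus = 10_000
rate_per_gpu_hour = 1.50   # assumed H200 market rate, $/GPU-hour
hours_per_year = 24 * 365

annual_value = num_gpus * rate_per_gpu_hour * hours_per_year
print(f"${annual_value / 1e6:.0f}M per year")
```

At assumed rates between roughly $1 and $2 per GPU-hour, the fleet works out to something on the order of $90M to $175M of compute per year, consistent with the range mentioned.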

Uh, but it's orders of magnitude less than what others had. And so we built our efficiencies around orders of magnitude more efficient experimentation. Uh, but your big model run is still your big model run. Yeah.

And so the size of the model you can train, and the duration for which you train it, have a clear correlation with the final intelligence and capabilities.
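The link between model size, training duration, and capability is usually formalized through training compute. A common rule of thumb from the scaling-law literature (an assumption I'm bringing in, not something stated in the interview) is that training costs about 6 FLOPs per parameter per token:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# e.g. a 70B-parameter model trained on 15T tokens (illustrative numbers):
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")
```

Under this rule, bigger models and longer runs both multiply directly into required compute, which is why a fleet that is orders of magnitude smaller caps the size of your "big model run" no matter how efficient your experimentation is.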

So we needed a lot of GPUs very fast, because we saw that, hey, we had gotten to a point where our models had gotten so good that we knew if we scaled them up, we'd be on track to be where the frontier was going to be. And so that really started with conversations with Nvidia and CoreWeave, right?

And, uh, for kind of obvious reasons. And we got really excited because we found a path to partner with CoreWeave that brought online more than 40,000 GB300s really quickly. And I don't know how much you guys have discussed the compute market in the past, but right now that scale of compute is impossible to get. It is sold out for the entirety of 2026 and already well into 2027. So we found a strategic partnership that allowed us to do so.

And, uh, so that gets compute online in December, and that gets you to the frontier, but what then?

Like, you can build the world's most capable model, but if you're not able to serve it, if you're not able to scale it up further, if you're not able to train the next generations, then frankly you're not in this race, you're just cosplaying. And so we had to take a step back, and this was already a while ago.

We already started looking at this, you know, a year ago. And so, well, the true bottleneck in our industry is not chips and it's not energy. There's a lot of, like, 400-kilovolt electricity that comes off the grid. There's a lot of sources of energy in the United States.

And while at the limit it is the bottleneck, it's not the immediate bottleneck. The actual bottleneck is bringing it all together and actually having powered shells, like data centers, online. Mhm. Because while last year I could call someone for 50 megawatts and I could kind of, you know, get it within six to nine months.

If I needed to call someone for 250 megawatts, guys, there's no one you can call. Unless you can make, like, a multi-billion dollar payment or commitment, you know, today on a 15-year lease, and then you can get it in 18 to 24 months. Yeah. But I'm not Meta, right?

And so we understood that we had to own that vertical stack entirely if we were going to be able to secure our future.

When, how early did you guys make the call to focus? You know, clearly it sounds like you're focusing intensely on the go-to-market side as well, right? We've heard, you know, there's plenty of labs out there that are taking the route of, like, you know, "if we build it, they will come." And very clearly, even before you guys are at the frontier, you're saying, like, no, we're going to get customers, we're going to get actual enterprise use cases, we're going to be focused on delivering value, and then hopefully those products just get better for the customers as the underlying intelligence improves.

But, uh, yeah, like, was that just an easy, obvious decision to make, or did you guys debate that a lot internally?

So it was kind of the DNA from day zero, who we wanted to be. And so my co-founder and I wrote ourselves a day zero memo, and we often go back to it to see, you know, what's changed along the way. And we kind of understood early on that there were three layers that were really going to matter for the next decade.

It was going to be energy, compute, and intelligence. And in our view, a lot of things were going to become rounding errors compared to those three.

But you also knew that if you see intelligence as a commodity, right, if you treat it like a barrel of oil, a barrel of tokens, someone else is going to deliver it, right? Exactly. And look, so you care about your cost and your scale, hence infrastructure and vertically integrating.

But you don't want to only be in a commodity business; you want to be in a business that either increases someone's revenue, right, or improves their cost basis, like it helps them grow their business. And so from day zero, we said, well, we're not a consumer company. You can probably hear it in how I talk. It's not in our DNA.

We're not a chat app in your pocket. I wouldn't know how to build that. But we deeply cared about businesses and enterprises. And so it was a day zero decision to do this. And so from day zero, we built both parts of the org: our applied research org and what we call our production engineering org.

This 2-gigawatt facility feels like leapfrogging a lot of the one-gigawatt clusters that we've seen talked about, whether it's Colossus 2 or what Meta's doing with Prometheus and what OpenAI and Anthropic are doing. Like, is that the intention? Or do you think that you're more just, like, catching up to the frontier, and they will be coming online with similar capacity around the same time, and you're differentiated on the type of model that you'll train?

So there's big headlines of gigawatts of power, and we had one of those yesterday. Uh, and we have an incredible amount of power on that land. We actually have six gigawatts of gas that sits there that we can bring turbines online for, to turn into electricity.

But what I think really matters as a foundation model company is your lead time from when you need compute to scale to how long it takes to get it online. Mhm.

And this is where the partnership with CoreWeave is really interesting, because with CoreWeave as the tenant in the data center, we have the ability to determine, ahead of that data center coming online, how much of that compute goes to scaling Poolside and how much of that compute we might have to put on the free market. And they're, of course, world class; I mean, I cannot speak highly enough of their ability to operate large-scale compute. And so for us, it was not about the big headlines, but it was about building a company that could incrementally deliver data centers. And the first 250 megawatts are actually delivered in a quite unusual manner.

There wasn't much about this in the press because it's kind of a geeky topic, but I think here we like geeky topics, of course. So if you look traditionally at data centers, they are these big stick-built buildings. Everything that comes on site is manufactured and put together on site.

Uh, and that's been the vast majority of the data center industry. But mobilizing large workforces and dealing with that complexity means that your ability to scale is not really incremental. I can't add an extra thousand GPUs, another thousand, another thousand.

And so we took a different approach here, where we've got the big stick build that we're building, but it's a long corridor, and we're bringing on essentially data halls of 2 megawatts at a time. Uh, with current-generation GPUs that would be a thousand GPUs. Next generation, that's less, because they're more dense in power.
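The "2-megawatt hall, about a thousand current-generation GPUs, fewer next generation" arithmetic works out if you assume an all-in power draw per GPU; the per-GPU wattages below are assumptions for illustration, not figures from the conversation:

```python
# GPUs per modular data hall, given an all-in power budget per GPU
# (compute plus its share of cooling and networking overhead).
hall_capacity_watts = 2_000_000   # one 2 MW data hall

current_gen_watts_per_gpu = 2_000  # assumed all-in draw, current generation
next_gen_watts_per_gpu = 2_700     # assumed denser next-gen parts

print(hall_capacity_watts // current_gen_watts_per_gpu)  # GPUs per hall now
print(hall_capacity_watts // next_gen_watts_per_gpu)     # fewer GPUs next gen
```

The design consequence is that the unit of scaling is a fixed power envelope, not a fixed GPU count: as chips get denser, each hall holds fewer of them but delivers more compute.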

But what we're doing is off-site manufacturing. So a data center for GPU compute is effectively three layers: it's an electrical skid, it's a cooling skid, and it's a compute skid. Yeah. And we actually designed them so that they fit on the back of a flatbed truck. And, ah, Marc, good to see you.

We have Marc Benioff. Uh, we can go way deeper with you, but thank you so much for hopping on the show. We will talk to you later, and we'd love to have you back very soon. Thank you so much for joining. How are you doing, Marc? Good to see you. Congratulations on all the fantastic news. Thank you so much for joining the show.

How are you today? We don't have audio. Let's make sure that we have audio for Marc Benioff from Salesforce. He is the CEO of Salesforce. Let's bring him in to the show. Thank you so much for joining. Um, our team will sort this out in just a second. We can see you. We can't hear