Space data centers could be the cheapest way to run AI inference within 3 years as orbital cooling and launch costs improve
Dec 19, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Pranav Myana
Speaker 2: We'll find out. Well, Pranav, welcome to the show. Thanks so much for bearing with us while we had to reschedule you. We appreciate you taking the time to chat with us on the last show of the year, Friday, December 19. Would you mind kicking us off with a little bit of an introduction on yourself? Then I'd like to go into the project, and then we can ask some questions about space data centers, the topic of Q4 2025.
Speaker 8: It's an honor.
Speaker 8: Yeah. Well, of course. Well, first of all, I wanna say thank you so much for having me on here. And what a group of handsome young men we have here today. Space data centers.
Speaker 3: I mean, Tyler here is the youngest and the most handsome, so he's off camera, but he's here. Look at him. Look at him. Look at our little
Speaker 2: Oh, he still has the Gigachad filter on.
Speaker 3: That's crazy.
Speaker 2: That's a little more subtle, but definitely filtered.
Speaker 3: Definitely still on.
Speaker 2: Yeah. Hey, I'm sorry. You were saying?
Speaker 8: Space data centers. Yes. Fundamentally, if you're betting against space data centers, you're betting against compute growth. Okay. So we're constrained on Earth by land, water, and power. Mhmm. And our human minds haven't evolved to understand just how much space there is in space. Mhmm. So as you look at these things, you know, Google and Microsoft, for example, have hundreds of millions of dollars of GPUs just sitting around and collecting dust.
Speaker 2: Mhmm.
Speaker 8: And this is like probably surprising to some people not in the energy industry, which is my background.
Speaker 2: Wait. Wait. Wait. Hold on. So you're saying they have hundreds of millions of dollars of GPUs sitting around because they can't get enough power for them? Yeah. Wow. Okay. Continue.
Speaker 8: Yeah. And there's so much cost involved in that. Right? Like, the GPUs might get old, and they have to get new GPUs. And there's so much risk that a lot of these models haven't factored in, and even mine hasn't factored in yet. So there has been a little competition, you know, a little model that came out, and I'm making
Speaker 2: It's the model wars. The space data center model wars.
Speaker 8: I'm making a pretty big update to my model today, and one of my idols is gonna share it around, and we'll hope that a certain someone gets to see it, take a wild guess on who that is.
Speaker 2: Yes. Yes. Yes. You were very prolific with your tagging. It was a good strategy.
Speaker 8: Oh, there's a few more things I wanna spice up there, but we'll get to that later. Sure. So my background is in energy, and a lot of people not in energy probably don't know this, but everybody projects the cost to rise and only rise, you know? And as we have more data centers, we run into more constraints on the ground: again, land; talent, because you need to put talent in all these different places instead of creating these factories and just shooting them up to space; and then power, and then water. Right? There's only a limited amount of that that we can have on Earth, and we have so much more ability to do that in space. So if you don't believe that there's gonna be an AI revolution, if you don't believe that compute is gonna grow exponentially, you don't believe in, like
Speaker 3: Yeah. So I guess the part of the debate that's important is, I haven't seen anyone that says we will never have large data centers in space or we will never have a lot of compute in space. I feel like the debate has been much more centered around the timeline. Yeah. Is it a three to four year thing? Mhmm. Is it a ten to twenty year thing? You know, what is the timeline?
Speaker 8: So the timeline. I think, like, Elon had a tweet the other day which said doing localized AI inference on the satellites will get them to be the lowest cost way to generate AI bitstreams in under three years. And I was working on independently validating that, and I'll send out the model later today. But I think it can be earlier than that. Constraints really ease up in space. A huge part of the energy used on the ground is cooling. A huge part of it is the power. You know, the same solar panel you have on Earth gets so much more utilization in space. So inference, I think, will be moving to space very fast, a lot faster than a lot of people think. And then another thing you guys talked about is the speed of the models, that models are plateauing and that speed matters.
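To put rough numbers on the claim that the same panel gets far more utilization in orbit, here is a minimal back-of-the-envelope sketch. The 20% ground capacity factor, the 99% orbital illumination fraction, and the irradiance figures are illustrative assumptions, not values from the guest's model.

```python
# Rough sketch of "the same panel gets more utilization in space."
# All numbers are assumptions for illustration, not figures from the guest's model.

HOURS_PER_YEAR = 8760

# Terrestrial: panels are rated at 1000 W/m^2 (standard test conditions) and a
# typical utility-scale site delivers roughly a 15-25% capacity factor after
# night, weather, and seasons. Assume 20%.
ground_capacity_factor = 0.20

# Orbit: the solar constant is ~1361 W/m^2, and a dawn-dusk sun-synchronous orbit
# is in sunlight nearly all the time (~99% assumed here; other LEO orbits are lower).
orbital_irradiance_ratio = 1361 / 1000
orbital_illumination = 0.99

kwh_per_kw_ground = 1.0 * ground_capacity_factor * HOURS_PER_YEAR
kwh_per_kw_orbit = 1.0 * orbital_irradiance_ratio * orbital_illumination * HOURS_PER_YEAR

print(f"Ground: ~{kwh_per_kw_ground:,.0f} kWh per kW of panel per year")
print(f"Orbit:  ~{kwh_per_kw_orbit:,.0f} kWh per kW of panel per year")
print(f"Ratio:  ~{kwh_per_kw_orbit / kwh_per_kw_ground:.1f}x")
```

Under these assumptions a kilowatt of panel delivers roughly six to seven times more energy per year in a near-continuously-lit orbit than at a typical ground site.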
Speaker 2: Mhmm.
Speaker 8: So if people really believe in that, ground data centers, if you're close to a ground data center, would be the fastest. But for the 80% of the world that's not in the US, not in Northern Virginia, not in DFW, there is a huge need for lower latency.
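A minimal sketch of the propagation-delay point behind that latency claim: compare a round trip to a LEO satellite roughly overhead with a round trip over fiber to a far-away ground data center. The 800 km slant range, the 8,000 km distance, and the 1.5x fiber routing factor are assumptions, and real end-to-end latency adds switching, queuing, and inference time on top.

```python
# Back-of-the-envelope round-trip latency. Altitudes, distances, and path factors
# are illustrative assumptions.

C = 299_792.458          # speed of light in vacuum, km/s
C_FIBER = C / 1.47       # light travels ~1.47x slower in optical fiber

def rtt_ms(path_km, speed_km_s):
    """Round-trip time in milliseconds for a one-way path of path_km."""
    return 2 * path_km / speed_km_s * 1000

# LEO satellite roughly overhead: assume ~800 km slant range (550 km altitude orbit).
leo_rtt = rtt_ms(800, C)

# User far from any hyperscale region: assume ~8,000 km great-circle distance to the
# nearest big data center, with a ~1.5x fiber routing factor.
ground_rtt = rtt_ms(8_000 * 1.5, C_FIBER)

print(f"LEO satellite overhead:      ~{leo_rtt:.1f} ms round trip (propagation only)")
print(f"Distant ground data center:  ~{ground_rtt:.1f} ms round trip (propagation only)")
```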
Speaker 2: Yeah. I like that. So
Speaker 3: Wait, can you talk about heat dissipation and cooling? Brian in the chat's asking, and I feel like that's been a big question that keeps coming up.
Speaker 8: Yeah. So there is a huge problem with heat dissipation. That is the constraint that we go up against first.
Speaker 6: Mhmm.
Speaker 8: Right? And the reason for that is cooling on the ground works very differently than cooling in space, because space is a vacuum. Mhmm. So on the ground, you have fans and stuff and you have convection. You have mediums to pass this heat through. But in space, you don't have that. Right? So you have to do passive cooling, and you do that through radiators. And these radiators are these really big and really complex systems, and there's this thing called the Stefan-Boltzmann law, which basically means the higher temperature you can make something, the better it is at dissipating heat, but there's a limit to how hot you can make the radiators in space. And the reason for that is you don't wanna get it too high such that you'll melt the GPUs. Right? So the
Speaker 3: You wanna melt the GPUs with, you know, image requests.
Speaker 8: Yeah. Yeah. That's what we're hoping for. So the current designs that we have in the Starlinks are solar panels on one side and then radiators on the other side. But there's no reason to believe that that will be the enduring case. You know, radiators are a hard problem to solve, but the physics works out. It's the engineering, you know, the arcing, the power electronics, all that kind of stuff that we need to figure out in space, but that's an engineering problem that will definitely be solved. So what we'll see is these deployable structures, which are radiators that are folded inside, and then they go out in space and they fold out and deploy. And those will be dedicated radiators and dedicated solar panels. Thermal is the biggest constraint, but there's no reason at all to believe that it won't be solved.
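A minimal sketch of the Stefan-Boltzmann tradeoff he is describing: radiated flux scales with the fourth power of temperature, so a hotter radiator needs less area, but the electronics cap how hot you can run it. The emissivity, temperatures, two-sided geometry, and 1 MW load are assumptions, and the sketch ignores absorbed sunlight, Earth infrared, and view factors that a real design has to include.

```python
# Minimal radiator-sizing sketch using the Stefan-Boltzmann law: radiated power
# per unit area is epsilon * sigma * T^4. All inputs are illustrative assumptions.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_watts, temp_kelvin, emissivity=0.9, sides=2):
    """Area needed to reject heat_watts, radiating from `sides` faces."""
    flux = emissivity * SIGMA * temp_kelvin**4   # W per m^2 per face
    return heat_watts / (flux * sides)

# Example: reject 1 MW of GPU heat from double-sided radiators at 300 K vs 350 K.
for temp in (300, 350):
    area = radiator_area_m2(1_000_000, temp)
    print(f"{temp} K radiator: ~{area:,.0f} m^2 to reject 1 MW")
```

At these assumed values the required area drops from roughly 1,200 m^2 at 300 K to about 650 m^2 at 350 K, which is the pressure toward the large deployable radiator structures he mentions.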
Speaker 2: Okay. Walk me through your assumptions around the progress of just getting mass to orbit. I assume that your model, you know, expects Starship to be massively successful and scale very quickly. But if progress in the space industry stagnated, essentially, you know, we get stuck with Falcon 9, Falcon Heavy or something, that would be pretty bad for the model. Is that right?
Speaker 8: Yeah. That would be bad for the model, but that's another thing. You know, let me stoke the flames of the model wars a little bit. Yeah. Okay. Another thing that the other model didn't take into account is these learning rates. Right? Like, it costs $60,000 a kilogram with the Space Shuttle, and Falcon got it to, like, $1,500. Okay. And, you know, for example, if we modeled that computers were gonna stay at the same level as they were in 1980
Speaker 2: Yes.
Speaker 8: We would have, like, computers that are a 100,000,000 times more expensive. Yes. Like, they would be a 100,000,000 times more expensive than they are today. And I know someone else, Delian, came on the show a little while back, and he talked about how he hasn't seen a compelling argument for data centers in space. I tagged him, I DM'd him. We haven't heard a response yet. I'll send out the updated model.
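One common way to formalize the learning-rate argument is Wright's law, where unit cost falls by a fixed fraction with every doubling of cumulative output. The sketch below is only illustrative: the $60,000/kg Shuttle-era anchor is the figure cited in the conversation, while the 25% learning rate is an assumption, not a number from either model.

```python
import math

# Wright's-law sketch of launch-cost learning rates. The $60,000/kg starting point
# is the Shuttle-era figure cited above; the 25% learning rate is an assumption.

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate=0.25):
    """Unit cost after `cumulative_units`, falling by `learning_rate` per doubling."""
    exponent = math.log2(1 - learning_rate)   # negative for any positive learning rate
    return first_unit_cost * cumulative_units ** exponent

first_cost_per_kg = 60_000
for flights in (1, 10, 100, 1_000, 10_000):
    cost = wrights_law_cost(first_cost_per_kg, flights)
    print(f"After {flights:>6,} cumulative flights: ~${cost:,.0f}/kg")
```

Under this assumed rate, the roughly $1,500/kg Falcon-class figure he cites lands in the several-thousand-flight range; a steeper learning rate gets there sooner.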
Speaker 2: We'll of course put the clip out too, and yeah, we'll see. But wait, hold on. So I actually agree with you. I don't believe in stagnation in mass to orbit. I do think that Starship, although there have been some, you know, minor setbacks, is gonna be a massively successful product. I think it's gonna grow exponentially, and I think we're gonna be able to put a lot of mass in orbit very quickly, especially if we have something good to put in space like a data center. I'm a believer in that. Now what I'm interested in is, what if we are fundamentally in the really, really good timeline, and not only is AI unstagnated and space travel unstagnated, but what if nuclear fusion and power generation on Earth are unstagnated and we see nuclear power become 10 times cheaper? Does that break the model just on a competitive basis? It's like, it's amazing, we can get to orbit really cheap, but we can also get really, really cheap energy here, because all the nuclear folks, who I'm sure you've seen come on the show from time to time, everything that they're doing is working too, and so energy on Earth is way cheaper than what we thought. Mhmm.
Speaker 8: Yeah. So we have a mutual friend, Robin Langtry of Avalanche Fusion.
Speaker 1: That's right.
Speaker 8: And I was talking to him. He was helping me out, and this is what I'm modeling right now
Speaker 2: Okay.
Speaker 8: Which is fusion data centers in space.
Speaker 2: What? Wow. Okay. Let's go. Yeah. Because, I mean, Sam is an investor in Helion too, and so you could imagine that he's thinking about energy, you know, years and years in advance.
Speaker 3: Are you considering volcano data centers? Volcanoes? Active volcanoes. Geothermal?
Speaker 8: Well, that is, like, land constrained. You know, there's only a limited number of volcanoes. But there's a lot of space.
Speaker 3: How are you thinking about, have you tried to more precisely identify what the launch cost would be of a single satellite that's capable of inferencing a model for use on Earth? I'm trying to, you know, there's new parts, right? Panels Yeah. Radiators. Yeah. And, you know, basically the racks themselves.
Speaker 9: Yeah.
Speaker 3: And I feel like that's hard to know exactly, but you can probably zero in on it.
Speaker 8: Yeah. A 100%. So if you look at the simulator I made, and you go ahead some years, you can go to the sandbox and change some of the parameters. Then you can look at the physics and limits tab, and it breaks down the mass per satellite. So it breaks down the panels, the radiators, all the other components that go into the satellite, and it breaks it down as a percentage. So you can actually see and visualize all those components.
Speaker 3: Very interesting.
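In the same spirit as that per-satellite breakdown, here is a minimal sketch of how component masses roll up to a launch-cost estimate. Every mass, the resulting percentage split, and the $/kg figure are hypothetical placeholders for illustration, not values from the guest's simulator.

```python
# Purely illustrative single-satellite launch-cost estimate. Component masses,
# percentages, and $/kg are hypothetical placeholders, not simulator outputs.

component_mass_kg = {
    "solar arrays": 1_200,
    "radiators": 1_500,
    "compute racks (GPUs, power electronics)": 1_000,
    "bus, structure, propulsion, comms": 800,
}

launch_cost_per_kg = 500   # assumed $/kg for a fully reusable heavy-lift launcher

total_mass = sum(component_mass_kg.values())
for name, mass in component_mass_kg.items():
    print(f"{name:<42} {mass:>6,} kg  ({mass / total_mass:5.1%})")
print(f"{'total':<42} {total_mass:>6,} kg")
print(f"Launch cost at ${launch_cost_per_kg}/kg: ~${total_mass * launch_cost_per_kg:,.0f}")
```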
Speaker 2: What else has to happen? How are you thinking about understanding the fifty-fifty point? Jordy and I were going back and forth on, is it one gigawatt of capacity before 2028? I think that would impress both of us. Mhmm. Would that impress you? Or is that your base case? Like, take me through how you're thinking about the future development of this. I think we're all on the same page that it's feasible, it's possible. So the interesting question is how fast can it actually ramp? Because there are certain things, like, you know, Starship just has to be reliable and they have to build a lot of them. And there are some rate limiting factors that might just act as little natural brakes. I mean, at a certain point, if TSMC runs out of capacity, okay, you can't get any more chips. There are all sorts of different shortage points. But how are you thinking about the scale and scope of space data center compute in kind of the medium term?
Speaker 8: It's hard to say medium term. Yeah. It's like, you know, if you see somebody, it's hard to predict how their next day will go. Yeah. But you can predict how their next year will go. Sure. You know? And this is like a longer scale. So it's hard to predict the next few years
Speaker 2: Yep.
Speaker 8: But over the next decades, there will be hundreds of gigawatts in space. I am sure of that. Sure. And like
Speaker 3: We will clip this. You will either look like a super genius and be immediately hired by Elon, or you'll be... No, no, there's some middle ground. I guess one question is who
Speaker 2: who Yeah. Sorry. Go. Go.
Speaker 3: Go for it.
Speaker 8: Oh. For some context Yeah. There's, like, over 2,000 gigawatts sitting in the interconnection queue right now, and that's almost two times the entire US grid capacity, just waiting for paperwork. I mean, the biggest threat to AI is really a guy named Doug at the county permitting office
Speaker 2: Yeah.
Speaker 8: Who hasn't been there in three weeks. And space isn't like constrained in that way.
Speaker 2: The permitting thing is crazy. I mean, it is much easier to sort of do business in space, it seems.
Speaker 3: How do you predict the market will evolve? Do you think anybody can actually compete with SpaceX here? What do you think about Starcloud? Do you think CoreWeave is eventually like, okay, I guess we gotta go to space now. Yeah. Yeah. It's hard enough in Abilene, but I guess you're going to space.
Speaker 8: Yeah. So Starcloud, I think they're doing really interesting work, and I'm really interested in seeing what the results of the stuff they're doing right now are. Because if the results are that these chips that they put out in space without a lot of rad hard measures Mhmm. Are functional, then we can get there a lot sooner than even my projections.
Speaker 2: Interesting.
Speaker 8: And then when you look at, so you said Tesla and CoreWeave. So to go on the CoreWeave point, I think the way that space compute has been, what's it called, calculated, the way the cost of compute has been calculated, needs a complete overhaul. So someone else did dollars per watt of power. I did dollars per compute, but I think the best way is dollars per GPU hour with SLA, so like service level agreements. A lot of it is just taking into account the capex or whatever, but it should take into account capex, hardware amortization, replacement rates, maintenance rates, opex, and all these kinds of factors. So you can think of it like this: if you're a car factory, power is, you know, the throughput of steel you expect. And then compute is how many cars you expect to come out and the cost per car. But the best measure is the lifetime of a car: the opex of that, the maintenance of that, the gas cost of that over its entire lifetime. Right? And that's the best way to model these things, and I'm gonna come out with a white paper about this. But this is really important, and not enough people are talking about this at all. And on Tesla, yes, it's really hard to imagine how this might look without huge vertical integration. I'll say that.
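A minimal sketch of the dollars-per-GPU-hour-with-SLA framing he describes: annualize the capex, add opex, and divide by the hours you can actually sell at the promised availability. Every input below is a placeholder assumption, not a figure from his model or the forthcoming white paper.

```python
# Sketch of a $/GPU-hour metric that folds in capex amortization, opex, utilization,
# and the SLA. All numbers are illustrative assumptions.

HOURS_PER_YEAR = 8760

def cost_per_gpu_hour(capex_per_gpu, useful_life_years, opex_per_gpu_year,
                      utilization, sla_availability):
    """Blended $/GPU-hour: (annualized capex + annual opex) / sellable GPU-hours."""
    annualized_capex = capex_per_gpu / useful_life_years
    sellable_hours = HOURS_PER_YEAR * utilization * sla_availability
    return (annualized_capex + opex_per_gpu_year) / sellable_hours

# Example with placeholder numbers: a $40k accelerator (hardware plus its share of
# the surrounding system) amortized over 4 years, $5k/year of power, cooling, and
# maintenance, sold at 70% utilization against a 99.9% availability SLA.
example = cost_per_gpu_hour(40_000, 4, 5_000, utilization=0.7, sla_availability=0.999)
print(f"~${example:.2f} per GPU-hour under these assumptions")
```

Replacement rates and radiation-driven failures would show up here as a shorter useful life or a larger opex line, which is exactly the kind of factor he argues the simpler dollars-per-watt comparisons leave out.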
Speaker 3: How are you trying to calculate depreciation rates? This is already a debate on planet Earth, and I could imagine in a different environment, you could, you know, maybe be surprised to the downside, needing to depreciate GPUs faster, or who knows? Maybe there's some
Speaker 5: Yeah.
Speaker 3: Some upside Yeah. To it.
Speaker 8: So part of it is Moore's Law. Moore's Law is hitting its physical limits right now in terms of how many transistors you can put on a chip, but there are architecture changes that you can make that can make it better. But I'll actually throw a curveball at you guys, something that people have been talking about. I think the future is not, you know, just AI and orbital data centers. It's optical orbital data centers. It's photonics. Woah. You know? And photonics are so good at matrix multiplication. That's inherent to what they do. And the heating constraint is way lower, by 10 to 20 times, because you can think about these electrons moving in electronics: it's like you're pushing a heavy box across a rough floor, and it's interacting with all the mediums and causing all this friction and heat. But when you have optical stuff, it's going through waveguides, and it doesn't interact with the medium as much. Mhmm. And photonics, it's very, very early, but this will a 100% be the future.
Speaker 2: So, a different type of chip. When you say photonics, you don't mean, like, optical cables between satellites, physically, like the drones in Ukraine that are physically wired to each other?
Speaker 8: No. Those are great.
Speaker 2: Different chip in space, same constellation of Starlink.
Speaker 3: We might need some financial innovation here if we need a lot of debt to finance these space data centers. If the debt goes bad, maybe we could attach some rocket boosters to it and just blast it out.
Speaker 2: Yeah. How's
Speaker 3: Deep into the yeah. Just put it in the sun. Yeah. Send it into the sun.
Speaker 8: That's actually We'll see we'll see
Speaker 3: I feel like I'm working on putting Dyson sphere actually.
Speaker 2: I am going to the sun.
Speaker 8: I actually am.
Speaker 2: Yeah. Wait. Are you really gonna build the Dyson sphere model? Of what it will take? Yeah. Are you Dyson sphere before 2100 or after 2100?
Speaker 8: I I like to go by the math first. So I'm still trying to get the math and physics right. Okay. But you will definitely know.
Speaker 2: But gut intuition before or after? Just gut.
Speaker 8: I'm an optimist, so let's go before.
Speaker 2: Let's go. Let's ring the gongs. Let's go. Well
Speaker 3: This was super fun.
Speaker 2: This was super fun. Thanks so much for coming on the show.
Speaker 3: I got a feeling you're gonna bait the Elon repost.
Speaker 2: He's gonna come for it.
Speaker 3: I think he's gonna come in hot, but
Speaker 2: No, it's not the... yeah. Not the repost. The quote tweet "Interesting." The quote tweet "This is true."
Speaker 3: Yeah. Or the quote tweet thumbs up. Any
Speaker 2: day now.
Speaker 3: Anyway, super fun conversation. Great to meet you. We'll talk soon. Excited to see more of your work.
Speaker 8: Thank you so much.
Speaker 5: Cheers. See you guys.
Speaker 2: Gemini 3 Pro, Google's most intelligent model yet. State of the art reasoning, next level vibe coding, and deep multimodal understanding. We have our next guest already in the Restream waiting room. We have Anna Goldie from Recursive Intelligence. How are you doing, Anna?