SemiAnalysis ClusterMAX 2.0: CoreWeave leads, FluidStack debuts gold, AWS networking is a debugging nightmare
Nov 10, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Jordan Nanos
Speaker 10: That's right. Yeah.
Speaker 2: What are the biggest findings? How do you introduce ClusterMAX in the, you know, 140-characters version? I'd love to just get into the actual findings.
Speaker 10: Yeah. Well, it's it's great to be here with two of my technology brothers. Thanks.
Speaker 1: There we go.
Speaker 10: I can definitely take you through ClusterMAX. Basically, the way we see it, some of the most critical transactions in AI are happening when people are renting GPUs from a provider. So this is companies like CoreWeave, Nebius, Lambda, FluidStack, Crusoe. They've been coming to market and taking share from, you know, AWS, Azure
Speaker 2: Yeah.
Speaker 10: GCP, Oracle. And I think a lot of people basically think that all GPUs are deployed the same, and that's definitely not the case. So we go through a lot of details in this article about how our hands-on experience, and then feedback from customers, kinda makes it clear that it really depends which provider you go with when you're making a critical decision, like where to buy your GPUs.
Speaker 2: Yeah. How do you actually feel like you're getting accurate data? Because if a SemiAnalysis email shows up and I'm running a neo cloud, I'm like, yeah, let's put them in the best data center we have, I imagine. So it feels like blind taste testing. We were just talking to a restaurant reviewer. I feel like you're a data center reviewer. These segments are actually pretty similar.
Speaker 1: Fake fake companies. Even get them into YC and just sign up.
Speaker 2: Yeah. Something's going on. But, yeah, talk about the pitfalls of the process, or, like, how you actually tease out what's actually going on.
Speaker 10: Yeah. I don't know if we're really doing a secret shopper thing quite the same. Like, we definitely got CTOs on the phone in some cases to walk us through the latest and greatest. But in my research, I talked to 140 different customers of these neo clouds to get their perspective on exactly what their experience was with reliability and support at scale. And I think it was quite revealing because, like I said, sometimes you get a really good experience in one data center that's quite different from somebody who's running stuff in Iceland or something like that.
Speaker 2: So what changed with this version? I know that the actual graphic has exploded. There's gonna be more
Speaker 1: Do we need more neo clouds, by the way, or are there enough? Do we have enough?
Speaker 2: But, yeah, what else did you find?
Speaker 10: Hey. We need more, man. We need more of it all.
Speaker 2: Let's hear it. Go for more. More. Yeah. Thank you for the tireless work indexing the entire market. Yes. But what were the findings?
Speaker 10: Yeah. Look. I think we probably don't need a lot more Silicon Valley venture-backed startups that are all renting GPUs from another provider and selling them with a bit of software on top. But we're seeing a lot of the, like, sovereign AI projects just get started, and it looks a lot like a telecom build-out, where, you know, different regions or countries have their own regulations or their own sovereign wealth funds that are funding the build-out of a bunch of GPUs. And I think that's just getting started. So you're right. The biggest difference is we covered 84 neo clouds this time as opposed to 26 last time.
Speaker 2: Mhmm.
Speaker 10: In our total market view, there's now 213. We learned about four of them since we published last week.
Speaker 2: I feel like, hey
Speaker 1: I mean, if you're a neo cloud and you're not on the list, it's probably fair for the investors to send you that and be like, hey, what happened?
Speaker 10: I mean, we're doing our best to get coverage, but they pop up all the time, with different people who are in different phases of actually launching.
Speaker 2: Yeah. But, yeah, what are the biggest movers? What were the biggest surprises? What were the biggest changes? Because it seemed like CoreWeave had done well last time and this time, but there's probably some other folks who have taken your feedback and actually made changes.
Speaker 10: Yeah. I think probably the two big ones are that FluidStack debuted at Gold. Like, we did not rate them last time, and they're kind of coming out of nowhere with a really interesting offering for taking other, let's say, bronze- or worse-tier GPUs and making them usable for frontier labs. Like, they have some really, really big customers.
Speaker 1: Mhmm.
Speaker 10: I think Google improved a lot. If the market had really shifted to using strictly Blackwell GPUs and wasn't still stuck on using some of their old Hopper GPUs with their older networking, I would have seen Google, you know, possibly push to Gold. I think they will in the future. They've been really improving. We've got a lot more work to do with AWS. I think they're really focused on EFA, and the rest of the experience is coming along, but their networking is just, look, it represents less than 1% of the cluster TCO by our calculations. And the users tell us more than half of their debugging time is trying to figure out AWS's custom networking on some of these clusters. So we'll see who takes the feedback and who doesn't going forward.
Speaker 2: That's gotta be an insane finding.
Speaker 10: Like, we include a lot of stuff from those customers in there, as opposed to just, you know, a micro-benchmark that we run on the network ourselves.
Speaker 2: Yeah. That's super interesting, Jordan.
Speaker 1: Wanted to get your read on depreciation schedules. I feel like there's a big debate, and it's something that's gonna be critical to the health of a lot of these players. And as chips get better, you would hope that the depreciation schedules, you know, would expand somewhat. But we're seeing it all over the board in terms of how different companies, from hyperscalers to some of these new entrants, are actually planning around depreciation. And I would love to hear how you think about it broadly.
Speaker 10: Okay. I think broadly speaking, depreciation is a financial metric that's hard for us to measure, and it varies between the different providers quite substantially, where people take different approaches, from three to six years or longer. In terms of the lifespan of the actual technology, there's two ways to think about this. One is how long does the thing keep running well before it starts to have a lot of wear-out failures, like the GPU and the system. And then the other one is how long is it useful on a performance-per-dollar basis. And I think that second one really depends on how good future GPUs are. So right now, we are seeing providers who are limited on power, who literally have no more floor space in their data center, rip out perfectly good GPUs that are making them plenty of money to replace them with the latest and greatest, because they can make even more money.
Speaker 2: Wow. And
Speaker 10: so if that power constraint goes away or the newest, latest, and greatest GPUs don't give you that massive performance increase, then I think people will sweat the assets for longer functionally in the data center. But I think a lot of people on the financial side are playing a guessing game or they're designing it based on how they wanna design the financials of their business and not necessarily taking into account exactly how the GPUs perform or what's coming down the pipe.
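The power-constrained replacement logic Jordan describes can be sketched with a back-of-the-envelope calculation. All numbers below are illustrative assumptions (hypothetical rental prices and all-in power draws, not SemiAnalysis figures), just to show why, under a fixed power budget, a provider might rip out GPUs that are still profitable:

```python
def revenue_per_megawatt(rental_price_per_gpu_hour, gpu_power_kw):
    """Annual rental revenue per MW of data-center power, assuming full utilization.

    gpu_power_kw is the all-in draw per GPU (including networking, cooling,
    and facility overhead), so 1 MW supports 1000 / gpu_power_kw GPUs.
    """
    gpus_per_mw = 1000.0 / gpu_power_kw
    hours_per_year = 24 * 365  # 8760
    return gpus_per_mw * rental_price_per_gpu_hour * hours_per_year

# Illustrative assumptions: an older GPU renting at $2/hr with a 10 kW
# all-in footprint, vs. a newer one at $5/hr with a 14 kW footprint.
old_fleet = revenue_per_megawatt(2.0, 10.0)
new_fleet = revenue_per_megawatt(5.0, 14.0)

print(f"old fleet: ${old_fleet:,.0f} per MW-year")
print(f"new fleet: ${new_fleet:,.0f} per MW-year")
# Under these assumptions the newer GPUs earn more per megawatt even though
# the old ones are still profitable on their own -- so when power, not
# capital, is the binding constraint, the swap can pay for itself.
```

The point of the sketch is that the decision flips on whichever resource is scarce: if power frees up, or a new generation's revenue-per-megawatt advantage shrinks, the same arithmetic says to sweat the old assets instead.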
Speaker 1: Is that is that bad?
Speaker 10: I don't know if it's bad. I think it's just pragmatic. Like, if you look at a hyperscaler and you see that they have all of these build-outs of new data centers coming online, and they want to address the needs of that customer base, and things just aren't coming fast enough. Ripping out old GPUs and putting in new ones seems like it's a good play. It seems like it's good for the people who want the best performance. It seems like, you know, they're gonna get more revenue per megawatt with the latest and greatest. Yeah. I don't think it's necessarily good or bad for
Speaker 1: about what about so Satya has talked a lot about wanting a a diverse GPU fleet, fungibility of of the fleet, not wanting to just do exactly what, like, a key customer like OpenAI wants. If OpenAI says, hey. Build this data center for this specific training run. He's not immediately saying, like, yeah, of course, I'll do it. No no questions asked. He's sort of, like, you know, being pragmatic about it. How much do you worry with some of these other neo clouds about being, like, too indexed to a specific type of, like, workload or tech stack given how fast the, you know, space is evolving and ending up in a situation where they just have a bunch of dead weight?
Speaker 10: Yeah. That's a really good question. And I think you're segueing into an article that we got coming out later this week, and a fun interview that some of the guys did. So let me try and not talk about that while still teasing it. Basically, yeah, I think the concept of fungibility is really important to the hyperscalers. And it seems like in some ways, they are exploiting the neo clouds, for better or worse, who have, you know, four sites. And they're gonna take a whole site and have it all one chip, and that's gonna be a bet for three to five years. And we'll see what happens after that. So in some ways, investors kind of view this as getting the AI market beta versus getting some alpha on an individual chip or on an individual workload. And I think, look, there's a lot of upside in this. And to your point, that focus on fungibility or pragmatism may have led to Microsoft, like, losing $320 billion of RPO to Oracle, announced a couple of months ago
Speaker 2: Yeah.
Speaker 10: At reasonable margins. So, you know, I think time will tell on that decision. At SemiAnalysis, just generally speaking, you're gonna come to us and get a pretty bullish take. We were way ahead of the market on that Oracle print last quarter.
Speaker 1: And Let's give it up for bullish takes.
Speaker 10: Patch it big enough.
Speaker 2: Yeah. No. That makes sense. With fleet fungibility, when you see, you know, on ClusterMAX, there is Azure, and then there's also a bunch of neo clouds that have partnerships with Microsoft, like Nebius. Yeah. How does that actually work if you're a lab and you go to Azure and you say, I want GPUs from Azure, and then Satya says, oh, I have some here from Nebius? And then dealing with, like, you know, the intrinsic details of this
Speaker 10: Of the workloads.
Speaker 1: Yeah. Part of me, though, is, like, who do you think is gonna be able to make the better call, Microsoft and Satya, or Oracle, when it comes to capitalizing on all that demand from OpenAI? Right? Who has more information? Who has more experience, you know, working with OpenAI? Who has, you know, experience handling these training runs? It's like... yeah, again, time will tell, right? It could have been, like, the most brilliant move ever from Oracle. But at the same time, I have to imagine... Basically, we had a graph last week on kind of understanding where the AI players sit on this idea of, like, how much they need AGI versus don't need AGI, and how much they believe in AGI and how much they don't. And we were putting Larry, like, way up in the left corner of, like, needs AGI but doesn't necessarily believe
Speaker 2: or hasn't hasn't issued public statements about
Speaker 1: Hasn't issued public statements at least. But, I just feel like very much like a a hail Mary from hail Mary from Oracle. And if they if if the depreciation schedules are quite a bit different than what they're planning, they will OpenAI will have gotten the much better end of that deal. And I think Satya could end up looking pretty smart for not going after it. But who knows? Yeah.
Speaker 10: I mean, I tend to agree. I don't think Satya's looked dumb very many times in his career. I think he gives very thoughtful answers when it comes to this stuff. I do think, in some respect, it's a matter of timing. Like, if you look at what Microsoft's build-out looks like right now, they are accelerating, and that's on the back of OpenAI, but also some of their SaaS products that are using this technology. They have access to all of OpenAI's IP through 2032, so they're certainly exposed to any upside on that side. And, yeah, it's a matter of when you wanna take your free cash flow down to zero to do a build-out. And Oracle seems to be the first one to have moved while Microsoft flinched a little bit. And then
Speaker 1: It's a game of chicken. Hyperscale chicken. What did you think about... feel free to give your personal views, you don't have to share the official point of view of SemiAnalysis unless you want to. But backstop-gate last week. I think you guys had probably been privy to conversations or heard... you know, it's not the first time that the idea has been floated around. I mean, we're seeing sovereign AI in other countries. I'm assuming we'll see some version of it here. But what do you rate the likelihood that we could see a scenario where the government would effectively be guaranteeing loans for AI, you know, data center build-outs?
Speaker 10: Yeah. I don't know how much I have an opinion on that. When I read into it, it seemed like they weren't actually asking for a backstop or a bailout.
Speaker 1: Yeah. And I agree with you there. Well, our understanding of it was it wasn't asking for a backstop on OpenAI or any type of bailout. It was just, like, the language implied that they would be open to a world where data centers would be treated like some other categories around national security, like manufacturing, where the government would basically wanna encourage the development of data centers by saying, like, we're gonna guarantee these loans, which would just encourage lenders to have more confidence to lend against assets that might depreciate faster than people think and present some risk.
Speaker 10: Yeah. And I 100% agree in terms of guarantees. We see this elsewhere in the world quite substantially, where there's governments in the Middle East, Norway, Canada to a small extent, Indonesia, Korea, Japan, India, they're all, like, getting in on building a neo cloud. And the US government has not gone out and built a neo cloud, but they certainly build supercomputers for defense, intelligence, research. And they certainly have a vibrant community of companies. So I think there's more the government can do to accelerate the time in which you can build a data center behind the meter, which a lot of places are trying to do right now without interconnect. At SemiAnalysis, we've got an energy model coming soon that's sort of a companion to our data center model. It's really exciting. We've got guys on the team working on that who were grid operators in their previous jobs. And they've got a lot to say about the lead times to get turbines right now, or how much red tape there is to get nuclear, wind, solar online, how you need guarantees from the utility even if you're behind the meter, as some sort of, you know, future interconnect agreement or some sort of stabilizing power load. I mean, there's just so many creative things that could be done with the grid in the US that I think are kind of fundamental before you can have discussions about financing a lot of stuff. Because, look, when we get in conversations with investors, I think there's a whole bunch of capital to be unlocked in the private equity space that is just getting started. And it feels like early innings, because VCs are the first ones to move, and they have in the neo cloud space.
Speaker 2: Mhmm.
Speaker 10: But we haven't seen a lot of, like, institutional players decide to make a bet on this the way they have with energy, with housing, with commodities. And if GPUs are gonna be fundamental and they're gonna be a commodity, and people wanna bring, like, an H100 index to the market, you're gonna see a lot of, like, institutional capital try and pour its way in here. Yeah. And, like, functionally, how does that work? I think it actually comes in terms of, like, accelerating the existing data center projects as opposed to having the US just start to build something new.
Speaker 2: Oh, sure.
Speaker 10: So to that extent, like, totally agree with everything. I think the market kind of shuddered at those comments because it seemed like OpenAI is asking the US government to pick a winner, when that is clearly not the case. Like, there are winners and losers being shaken out in, like, chat, research, coding. Yeah. But we're in the early innings of video and image generation, of materials science, of drug discovery, of weather prediction, all of these models that are showing incredible early results from transformer-based architectures running on the same GPUs as the chat models, that could be the, you know, lion's share of compute in the future if suddenly everybody wants real-time generated video on their feed, or making small weather predictions for everybody who's getting ready to go skiing or something.
Speaker 1: Like Yeah. Fascinating. I would like to see more AI on the slopes.
Speaker 2: On the slopes.
Speaker 1: I like the sound of that.
Speaker 2: Like the to Vail.
Speaker 1: SemiAnalysis can do a ski trip, invite all the different players in, bringing AI to the mountains. That's fascinating. Please invite us.
Speaker 2: Yeah. Very excited for the energy model. Please call it PowderMax. Whoever's working on that, as soon as you launch EnergyMax, we wanna talk to you guys. We always enjoy talking to everyone at SemiAnalysis. Such thorough research, such deep insights. We really appreciate
Speaker 1: it. You take time. Yes. Chads. Great to see you, Jordan. Thanks for coming on. We'll talk to soon.
Speaker 10: See you guys too. Yeah. You're welcome to Whistler anytime.
Speaker 2: Amazing. Amazing. Let me tell you about adquick.com. Out-of-home advertising, easy and measurable. Say goodbye to the headaches of out-of-home advertising. Only AdQuick combines technology, out-of-home expertise, and data to enable efficient, seamless ad buying across the globe. Our next guest is Isaiah Taylor from Valar Atomics. Congratulations, Isaiah. Woah. How are you doing?
Speaker 1: Look at
Speaker 2: this backdrop. You look fantastic. Congratulations. Second time on the show. Give us give us the story.