SemiAnalysis's Jordan Nanos on hyperscaler IT asset life extension, GPU depreciation, and Microsoft's Azure strategy
Nov 12, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Jordan Nanos
Speaker 1: Break it down. Give us some notes. Yeah.
Speaker 5: No, I don't think you guys got anything wrong, but I just wanted to mention that we put out a companion article that comes out with the interview
Speaker 1: Oh, fantastic.
Speaker 5: Dylan was in there. He's been called out by some other people on Twitter for hunting around in the background while they're interviewing the Azure teams.
Speaker 1: Wait, Dylan Patel, you mean? Yeah. Oh, we gotta go through frame by frame and find him. Like, oh, he's gonna go look. Because, wait, if we actually break down what a Dylan Patel scoop looks like when you're on-site and security turns their back: is he looking for, like, oh, instead of this NVIDIA switch, they're using AMD's? I've seen him call stuff out where it's like, wait a minute, this company is using a different company's technology. Is that what he's looking for?
Speaker 5: Yeah. I mean, I don't know if he was looking for the Credo cables or anything. Those are obvious with the purple housing. But if you look at his last tweet, he just said, to be clear, while Scott was explaining the data center, he was super intently focused on figuring out the fiber patch panel config. We know about some providers that have gone out and bought cables, like 20,000, 40,000 cables, and then they mess up the config of the patch panel, and they end up five feet too short. And they just have to send them back and get all new cables,
Speaker 1: which is, like Oh.
Speaker 5: Not a negligible amount of money. These cables are, like, at least a thousand bucks apiece. Yeah. They're really heavy. We've been talking internally about how many cables you can bench.
Speaker 1: Okay. Okay. There we go. Yeah. Dylan Patel seems like a bit of the fox in the Microsoft henhouse, and they really shouldn't have let Dylan Patel into the new data center, because you know he's gonna find some stuff. There's gonna be some scoops. Like, I'm sure there's a PR team at Microsoft that's like, okay, here are the talking points. We're gonna keep him here. He's not gonna look behind this curtain. Like, there's a body buried over there. Don't go in that closet. And Dylan's, I'm sure, on the case. He's quickly becoming the greatest investigative journalist of our generation.
Speaker 2: Give us some more insight on the depreciation comments.
Speaker 5: Yeah. Definitely. So there's a section of the article that kinda goes through this. I think you guys asked me this on Monday. I didn't have a properly prepared answer because I actually didn't know what Michael Burry had been tweeting about and kind of talking about with this. So maybe for background: Burry is claiming that the hyperscalers, including Meta, but also Azure, Oracle, Google, are artificially boosting their earnings by extending the useful life of their IT assets.
Speaker 6: Mhmm.
Speaker 5: So you can see from 2020 when they would report their numbers, these IT assets like servers, switches, storage would be three to five years
Speaker 1: Yeah.
Speaker 5: On a life cycle, and they've now extended that to five, five and a half, or six all the way across the board. In some ways, I think this is a bit of a, you know, game of catch up. If one provider does that, everybody else has to follow suit.
Speaker 4: Sure.
Speaker 5: But we go into detail in the article about what this actually means. And Burry's argument, that this is boosting their earnings by understating the amount of depreciation that's going on on these GPUs, is really predicated on NVIDIA's comments that the product cycle is now two to three years long. Right?
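The accounting mechanism in Burry's claim can be sketched in a few lines of Python. The $10B CapEx figure and the exact 3-year vs 6-year schedules below are illustrative assumptions for a straight-line depreciation example, not any hyperscaler's reported numbers:

```python
# Hedged sketch: how extending an asset's useful life lowers annual
# depreciation expense under straight-line accounting with zero salvage value.
# The $10B figure and the 3- vs 6-year schedules are illustrative only.

def annual_straight_line(cost: float, useful_life_years: int) -> float:
    """Annual depreciation expense, straight-line, zero salvage value."""
    return cost / useful_life_years

capex = 10_000_000_000  # hypothetical $10B of servers, switches, storage

short = annual_straight_line(capex, 3)  # old 3-year schedule
long = annual_straight_line(capex, 6)   # extended 6-year schedule

# The difference flows directly into reported operating income each year.
print(f"3-year schedule: ${short / 1e9:.2f}B/yr")
print(f"6-year schedule: ${long / 1e9:.2f}B/yr")
print(f"Earnings boost:  ${(short - long) / 1e9:.2f}B/yr")
```

The same CapEx gets expensed either way; extending the schedule only shifts when it hits the income statement, which is why the argument hinges on whether the hardware really wears out sooner.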
Speaker 1: Sure. Sure.
Speaker 5: And I think that is is, like
Speaker 1: But that's on the development of a new chip. Like, from my perspective, depreciation, although there is the economic question about how much value you can get out of the tokens generated by an A100 that you sell in five years. That's a good question, but that's not how we think about depreciation. Like, if you buy a mechanical arm to put a glass windshield on a car at an automotive factory, you're just wondering how long until that mechanical arm breaks. And I'm sure there's been immense pressure on NVIDIA to make the chips not burn out in two years. They've probably done a lot of work. It's a very expensive chip. It's a very expensive rack. Like, is it that crazy to assume that the fiftieth-percentile lifespan might be five years now?
Speaker 5: No. And there's basically no precedent to say that a chip would fail or wear out, as you're describing it, in two to three years. Yeah. Like, the hardware OEMs have contracts that are standard for three to five years, and they offer extended warranties for six and seven years.
Speaker 1: Mhmm.
Speaker 5: The big supercomputers in the world that run as a total system or even individual servers kind of get life cycled Mhmm. They run for five, six, seven, some of them up to ten years
Speaker 1: Mhmm.
Speaker 5: In production. Right? Not to mention the years it takes to actually turn this thing on. And these are the environments that use liquid cooling and some of the latest and greatest chips, actually similar in comparison to the GB300. And then even if you go to the providers directly and you try to rent a V100 today, which was launched in 2017, right, seven, eight years ago, I can still rent V100s in data centers from providers like Amazon. I mean, there's plenty of A100s for sale right now. So there's nothing, to me. You know, the proof of this argument would be predicated on NVIDIA releasing chips that so drastically outperform the current generation in two to three years that all hyperscalers everywhere are so incentivized to go through another CapEx cycle. They gotta buy all new chips and rip out all the existing ones, and we're still so power constrained that they have to do all that. And that seems like a much farther leap than saying we might be able to run these chips for five or six years in the data centers themselves.
Speaker 1: I wonder, do you have a reaction? So Ben Thompson's been writing about the AI build-out and the bubble, potentially the benefits of a bubble. And one of his, like, bull cases for a bubble was basically that in previous bubbles, you get a glut of IT infrastructure, or even just, you know, steam engines or railroads. But his example was dark fiber. And the idea was if you overbuild, well, then you get a bunch of extra infrastructure that you can use, and it's cheap. And his, like, counter to the AI build-out was that depreciated H100s just aren't as valuable as fully depreciated, you know, fiber lines. But I just don't know if that's true. I feel like if there's a ton of depreciated H100s out there ten years from now, there's still something useful that you can do with them, in the same way that you can push bits across fiber lines. It just won't be superintelligence, but there will be a base load of, you know, generic knowledge retrieval, or just, hey, people just wanna chat. And the fact that it's so cheap, because the assets are fully depreciated, means that you can just have a chatbot interface in every single conversation in every app. It's sort of like a Jevons paradox thing. I'm not saying that the economics will stay the same. The margins will go way down, but there will be a benefit. It's not like these things just disappear after five years. They don't just break.
Speaker 5: Yeah. No, they don't just break. Your comparison makes sense, but there are some nuances compared to the fiber line, which is to say that if new GPUs come along from other providers that are so much more performant than the current ones, then at some level it doesn't make sense to keep the power turned on for the old one, because the operating expense is too much. Like, we estimate this at 30¢ per kilowatt-hour. So if the bottom of the H100 price gets below 30¢ or somewhere close to it, you're probably better off ripping them out and replacing them with something new, even if, for the car analogy, this is still a beater that somebody else could drive for the price of gas and insurance. Right?
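That break-even can be sketched roughly in Python. The wattage, PUE, and electricity price below are illustrative assumptions, not SemiAnalysis's actual model:

```python
# Hedged sketch of the break-even logic: if an old GPU's hourly rental price
# falls toward its hourly power cost, keeping it powered on stops making sense.
# Wattage, PUE, and electricity price are illustrative assumptions.

def hourly_power_cost(gpu_watts: float, pue: float, usd_per_kwh: float) -> float:
    """All-in electricity cost per GPU-hour, including cooling overhead (PUE)."""
    return (gpu_watts / 1000) * pue * usd_per_kwh

# Roughly H100-class: ~700 W card, 1.3 PUE, $0.10/kWh (all assumed figures)
cost = hourly_power_cost(gpu_watts=700, pue=1.3, usd_per_kwh=0.10)
rental = 0.30  # hypothetical bottomed-out market price per GPU-hour

margin = rental - cost
print(f"power cost ≈ ${cost:.3f}/hr, rental ${rental:.2f}/hr, margin ${margin:.3f}/hr")
# Once the margin approaches zero, and before counting staff, networking, and
# real estate, ripping and replacing with a newer chip wins.
```

Note that a ~1 kW GPU makes the per-kWh and per-GPU-hour figures nearly interchangeable, which is why the 30¢/kWh estimate maps onto a 30¢/hr rental floor.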
Speaker 1: Yep. Yeah. I was talking to Jordy about it, and I was saying, like, there is a world where, to put it in really concrete terms that most business people might understand: you have a Diet Coke business, you deliver Diet Cokes, you have a gas car that delivers the Cokes, and you have the opportunity to switch to an electric vehicle. You might do that because bringing down your total cost of ownership, your annual OpEx, would be great, but sometimes companies just don't want to spend the CapEx to actually do that. And so I'm wondering if there's a world where the GPUs get traded down to a point where people are like, yeah, the OpEx is higher, but I'm still running it for this niche use case. Like, there are still mainframes running. There's still on-premise cloud. There are companies that haven't fully moved to the cloud, and I'm just wondering how long some of these GPU racks might just be sitting around where someone gets it cheap, and they're just like, yeah, that's the thing that filters every invoice that we get, and it just runs. It runs Grok 2 on it or whatever, and it's fine. It's good enough.
Speaker 5: Yeah. Well, I mean, look. I put in the article a link to Azure's announcement from September kind of pleading with their users to move workloads off of V100 GPUs that are eight years old. So
Speaker 1: Please. It doesn't yeah.
Speaker 5: Please don't get upset at us when we turn off this instance. I don't think they're doing payroll on these things, but, like, this thing's gonna shut off. So Interesting. I totally take your point. Working in IT for years, there are so many people that come along when there's, like, some sort of last call for the sale of spare parts for these old GPUs and they wanna keep these systems running. They suddenly show up and wanna order a bunch of them. Yeah. And it could be for all sorts of reasons, but usually it's just inertia of not wanting to move the old workloads onto the new stuff.
Speaker 1: We're also completely discounting the nostalgia market. I mean, people like air-cooled Porsches. I imagine that Dylan and you guys are gonna be like, yeah, I need a V100 just, you know, sitting in the office. Like, I don't like this new technology, this new GB300.
Speaker 5: You wanna talk nostalgia, we gotta go back to Kepler or Pascal, you know,
Speaker 1: way before
Speaker 5: those days. I don't know.
Speaker 1: Yeah. Yeah. Yeah.
Speaker 2: Way before that. What about on the chip side? What was most notable about Satya's comments there?
Speaker 5: Yeah. I mean, I find it fascinating that Satya clarified they have access to all IP, including chips and including systems. So it's not just the models, which is what everybody jumps to. I think OpenAI has a ton of IP that's going into their chip program that Azure could take and sell as part of their cloud services if they wanna partner up with Broadcom and just produce a bunch of chips that are, you know, more similar to a TPU than they are to a GPU in the future. Yeah. They've got that optionality.
Speaker 1: I wonder if there's a way to puzzle through that, though. Like, when I was debating with Jordy, I was saying, like, well, obviously OpenAI has a team that works with NVIDIA and with Broadcom and with, you know, maybe Intel in the future, AMD now. Is there a world where the IP lives with the other companies for the next five years or something? It seems maybe tricky, but I'm just wondering how much of the IP actually lives within OpenAI versus their partners?
Speaker 5: Well, I think, so Satya made the point in the podcast that in some cases, Microsoft seeded parts of the OpenAI chip program with IP from their Maia program.
Speaker 1: Oh, sure.
Speaker 5: So I think it's this whole mosaic of where the actual IP sits.
Speaker 1: Okay.
Speaker 5: But at the end of the day, there's gonna be something that comes out of OpenAI that they learn from. And you can even go beyond the chips to the systems that the chips go into, to the network that connects the chips, or to the software, like the runtime and the inference stack, which is to say how they run inference efficiently on these chips. There's a whole stack that gets built from the actual hardware through to the API that produces tokens to the user in the chatbot, and they have access to all of that, which OpenAI would consider proprietary. So maybe to the point you guys were making earlier, comparing to Excel or to Microsoft's other strategies, it seems like this is a case of OpenAI going really fast and executing really quickly, and their moat being speed.
Speaker 1: Mhmm.
Speaker 5: And then you compare that to Microsoft who has an existing moat of distribution.
Speaker 1: Mhmm.
Speaker 5: So they've got 400 data centers in Azure, 70 regions.
Speaker 2: Free cash flow is also a bit of a moat too.
Speaker 5: Yeah. Definitely. They've got all the makings for it. Like, who do you think wins the race to turning on public instances of an OpenAI chip that anybody can rent? Right? OpenAI, or Microsoft Azure with a bit of a head start? Like, I think they have some good optionality there, with OpenAI's program, with the relationship with NVIDIA, with Maia. But the point is that right now, they are losing the race on chips to NVIDIA, to the TPU from Google
Speaker 1: Mhmm.
Speaker 5: Maybe AMD. They use some AMD GPUs right now. So they've gotta go and execute.
Speaker 1: Mhmm.
Speaker 5: Right?
Speaker 2: Do you think Satya expects Azure to compete with an OpenAI cloud in the next few years?
Speaker 5: I think if OpenAI really develops a cloud, absolutely. Because Satya says in the interview that they don't wanna be beholden to one customer. Right? I think he says, like, we have five big deals with five customers, and that's, like, the bulk of Azure's compute right now. And I would say it's really one deal with one customer that is the bulk of the incremental revenue in Azure right now. So, I mean, I go hands-on testing this stuff, and I've talked to 140 different companies about actually using Azure, and they were a distant third compared to just AWS and Google from a hyperscaler perspective. And I talked to Clem from Hugging Face this morning, and he shared some data with us that backs this up. Like, if you look at the downloads from Hugging Face of open source models, the downloads that originate from an Azure IP, it's more than five times less when compared to AWS.
Speaker 1: Wow.
Speaker 5: Right? OpenAI doesn't show up in that; they use their private repos and stuff. But if you look at the long tail of the market, where the long tail represents everybody but OpenAI, basically, they're getting five times less of that business right now because they're so focused on OpenAI. So I think Satya wants to do more than just OpenAI. It's just a matter of actually going and doing it.
Speaker 1: Yeah. Do we have more of a clear narrative now on what motivated the pause? Like, you can see it so clearly in the data center preleased capacity chart in the SemiAnalysis article. It's so obvious what's happening from 2023: you see the blue bars just increasing, increasing, and then just completely flatlining. Everyone else is growing. It seems like this piece, this video with Dwarkesh and Dylan, feels a little bit like him maybe teasing, like, hey, I'm getting back in the race. Like, the headline with The Wall Street Journal is like, I got the biggest data center. That feels like, I'm unpausing. But, hey, right? You're not like, oh, I'm happy to lease, and, by the way, I have the biggest thing. Actually, I own the biggest. It's kind of an odd dichotomy there. And I'm wondering if there's a clear narrative. Was it specifically, like, the nature of the OpenAI relationship that led to the pause, or was it just concern about the overall market or the viability of the technology? Or was he, like, plateau-pilled and had seen that GPT-4.5 wasn't getting adoption, too expensive to serve? Like, do we have a narrative for what motivated the pause yet?
Speaker 5: I don't know if it's clear. I think we have different pieces that we can piece together. The things that Satya says in the interview point towards fungibility, diversification outside of OpenAI, a lack of foresight two years ago into what the actual demand would be. They, like, fishtailed, where they went too far on building ahead of demand
Speaker 1: Mhmm.
Speaker 8: And then
Speaker 5: they wanted to come back, and now they didn't have enough. And so now they're renting from neoclouds, like Nebius, like Lambda, IREN, Oracle directly. They've got Nscale deals. I mean, they're clearly using neoclouds to take on that extra bit of risk. And going forward, I think the result is that it's incredibly bullish for Microsoft actually serving this demand when they have the first right of refusal
Speaker 2: Yeah.
Speaker 5: And OpenAI's growth continues to go crazy, and they are kind of back on track to leading the way with coding models, in a way that they were maybe forecasted not to, with Anthropic coming in and taking a bunch of share there. They are, like, all in on taking down Anthropic as a direct competitor with their Codex models now. Mhmm. So I think Microsoft just flinched a bit. Yeah. And they're back on track, and we're pretty bullish on them growing in the future. But it's notable that Satya talks about things like diversification, around OpenAI and around globally. Like, there's a geopolitical conclusion that you can draw from his response on the question about the pause, where he says, what if I wanna build in India? What if I wanna build in Europe? What if I wanna build, you know, elsewhere? And, therefore, we pause North Carolina. That seems to say that governments around the world can attract investment in data centers if they implement a bunch of harsh regulations on data privacy.
Speaker 1: Yeah.
Speaker 5: And more and more companies are gonna wanna train models based on personally identifiable information or other types of data that's regulated and needs to stay in those countries, and therefore you need GPUs around the world. You know, that could be something that they just kind of learned, and therefore paused North Carolina, spun up something else.
Speaker 1: Yeah.
Speaker 5: Haven't seen that exactly play out on the global deployments, but it could be coming.
Speaker 1: How real is this fungibility-of-the-fleet thing? Because the narrative around Google is that they have a TPU, they're so vertically integrated, they've got DeepMind, and it just feels like when you go to Gemini, you get something that's, like, down to the metal. And then at Microsoft, it's like, pick your model. And then, also, they're subleasing from all these other clouds. But as I see from ClusterMax, like, there is a wide variety of performance metrics and results and, qualitatively, even, like, the feel of these different neoclouds. And so is there some sort of risk that you go to Azure and they put you on the lowest-ranked ClusterMax neocloud, and it's like, I would rather be up here on the top tier? I wanna be subcontracted down to the platinum tier, not the D tier. Is there any risk of that, or is Microsoft actually able to take a neocloud that is in the D tier and give you the Azure level of service?
Speaker 5: Yeah. I think it really depends on at what level you're assessing the fungibility. So I think in some ways, the tokens produced by the model endpoints are almost completely fungible.
Speaker 1: Okay.
Speaker 5: You know, you can find differences in the models in private scenarios. But generally speaking, like, tokens from one provider versus tokens from another provider are effectively the exact same
Speaker 6: Mhmm.
Speaker 5: As long as you're, you know, hitting the, like, quality metric you need from that model deployment on that GPU.
Speaker 1: Yeah. Yeah.
Speaker 5: But if you look at the underlying architecture, like, when Azure claims to add 100,000 GB300s coming online this quarter, I mean, that is not fungible to every single user on a cloud service. Like, GB300s are rented 72 GPUs at a time, in a 135-kilowatt rack. Your workload needs to be ported to ARM because it's a Grace CPU. Mhmm. You need to have the performance-per-dollar benefits of the Blackwell GPU to justify it over the previous generation, Hopper. It needs to use a GPU in the first place. So I spent a bunch of time in the article, and in the interview, talking about the importance of databases, of, like, non-GPU-related services that actually run the web app, that actually store the data. Right?
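The rack-granularity point above reduces to simple arithmetic. The figures follow the numbers quoted in the conversation (72 GPUs per rack, roughly 135 kW per rack); treat them as approximations, not vendor specs:

```python
# Hedged sketch of why rack-scale systems aren't fungible per-GPU: capacity
# comes in whole NVL72-style racks. Figures follow the conversation's quoted
# numbers (72 GPUs, ~135 kW per rack) and are approximations.
import math

GPUS_PER_RACK = 72
RACK_KW = 135

kw_per_gpu = RACK_KW / GPUS_PER_RACK          # power budget per GPU slot
racks_for_100k = math.ceil(100_000 / GPUS_PER_RACK)
total_mw = racks_for_100k * RACK_KW / 1000

print(f"~{kw_per_gpu:.2f} kW per GPU slot")
print(f"100k GPUs ≈ {racks_for_100k} racks, ~{total_mw:.0f} MW")
```

So a "100,000 GPU" claim is really on the order of 1,400 racks and nearly 190 MW of committed power, which is why it can only be absorbed by tenants taking whole racks, not by the long tail of small users.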
Speaker 2: Turbopuffer.
Speaker 5: There's lots of Azure capacity beyond GPUs coming online too. Right? Yeah. You know, I call them CPU data centers now, and they consume a little bit less.
Speaker 1: Yeah. Cool.
Speaker 2: I wanted your take on a couple other things. One, the new Anthropic news, the $50,000,000,000 investment. Any reactions to that?
Speaker 5: Yeah. I think it's notable that they called out FluidStack by name, and they're not calling out the underlying providers, as you said. Like, if somebody like FluidStack can help them deliver on a bunch of what we think is TPUs deployed directly to first party, then there's all sorts of stuff that they can do. Because under the hood, it's pretty clear that FluidStack is deploying these TPUs in TeraWulf data centers in Buffalo, their Lake Mariner facility, and elsewhere in Texas, and then at Cipher Mining in Texas. You gotta pair two press releases together: the one from the underlying provider and FluidStack, and then the one from Anthropic with FluidStack. But the point is that Anthropic is definitely ramping up, maybe not quite at the same level as OpenAI, but they both seem to clearly believe that statement from Greg Brockman: if we had 10x more compute right now, we'd have 10x more revenue. Mhmm. And so, therefore, their constraint is compute. Bring it online and just keep growing.
Speaker 1: That was certainly the case with the early iPhone launch. Like, Apple's earnings were extremely predictable every quarter because they knew: as many as we make, we will sell. And so they were completely supply constrained. It does feel like, on the question of backlogs, Anthropic has just been much less aggressive. Like, they have been saying the biggest number every once in a while, but they haven't gotten into the trillions of backlog. And it feels like with OpenAI, Sam's almost like, oh, yeah, sure, give me all the weight of Stargate, even though that's sort of a separate entity. And he could have easily fended that off and been like, well, that's Stargate, that's sort of a separate thing, that's not all on OpenAI, we need to clarify how this all fits together. But Anthropic's been a little bit quieter on the, like, okay, we have this crazy RPO that's going on all over the tech industry.
Speaker 2: What was your reaction to CoreWeave's quarter? Jim Cramer was going pretty hard.
Speaker 1: What did Cramer say?
Speaker 2: He he was just like
Speaker 1: He was blackmailing?
Speaker 2: It was yeah. Was just a funny interview. He was like, what's going on? What's going on?
Speaker 1: He's like, they doubled revenue, but they sold off. Is that
Speaker 2: what Yeah. Well, I mean, they sold off a ton in the last month.
Speaker 1: Yeah.
Speaker 2: I think, like, yeah, 26%, something like that. But Cramer was saying, like, why are you relying on a Bitcoin miner to, like, help fill your capacity? Are you sure they can deliver? And he was talking about Core Scientific, which CoreWeave attempted to acquire. It got rejected. Yeah. But CoreWeave's response was yesterday.
Speaker 1: Platinum on ClusterMax. That's all I need to know. Buy and hold. Get lost, Cramer.
Speaker 2: Yeah. And the CEO was saying, like, yeah. And he didn't name SemiAnalysis, because I think the CNBC audience is maybe not familiar yet. But he was saying, like, we're platinum rated. We're platinum rated.
Speaker 1: Oh, you said platinum rated? Yeah. Oh, that's amazing.
Speaker 2: He's like
Speaker 1: I love it.
Speaker 5: Yeah. I mean, it's in the earnings calls. It's mentioned, I think, three times. There you go. But that's definitely, yeah, we went into this, the
Speaker 7: We're very
Speaker 1: excited for this. This is big.
Speaker 5: Yeah. I think, like, we went into this. Jeremy and the data center guys, Rake, Dan, they put out a note earlier in the day to the Core Research subscribers that CoreWeave is at risk of short-term delays in their CapEx, driven by their data center partner Core Scientific, but not in their revenue.
Speaker 3: They Mhmm.
Speaker 5: So, yeah, when you're relying on somebody to add 250 megawatts and, you know, they're only gonna get you 150 by the end of the year, that changes your guidance on CapEx.
Speaker 1: And Sure.
Speaker 5: I think, you know, look. This is the reality of the industry right now, which is that when projects are so big, measured in the hundreds of megawatts, small delays or small changes in a plan can really impact, like, short-term financial guidance that people have in place. Mhmm. Personally, I don't think this changes any of the fundamentals of, like, CoreWeave's engineering or a lot of the, like, experience and what customers they have signed. But we saw a very similar reaction when The Information put out an article about Oracle
Speaker 1: Yeah.
Speaker 5: Having a a few delays. Right?
Speaker 1: Yeah.
Speaker 3: Yeah.
Speaker 5: And, you know, people who don't know about the data center industry, they see things like, oh, yeah, these GPUs take a month to come online after they get installed, they're starting to depreciate the asset, but they're not generating revenue yet. Like, what's going on? It's like, yeah, that's typical. I mean, yeah.
Speaker 2: That's completely standard. That's business.
Speaker 5: Yeah. Stuff takes time. I mean, you know, maybe the US federal government is different, but in my previous job, we would try to hand over these supercomputers to the US federal government. It takes a year and a half to pass acceptance.
Speaker 1: But at the same time, if if investors aren't aware of that and then they learn that for the first time, they could be surprised. And so that could be something of what we're seeing in the gyrations in the public markets,
Speaker 3: I suppose.
Speaker 5: It makes sense. And I think there was a broad recognition that the Bitcoin miners have an uphill battle to figure out how to run these facilities when compared to established players like an Equinix or a Digital Realty or, you know, Switch, or somebody that's been running tier three data centers for a long time. Yeah. It's just different.
Speaker 1: Give us an update on the energy model that's coming. What can you tell us? Is it this year, next year? Is it my Christmas present? How does this work?
Speaker 5: There's some previews. I think you should have Jay on the program next time, or bring Jeremy back.
Speaker 1: Yeah. Yeah. I would love to have both of them on to talk about it.
Speaker 5: Yeah. I
Speaker 1: I'm specifically, I'm very, very interested in, you know, just the American energy forecast for the next couple years, because it feels like the entire industry is sort of hinging on, like, the trend changing, in my opinion. It feels like we need to bring up the level of energy production for everything else to happen. And whether we hit some sort of regulatory block or some massive scale block or a capital block or a glut of some sort or some gyration correction, there's so many things that could go wrong, but I'd love to know what the forecast actually is from the folks who have gone so much deeper.
Speaker 5: I'll give you the quick hit. So everybody's trying to do gas turbines behind the meter right now. Right? Mhmm. And you've got a number of different players. But if you look at Schneider, Vertiv, Bloom, one of them, Bloom, is ramping up production. First of all, they're all sold out. So the model will cover, like, exactly how sold out. But, yeah, you wanna look at Bloom ramping up production, and you wanna get Schneider and Vertiv on here and try and figure out exactly why they're not. Let's see what it takes to get those guys to recognize what everybody else in the industry is recognizing right now. And then let's dig into solar, wind, nuclear, hydro, all these alternatives that, you know, you'd like to have in there. Right? Why is Elon putting power plants on the other side of the border and then piping the electricity back to his data center in Memphis? Because of, you know, short-term regulatory approvals.
Speaker 7: Like
Speaker 5: Yeah. There's all of that political stuff that goes on to influence this market.
Speaker 1: I'm I'm so excited for it.
Speaker 2: Well, thank you for coming on
Speaker 1: Yeah.
Speaker 2: Again twice in a week. We'll see what happens. We
Speaker 1: really appreciate
Speaker 2: you taking the time.
Speaker 1: For a three-peat. If you're not already subscribed, what are you doing? Head over to semianalysis.com. Buy the most expensive subscription you can find, especially if you're a venture capitalist. You can't call yourself a VC in the age of AI if you're not paying SemiAnalysis the big bucks.
Speaker 2: Demand it. Couple $100,000 per month.
Speaker 1: Per month. I I demand it.
Speaker 2: Anyways, great. Nice to see you, Jordan. Thanks for coming on.
Speaker 1: We'll see you soon.
Speaker 4: Take care.
Speaker 1: Bye. You mentioned Turbopuffer. I'll tell you about Turbopuffer: search every byte. Serverless vector and full-text search built from first principles on object storage. It's fast, 10x cheaper, and extremely scalable. The soundboard
Speaker 2: is doing very well. They leaked Turbopuffer's revenue recently, and it's quite shocking.
Speaker 1: Well, we have a shocking guest. Brian Halligan from HubSpot joining us. He's the cofounder of HubSpot. Welcome to the stream, Brian. How are you doing?
Speaker 2: Sorry to keep you waiting. Welcome to the show. Never been better.
Speaker 7: How are you guys doing?
Speaker 1: We're doing fantastic. What is the latest in your world? Have you been captured by the AI build-out? Is this something that's keeping you up at night, or are you watching it more as an observer, or are you checked out entirely? Where are you on that spectrum? How much does it get your pulse up?