Karman Industries launches HPU to cut data center cooling energy by 25–100% using supercritical CO₂
Jan 14, 2026 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring David Tearse
Public.com: investing for those who take it seriously. We've got stocks, options, bonds, crypto, treasuries, and more, with great customer service. Our next guest is David Tearse from Karman Industries. He's coming in. First, I'm going to tell you about Cognition, the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. Good to see you. Welcome to the show. First time on the show.
First time on the show.
Uh thank you so much for coming in person. Uh please kick us off with an introduction on yourself and the company.
Yeah, David Tearse, uh, co-founder and CEO of Karman Industries. Uh, we actually launched our heat processing unit today, an HPU, and this is a cooling system designed for gigawatt-scale AI factories.
Okay. Cooling the entire data center, the individual chip, both? What do you...
All of it. So typically a lot of these are cooled with a similar loop, because you've got high-bandwidth memory, you've got GPUs, you've got air handling, all that type of stuff.
All of this is about reducing the complexity of the infrastructure. Okay,
So our overall solution is actually a 10-megawatt modularized solution, and that's very different from the 1-to-2-megawatt ones among the existing solutions out there.
And we reduce that mechanical yard by about 80%. Mhm.
So your speed to deployment increases rapidly with this.
So, uh, heat is generated as they put electricity through the silicon chip. An old Facebook data center, uh, they would just open the window and blow it out with fans, I assume, you know, air-cooled systems. Now we're seeing a lot of, uh, water cooling. Uh, walk me through, like, try and explain: what is the actual flow of where the heat goes, how it's moved around?
Yeah, absolutely. So pretty much every single electron that goes into a data center gets turned into heat. Yeah.
Comes right into these chips. They generate a huge amount of heat. So now, when we're talking gigawatt scale, this is a gigawatt's worth of heat.
Yeah. We talked about, there's a bathhouse in New York called Bathhouse. And they originally would heat the baths via Bitcoin mining.
There was so much heat coming off of it.
Yeah. And racks used to be in this kind of, like, 5-to-20-kilowatt-per-rack range. Now we've jumped up to 120 kilowatts per rack. The coming Vera Rubin will be 600 kilowatts per rack, and they're trending towards a megawatt in a rack.
So when you think of how much power that is, a megawatt is a thousand homes' consumption. Yeah.
And putting that in a rack
And that all gets turned into heat. So you have to move that heat the most efficient way possible. And today, 20 to 30% of your energy is going towards cooling.
That is an opex constraint. That is a thermal constraint. And you're just not generating revenue with that. Yeah. So we're trying to shift that, and we reduce energy consumption on the cooling side by a minimum of 25%. Yeah. Now you have more headroom that you can use towards compute.
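A rough back-of-envelope sketch of that headroom claim, purely illustrative: the 20-30% cooling share and the 25% minimum reduction are the figures quoted above, and the 1 GW facility is the gigawatt-scale example used throughout the conversation.

```python
# Back-of-envelope: power freed for compute by a more efficient cooler.
# Quoted figures: cooling consumes 20-30% of site energy; Karman claims
# at least a 25% cut in that cooling energy.

facility_mw = 1000        # 1 GW AI factory
cooling_share = 0.25      # midpoint of the quoted 20-30% range
cooling_cut = 0.25        # claimed minimum cooling-energy reduction

cooling_mw = facility_mw * cooling_share   # ~250 MW spent on cooling
freed_mw = cooling_mw * cooling_cut        # ~62.5 MW freed for compute

print(f"Cooling load:      {cooling_mw:.0f} MW")
print(f"Freed for compute: {freed_mw:.1f} MW")
```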
Yeah. And what's the material? What's the substrate for extracting the heat: air, water, something else?
Yeah. So we interface with the existing systems that are there. We integrate with the glycol loops that go to the chip level, the CDU side of it. Okay.
But inside our actual system, we're using supercritical CO₂. So, we're running a high-speed compressor that was developed in-house by our team. We have a huge amount of history from SpaceX and Rocket Lab; about a third of our team comes from there. And we've really designed this from the ground up. Our first one we got up and running in eight months with a four-person team.
Ran it at 30,000 RPM. Uh, the next system will be coming together in Q2 here. And yeah, we're using supercritical CO₂ due to its high
Energy density. So it can absorb the heat. It absorbs the energy, it heats up, and then you move it somewhere else.
Yeah, we remove it. We vent it to ambient. But the reason we're calling this a heat processing unit is because
today we view heat as a liability and as a constraint,
but heat is energy, and it's an asset. And so we're building the platform that allows you to remove that heat from your system as efficiently as possible. Yeah.
And it opens the door for future use cases of heat reuse.
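To put rough numbers on that absorb-and-move loop for a single 10 MW module, a minimal sizing sketch. The specific heat and temperature rise here are illustrative placeholders, since supercritical CO₂'s properties swing sharply near its critical point (about 31°C and 74 bar); these are not Karman's actual operating parameters.

```python
# Rough sizing: working-fluid mass flow needed to carry 10 MW of heat.
# Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT)

q_watts = 10e6      # one 10 MW HPU module (quoted)
cp = 2000.0         # J/(kg*K) -- ASSUMED placeholder for sCO2
delta_t = 20.0      # K rise across the loop -- ASSUMED

m_dot = q_watts / (cp * delta_t)
print(f"Mass flow: {m_dot:.0f} kg/s")   # ~250 kg/s under these assumptions
```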
Sure. Yeah. I want to talk to you about that, because I've seen some of the hyperscalers put out statements saying, there is waste heat from our data centers, but we're going to be able to pipe it into the local community, basically give everyone free heat in their homes. And that sounds really good, as long as, I guess, it's clean air. Um, but it seems like a good use. What does it take to actually go from a data center in the middle of nowhere to a data center that's closer to a bunch of homes, to get the energy, get the waste heat there, and maybe just warm up someone's house during the winter? Yeah, I think Europe will be ahead of us on that, just due to the proximity of those data centers to urban locations, just smaller countries, right?
So where we also see a great opportunity is to do waste heat back to electricity.
Oh, interesting.
And actually supply that back to the data center. So in your colder regions, at night, during the winter, you'll have a big enough delta T where you can actually do that with our system. Okay.
So that now becomes additional headroom that you can use towards compute. Okay. As well as, then, I kind of half-seriously, half-jokingly say that right now data centers are "not in my backyard," that's the vibe that everyone has. But what if we built communities around these, and you have free utilities if you live there?
That would be huge. People would... we were talking about this yesterday. Just, like, if you hear a data center coming into your backyard, anywhere in your town, there's no... for a lot of people, there's not going to be direct upside. Correct. Just because, even if it's not there, I can still use all the different AI tools. Right. I'm not, like, limited just because there's no data center even in my state.
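On the winter-night delta-T point: the share of waste heat a heat engine can turn back into electricity is capped by the Carnot limit, which grows as the gap between the heat source and the ambient sink widens. A minimal sketch with assumed temperatures; the coolant and ambient values are illustrative, not Karman's specs.

```python
# Carnot ceiling on waste-heat-to-electricity recovery.
# eta_max = 1 - T_cold / T_hot, with temperatures in kelvin.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

# Assumed: ~60 C coolant return; compare a mild day vs. a winter night.
for ambient_c in (25.0, -10.0):
    eta = carnot_efficiency(60.0, ambient_c)
    print(f"Ambient {ambient_c:+.0f} C -> Carnot ceiling {eta:.1%}")
# Real cycles recover only a fraction of this ceiling, but the trend
# holds: the cold-night case roughly doubles the theoretical recovery.
```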
So I like that. Uh talk about discovering the opportunity, the journey to getting here.
Uh, maybe your background. When we met, you were in, uh, this was probably a couple years ago at this point, you were in the idea maze, looking at a lot of, uh, different opportunities, and, uh, it seems like you found a super exciting one.
Yeah. So kind of going through that, yeah, I was an entrepreneur in residence at Riot Ventures, looking for what I was going to build. Yeah. Shout out Will. And it really takes people that are true industry experts, world-class engineers, to develop something new. And I ended up partnering up with my co-founder, Dr. C.J. Kalra, who had spent pretty much his entire career in the thermal energy space. Most recently head of technology at Antora. Prior to that, um, worked at a couple different startups, but started his career at GE Research working on supercritical CO₂.
So that's exactly what we're doing here with our system. When we started working together, it was looking at this broader thermal infrastructure layer.
50% of all end-use energy goes to thermal management. We're either heating something up or we're cooling something down.
And we're doing it with outdated technology.
So we're leveraging the tens if not hundreds of billions of dollars that have been spent across turbomachinery in the rocket industry, high-speed power electronics, silicon carbide, permanent-magnet motors, and the EV revolution, packaging this all together and bringing it to the most pressing need, which is the thermal management of data centers right now. This is a huge problem for them.
Uh, how have... have you been bouncing all around the map, checking, like, on the ground at various data center projects?
Yeah, a little bit. Now, these things sometimes are more heavily guarded than Fort Knox. You know, these guys have machine guns outside of these things. It's hilarious.
Yeah. No, I know. But once you're, uh, once you're, you know, working with somebody and, like, actually doing a deal, I'm sure the doors get opened.
Yeah, absolutely. And we're going to be launching these actual deployments starting later this year. Uh, we're standing up our first factory, Gigaworks 1, which will be a thousand mega... or sorry, 100 units per year at 10 megawatts each. This is a gigawatt of capacity right out the gate, because the demand is so extreme compared to where the supply chain is right now.
Yeah. So if we see one of these data centers, maybe it's 150 megawatts, maybe it's a gigawatt. Um, how many heat processing units would they buy from you? Is it a modular system where they might be using some legacy, uh, system as well and then, uh, bring you on incrementally, or do they need to go all in and work with you exclusively?
No, we're very much plug-and-play with the existing solutions that are out there. So if you have an existing data center with an existing cooling system, we could phase in our system over time. Um, but yes, a lot of these are clean-sheet designs that are going up right now, and right now this is one of the biggest bottlenecks for their deployments.
Okay. So yeah, there's obviously just a bottleneck of, like, if you just can't get any sort of cooling, uh, you know, from the market, like, you'll just be delayed. Uh, talk about the actual economics of someone who goes with you for a new build. Is this higher capex, lower opex over time, or how should a CFO who's working on a new data center think about working with you and the trade-offs involved?
Yeah, absolutely. So the actual capex of our system is the same as existing solutions out there,
and the snowballing effect of reduced size, reduced infrastructure, reduced piping makes this a cheaper solution right out the gate,
faster deployments, because time is money on these deployments. As well as, then, when you actually look at the efficiency of our system: in your worst conditions, this is, you need to cool your chips at 20°C in Texas during the middle of the summer, we have a 25% energy reduction in the cooling system
versus the state of the art.
When you start looking at Virginia, some of these, you know, more moderate climates, we're looking at 60 to 100%, uh, improvement in performance of this system, which unlocks huge amounts of headroom.
Yeah. So it's not as much about the opex savings, because we do deliver that, but what we really unlock is the ability to
allocate more of your electricity towards compute,
which, that's the moneymaker in this thing.
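The shape of that CFO math, as a hedged sketch. The capex parity and the 25-to-100% cooling-energy improvement are the claims above (taking the headline's "cut cooling energy by 25-100%" reading); the revenue-per-megawatt figure is a made-up placeholder, only there to show how freed megawatts translate into compute revenue.

```python
# CFO view: same capex, lower cooling energy, so the win is megawatts
# reallocated to revenue-generating compute.

facility_mw = 150           # the 150 MW example raised in the interview
cooling_share = 0.25        # 20-30% of site energy goes to cooling
revenue_per_mw_yr = 5e6     # $/MW-year of compute -- ASSUMED, illustrative

for cut in (0.25, 0.60, 1.00):   # worst case (hot Texas) to best case
    freed_mw = facility_mw * cooling_share * cut
    revenue = freed_mw * revenue_per_mw_yr
    print(f"{cut:.0%} cooling-energy cut -> {freed_mw:5.1f} MW freed "
          f"-> ~${revenue / 1e6:.0f}M/yr of compute")
```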
Yeah, that makes sense. Um,
Talk to me about, uh, manufacturing a heat processing unit. Are there co-packers for this thing? Can you go to a turnkey manufacturer once you have a design or a CAD file, you call in Hadrian or someone, uh, or are you gonna vertically integrate and kind of make these in a warehouse somewhere?
Yeah, we're going to vertically integrate, Okay, a huge amount of this. Yeah. Now, there are certain key components that we have a supply chain for. Sure.
Uh around the motor, around a couple other pieces in there,
tubing and pipe and stuff like that.
So, we're taking very much the SpaceX approach. People think SpaceX vertically integrated from day one. They didn't. They bought a lot of things off the shelf, and then they looked at where they could either make an improvement or bring the cost down. And we're doing that exact same thing.
A big piece of this also is diversity in our supply chain.
We need to make sure that we're not single-string on any one of these components, 'cause the big guys, the hyperscalers, are saying, you know, you need to produce a lot of these and you cannot slow down my deployment. You know, the reason that they're coming to us is because we're speeding up their deployment. So we're making sure that we have
derisked every piece of our supply chain. Our chief operating officer, she came from SpaceX; she was leading all of production at Millennium Space Systems before she joined us. Um, so we have a rockstar team, and that's always what Karman has been about.
Yeah. What do you think... I mean, it sounded like you wanted to have these in production pretty quickly. Uh, what do you think about the lead time, the battle of the press releases, the idea of going out and getting a couple big headline deals done and then using that to either raise money against those, or even just have them pay you upfront, and then that kind of finances the growth? How are you thinking about, like, the investor relations aspect of the business?
Yeah, I mean, I come from an investor background. Uh, we did actually announce the Series A this morning as well.
We got a gong for you. Hit the gong for us.
Give us the details. Give us the details.
It was Riot Ventures. Uh, they were our lead. We closed it back in September. Fantastic. Kind of under the radar. Yeah. It was Riot Ventures, Space VC, Wonder Ventures, and then Sunflower Capital. And we also brought on Pat Gelsinger.
Oh, yeah. As an investor as well,
of Intel. Hit that gong. Congratulations.
Yeah. The gong brings out the technique. It's good. Good technique. Talk to me about the name. It feels like you'd be a space company. Uh, you're not. Is there something more to it, the Kármán line, or is that the reference? What's...
Yes, Theodore von Kármán, famous physicist. He did kind of coin the term for the boundary between our atmosphere and space. Uh, but just his work in supersonic and hypersonic flow, okay, was a lot of, uh, the inspiration for this.
That's amazing. That's amazing. Uh, what's next? You're hiring. Uh, how big is the team? Where do you want to be in a year? Or how are you thinking about growing the business?
Yeah, we're 23 people right now. Uh we have the core team in place to build these first units, get them up and running. The scaling piece will be all technicians to be able to actually manufacture a huge amount of these systems.
Technicians, what, uh, where are you pulling from? Where do people do this work typically? What skills? Is it a trade school? Do they have college degrees? Are they mechanical engineers? Like, who are these people?
Yeah. So the kind of first wave of them is going to be people with, I would say, a very diverse skill set within manufacturing. Okay. And that's because when you're building these first units, you don't actually know exactly where the sticking points are going to be, and you're doing this kind of design-for-manufacturing while you're also standing up the manufacturing piece of it.
So you have to be able to troubleshoot as you're building something out.
Yeah. You want very high-skilled generalist technicians. And then we will move into more specialized ones as we increase the velocity of the actual production side of it.
Are you in the Gundo?
We are actually in Long Beach.
Long Beach.
Okay.
Bigger facilities down there.
Bigger facilities.
Yeah. Yeah. I mean, if you need a lot of space, uh, it feels like a lot of companies have graduated. They go there, they hang out.
How big are the actual systems?
Yep. They're the size of a shipping container.
Okay.
Shipping containers. So we've got to build a hundred of those to do a gigawatt in a year.
Okay.
So it's very doable.
Yeah. And would you actually stack them up, a hundred of them, at a big, uh, data center site?
Yeah, absolutely. I mean, you go look at these data centers today and you do an aerial view of one of them
and you have more square footage taken up by the exterior mechanical yard, all for cooling, than by the data center itself.
Yeah.
So with a one-or-two-megawatt unit, if you're building a gigawatt data center, you've got 500 to 1,000 of them.
Wait, is that the current status? Correct. Okay. Because I would assume that there'd be some sort of, uh, mechanical economy of scale, where instead of, you know, a thousand tubes and a thousand, you know, like, uh, compressors, you can just reduce that by doing one really big one. But there must be some...
But then there's on-site construction with that. It's a custom build. All these were really designed originally for hospitals and universities doing chilled-water systems. You'd put two or three of them on the roof. They work great. These are Johnson Controls, Carrier, Trane-type systems out there
at the 1-megawatt level.
Correct. Yeah.
So you're scaling that up, but still staying in a deployable form factor that can fit on the back of an 18-wheeler.
Yep. And that's what CO2 is great for. Got it.
High density. Uh, we run a high-speed turbocompressor inside there that was designed in-house. That's how we get the 10 megawatts in there. Got it. And then the actual fan system is separate from ours; we've actually decoupled that from our design.
And you're building a thousand of these this year.
100.
100. Okay. Okay. And what's the rough, uh, what's the rough price range for one of these things? You can give me, I know, like, a wide, wide range.
Yeah. You're kind of in that, we'll call it, $750k-a-megawatt-type range. Got it. Very nice.
So it scales up very fast.
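Putting the quoted price and form factor together, a quick arithmetic sketch; the $750k-per-megawatt and 10-MW-per-container figures are from the conversation, and the totals are straightforward multiplication.

```python
# Quoted: ~$750k per megawatt of cooling, 10 MW per container-sized HPU.
price_per_mw = 750_000
unit_mw = 10

unit_price = price_per_mw * unit_mw           # $7.5M per unit
units_per_gw = 1000 // unit_mw                # 100 units per gigawatt
cost_per_gw = unit_price * units_per_gw       # ~$750M of cooling per GW

print(f"Per 10 MW unit: ${unit_price / 1e6:.1f}M")
print(f"Per gigawatt:   {units_per_gw} units, ~${cost_per_gw / 1e6:.0f}M")
# Versus the 500-1,000 one-to-two-megawatt chiller modules mentioned
# above for the same gigawatt site.
```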
Yeah. What, uh, what are, like, the environmental considerations of, uh, compressed CO₂? People worry about emissions, but, uh, what's the net impact of using CO₂?
So CO2 is actually the gold standard for refrigerants.
Okay,
because today they're using R-1234yf, something from Honeywell. While they have tried to push down the global warming potential of those,
they now form forever chemicals over time.
Okay.
So CO₂ has a global warming potential of one; it is your baseline.
It does not form forever chemicals over time as well.
Let's go.
Yeah. Exactly. So, [laughter]
We're going to get... we want to get CO₂ cannons in here.
There we go. Yeah. No, it's actually a great working fluid for many reasons. It's nonflammable. Uh, and then the energy density of it is just dramatically higher than the existing solutions out there. As well as, it's abundant, it's cheap.
Yeah. Interesting. Uh, and so walk me through, like, a data center construction in, I don't know, 2028 or something. Is it possible that they're off water entirely? There's no water cooling, or would that still be a piece of the puzzle?
Yep. So, our system, while you use water, as in, like, the glycol loops,
yeah,
ours is a closed-loop system. Okay.
So, it does not consume water at all. Interesting.
Which is a big
plus for these data centers, because we're getting a huge amount of pushback.
Yeah.
A gigawatt data center can pull over half a million gallons an hour to keep it cool.
It's an astonishing amount of water. Interesting. So we've seen a huge amount of reception from the hyperscalers around closed loop, zero water, doesn't form forever chemicals, natural refrigerant. All of these things speak exactly to how they want to build data centers going forward, as well as that compact package and speed to market. At the end of the day, it's: how fast can I build this?
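That half-million-gallon figure is roughly what evaporative-cooling physics implies at gigawatt scale. A sanity-check sketch: the latent heat of water is a standard constant, and the 1 GW heat load is from the conversation.

```python
# Sanity check: water needed to reject ~1 GW of heat evaporatively.
# Each kilogram of water evaporated carries away ~2.26 MJ.

heat_w = 1e9              # 1 GW of heat to reject
latent_heat = 2.26e6      # J/kg, heat of vaporization of water
kg_per_gallon = 3.785     # 1 US gallon of water is ~3.785 kg

kg_per_s = heat_w / latent_heat                   # ~442 kg/s evaporated
gallons_per_hr = kg_per_s * 3600 / kg_per_gallon  # ~420,000 gal/hr
print(f"~{gallons_per_hr:,.0f} gallons/hour")
# Real towers run below 100% evaporative effectiveness, pushing actual
# draw higher -- consistent with "over half a million gallons an hour."
```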
How are you prioritizing your pipeline? Yeah.
It's kind of... it's like, there's not... I imagine there's some potential, like, mom-and-pop...
There's hyperscalers and there's neoclouds. Straight for the hardcore: go to SemiAnalysis, look at ClusterMAX, call every single person. Right.
Yeah. So, we've definitely been talking to a lot of the big, you know, named people out there, as well as the neoclouds, the ones that are kind of a step below that.
I would say for probably every one of these big gigawatt campuses that you hear one of the hyperscalers announcing, there are three to five that are kind of shadow data centers, that may look like an Amazon warehouse, and they're colocators. They're selling that compute to others. Each one has, I would say, its own unique reasons why they get excited about what we're working on. But yeah, that's actually a very live consideration right now: how do we allocate our 2027 production numbers?
Yeah.
Fantastic.
Where are you trying to... how many units do you want to produce in 2027?
So, this fall we will stand up that factory with an annual run rate of a gigawatt. So in 2027, we'll do about a gigawatt's worth of actual production.
Cool. That's amazing. Well, congratulations. Thank you so much for coming on down to the TBPN Ultradome, and great to have you. It's amazing... uh, this feels like the perfect opportunity for you, and a very, uh, necessary problem to be working on. So, absolutely awesome.
Thanks for having us.
Thank you so much. Let me tell you about Railway. Railway simplifies software deployment. Web apps, servers, and databases run in one place with scaling and monitoring and security built in. And I'm also going to tell you about the New York Stock Exchange. Want to change the world? Raise capital at the New York Stock Exchange. Just do it. Stop making excuses.
No more excuses.
No more excuses. And we are ready for our second guest in the TBPN Ultradome. Today we have Blake from Brinks in the studio with some physical hardware. We have...
Make room. Make room. You want to put it