Luma AI raises $900M Series C at $4B+ valuation and partners with HUMAIN to build 2-gigawatt compute cluster in Saudi Arabia
Nov 19, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Amit Jain
Um, our next guest is already in the Restream waiting room. We have a hard stop at two. We got to run. We got to hop on with New York. So, let's bring in Amit from Luma AI with some massive news. How are you doing? It's been too long. Great to see you again. Welcome to the show.
What's happening?
Give us the news. What happened today? What happened? Break it down for us. Yeah. So, um, we did two massive things. One, Luma raised a $900 million Series C.
Okay. What was the second?
Come on. I'm, I'm sorry. I'm sorry. What? Like, what? [laughter] Was there not another 100 million lying around? You couldn't, like... You're going to make us wait for the Luma $1 billion round. Come on, dude.
I'm very happy to accept friends and family checks.
Okay. Oh, yeah. If you get $100 million from your friends and family to round that out, that'd be fantastic. And then, yeah, what is the second thing?
And, yeah, the second thing is, basically, along with HUMAIN, which is, you know, this AI company being built in Saudi Arabia.
We are building a 2-gigawatt compute cluster that we're going to use to train, uh, you know, multimodal AGI.
This is the big news. The big news. This is much bigger; this is much more important. This is why we actually need the compute. So, you know, the TLDR of what happened here is basically, you know: so far, LLMs and LLM labs have had the right resources, and multimodality and world simulation, these problems, were actually, you know, side projects for most companies. Now there is a lab and there is a company in the world, uh, that has this level of resources and is going right after AGI that can help us in the physical world, AGI that can help us simulate and, you know, uh, generate the universe. So I think, uh, that's actually what happened, basically.
Amazing. Uh so
How do you think about, how do you explain the scale of 2 gigawatts? Because it sounds like two is not a big number.

And is it 2 gigawatts because you're expecting 2 gigawatts' worth of inference, or do you need a particularly big cluster for some sort of pre-training run that you're planning on doing?
So it's both inference and training. Um, but inference is actually, so, you know, the majority of the workloads as we go forward, right? Like, you know, as AI deployment, uh, goes forward, as we mature from just text-only models to models that are able to, like, you know, generate videos, models that are able to explain things to us in video, what's going to happen is most of the workload and tokens will move to video understanding and video generation, and video tends to be, you know, computationally much more intense than language. So we need this level of compute to be able to deploy this technology and to be able to train. Uh, but this is mostly inference. Honestly, even today Luma's inference-to-training compute ratio is already 2 to 1, uh, and we're seeing that ramp actually growing further and further. While we do deep research and train some of the largest models in our space, inference is the one that is actually taking off.

Okay, react to this, uh, this take I got from someone who's also building a world model, a generative world model. Uh, he told me that he believes that, uh, it's more likely that AGI, something fully, uh, paradigm-shifting, emerges from world simulation than merely scaling up next-token prediction, GPT-5, 6, 7, 8, 9. Uh, getting away from text is actually somehow foundationally important to, um, the next major breakthrough in AI as we know it as a whole.
I think getting away from text is a mistake.
Uh, [clears throat] we need to build models that combine audio, video, language, and image. So, like, you know, we need to build things that operate like the human brain. If you remove text, you remove the entire interpretation of human logic and, like, you know, reasoning and those kinds of things. So we need the physics that comes from video. We need the causality that comes from video. And we need the text, which actually makes all of this interpretable and logically connected, you know, across the world. So no, I think what we need to do is build these joint unified models.

But, uh, on the simulation side I agree, and I think that's really, really important, because think about robots, or think about systems, you know, how they would operate, right? Like, they need to be able to understand the world. So this is world understanding, which is where world models are going to be very, very powerful, and multimodal models are going to be very powerful. And second is simulation: being able to run the process or idea in your head and drawing out conclusions, right? You know, "What if I go 20 meters this way, uh, would I fall?" This is a simple question, but as robots become more general-purpose and in our day-to-day lives, we need this level of simulation capability in their heads. So generative models give you simulation capability, right? Simulation is extremely important.

Second thing is, LLMs are really good at things that can be represented more or less fully in text: code, analysis, these kinds of things. But when we think of the physical world, especially acts like designing, uh, you know, manufacturing, these kinds of, uh, topics. One of the things we think a lot about at Luma is manufacturing of a jet engine, right, or manufacturing of a rocket engine. These are some of the most complex things humans do, and it takes a decade to build one. Imagine having models that are able to run these physical simulations and get to an answer.
It's not about the visuals, it's about getting to the right answer. People do that in CAD, people do that in, like, you know, software today. But it's, like, very, uh, inaccurate. But if you're able to build models that can accelerate the building of these complex systems, humanity has a chance at, like, you know, building better and better things for ourselves, for our planet. So that is why simulation is really important, and that's why multimodality is really important. And text is just the first step. Text is like, you know, the 1990s internet. Then we got images on the internet, then we got videos on the internet, and today, like, you know, video is the internet, for humans at least.

Um, AI will not be any different.
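The 2-gigawatt figure discussed above can be put in rough perspective with a back-of-envelope sketch. The per-chip power draw and the overhead multiplier here are illustrative assumptions, not figures from the interview:

```python
# Back-of-envelope: what does a 2-gigawatt cluster mean in accelerators?
# All numbers below are illustrative assumptions, not from the interview.

CLUSTER_POWER_W = 2e9    # 2 gigawatts of total facility power
CHIP_POWER_W = 700.0     # assumed draw of one H100-class accelerator
OVERHEAD = 2.0           # assumed multiplier for host CPUs, networking, cooling

chips = CLUSTER_POWER_W / (CHIP_POWER_W * OVERHEAD)
print(f"~{chips / 1e6:.1f} million accelerators")  # prints ~1.4 million accelerators
```

Under these assumed numbers, 2 GW corresponds to on the order of a million-plus accelerators, which is some indication of why the buildout discussed later in the interview stretches over multiple years.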
Yeah, um, last question from my side. Uh, what is the actual timeline for building a 2-gigawatt cluster?
Yeah, where will the majority of the infrastructure be?
When can I see it? When can I go inside? I can be trusted.
So some of it already exists. So, by the way, we are building this, uh, in partnership with HUMAIN in Saudi Arabia. Yep.
And, uh, today it was announced here. So we're in DC right now, uh, for the, um, US-Saudi Investment Forum.
Oh no.
And it was announced, uh, by President Trump and Crown Prince, uh, Mohammed bin Salman. Um, so the data center is going to be built in Saudi Arabia. Uh, quite a lot of capacity is actually already available, and Luma is actually an active customer using that today. But the deployment of 2 gigawatts is going to take time. That's an absolutely colossal amount of power and infrastructure that needs to be built, starting with 2026, and currently we believe that, like, you know, by, uh, the end of '27 or early '28 we will have the majority of the capacity at hand, and, uh, like, you know, we'll go from there.
Fantastic. Uh, well, thank you so much for taking the time while you're traveling to come chat with us and break down what's going on. Uh, congratulations on, uh, the amazing news, and, uh, good luck with the next phase. I'm sure there's a lot going on.
Next time you call in, come call in from, uh, from Saudi.
That'd be amazing. [laughter] We'd love it, from the desert. That'd be amazing.
The first time actually I was on TBPN, I was in Saudi.
Oh, no way.
There we go. We already did it. [laughter]
Well, check that box.
Next time, next time I want, you know, one of those 4x4s, uh, you know, call in from the desert, from, from HUMAIN. I want, I want to be live on the ground with you.
Amazing. Great to see you again. Congrats on the progress. Thank you so much for jumping on.
Uh we have to hop on with New York. But first we have one post we got to pull up.
One post
and it's a post that I made.
You made
Right when I saw that they beat earnings, they have traded up. The stock is up 3.91%. Uh, massive, massive. This is your prediction, uh, one of your, one of your many predictions, but
This is the only data.
This is the only data you need to know. You know, uh, you said this: "I think he's going to beat earnings because he's drinking beers." And, uh, and Ev was like, "Yeah, you belong in a pod shop." And he was saying it sarcastically. Like, you know, to be in a real hedge fund pod shop, you have to be much more quantitative than that. Turns out you don't. Turns out the vibe analysis works. Take it all in. Absolutely. Uh, thank you to everyone for tuning in and watching our show. Leave us five stars on Apple Podcasts and Spotify, and we will see you tomorrow.
Global economy continues.
Continues. The party continues, folks. White suits tomorrow.
Gabe in the chat. Gabe getting drunk. Drink responsibly.