Dylan Patel breaks down OpenAI's compute ceiling, China's robotics dominance, and AMD's uphill battle
Jun 6, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Dylan Patel
Uh, we actually do have a bit of a live audience. We have four people in the studio today. But we also have a soundboard going, a giant gong. If you have any big numbers to throw out, we'll hit this gong. Why don't you just hit it, John? Dylan's been on a generational run.
Just give us a big number. Hit it for SemiAnalysis. There we go. Got the gong cam. It's so loud. I'm sorry. Uh, what's happening? Great to meet you. It's fantastic. I'm having a fantastic day. That's awesome. I really enjoyed your ChinaTalk episode, the update to the AI lab tier list.
Uh, I asked some folks about that. Um, I have some follow-up questions, so maybe that would be a good place to start. Um, OpenAI, I mean, it sounds like they're not backing down from bigger and bigger training runs.
Uh, but can you give me a little more insight into what it means to do a big training run now? Because they're still pre-training, and the RL stuff is scaling up. Like, what is the shape of the next big run, if that's even the right term to use in this era? Yeah.
So there are efficiency gains on multiple layers, right? With pre-training, you're not simply going to scale it larger and larger as your only avenue of spending resources, but you are constantly getting efficiency gains, right?
Something that lets your model be 10% more efficient to run or train, but involves changing the algorithm. So people are still doing pre-training constantly, but, you know, OpenAI's Orion, or GPT-4.5, was a run so large that they literally can't scale it up any larger until around the end of this year, when Stargate starts being operational, right? They literally just don't have a cluster that's bigger. Um, so it's primarily them evolving their model architecture, continuing to get efficiency gains, right?
Uh, so for example, GPT-4.1 was an efficiency gain over 4o, and they're going to do another one of these sorts of efficiency gains, stepping up in model size somewhat, but not as far as 4.5. So that's the pre-training side, but all eyes are on the reinforcement learning side, right?
That's where they're doing all these crazy things, or cool things, right? Whether it's, you know, o3 specifically for regular reasoning, and o4 is coming out soon enough, or it's deep research, or it's multi-agent systems.
There's tons of work they're doing, spending training compute on training larger and larger models, in the sense of more compute spent on RL, not necessarily model size, right? Because your model size is baked in at pre-training. Yeah.
Uh, with Stargate coming online, is there really no other way around that? I feel like AWS is really big; there are other big clusters out there. Can they just not get access to those, or do they truly not exist, and will Stargate be first of its kind?
You know, I think it's it's funny that my little like autistic obsession became the most political thing in the world, right?
Um, and so, you know, you have obviously the US, China, now the Middle East; there are all these geopolitics here. But of course the politics are between the companies too, right? So there's the whole OpenAI-Microsoft breakup, right? There's a reason Stargate is not in Wisconsin, right?
Microsoft had a huge data center they were building in Wisconsin. That's where it was going to be. But now Stargate is in, you know, Texas, and it's not being done by Microsoft, because OpenAI and Microsoft had all these politics, right?
A lot of which were really well reported, but a lot of which have been sort of silent. Um, but, you know, it's like: is Amazon going to give OpenAI compute? Absolutely not, right? Is Google going to give OpenAI compute? No way, right?
You know, it's like there are companies getting bought by OpenAI where the deal hasn't even closed and they're cutting off access, right? You know, with the whole Windsurf thing.
So, it's like, these things are so political, and little nerdy autistic boys don't know what to do with politics. Yeah. Yeah. How much of that dynamic is driven by just, like, true AGI-pilling at the top level of the hyperscalers, versus just: this is potentially a multi-trillion dollar outcome, and we need to have a player, we need to cut off a competitor? Um, is there a dynamic there? I mean, I don't think those things are mutually exclusive, right?
Like whether you're truly AGI-pilled, like OpenAI or Anthropic type people, where you know you want it for the benefit of humanity, or you're like Andy Jassy or Satya Nadella or, you know, whoever it is, right?
You know, whoever you are at the hyperscalers, at the end of the day, profit is AGI, right? If you have AGI, you hope you can profit off of it, right? The people at the labs would be like, "Well, I don't know if AGI would let us profit off of it.
" Um, but but in the meantime, as we're building towards AGI, that is a, you know, trillion dollar, 10 trillion dollar, multi- trillion dollar thing that you're you you have under your control. Yeah. Do you think we'll be able to tax the profits of AGI even if we can't reap them?
Because I was looking at the Trump-Elon breakup and I was wondering, what does this say about their AGI timelines? Because if you're really AGI-pilled, Elon does not concern himself with deficits. Yeah. If you believe AGI 2027 or AI 2027, the budget deficit doesn't matter, right?
Like, we're going to be printing 20% GDP growth; we'll be able to pay back trillions in interest in just a couple of years. Um, so I think it's quite funny.
Um, you know, this OpenAI, sorry, this Elon and Trump sort of drama. Like, you know who's sitting there like, "Yes, yes, I know who Sam is. Yes, let's go."
Because, like, for a long time Elon was like, I'm not going to allow OpenAI to convert from a nonprofit to a for-profit, right? And he's suing them, right?
And I'm sure part of his calculus for going all in on Trump, and, you know, saying he loves him more than a man can love any other man, like, four months ago, was him thinking he could convince Trump and the IRS to not let them convert. Right?
Now it's, like, very obvious; what's the impediment? They're going to be able to convert. At the same time, Trump was always a Stargate guy. He's a big Stargate guy. He announced the project, right? Trump is a big-number guy, right?
You get the hundred billion and he's like, "Hell yes, hell yeah." Because this is what it's about, right? Yeah. Yeah. Wait, uh, before we move on from OpenAI and the next big run, how real is the narrative that they're going to pull ideas from DeepSeek?
I think FP8 was one of the innovations, and there were a bunch of others. Some of those were specific to the restrictions on chips, but it felt like some of them might be generalizable to just more efficient training runs.
Are there real learnings from DeepSeek for the American labs that aren't GPU-poor, GPU-restricted? Um, so I think there are certain things that DeepSeek did that the rest of the industry, the OpenAIs of the world, had already done, right? FP8 training was one of them, right?
OpenAI has been doing FP8 training since at least 2023, perhaps earlier, right? Now, on the flip side, there are certain things that DeepSeek did that were a bit beyond what the labs had done, right? And one of those things is how sparse a model is, right?
Um, you know, you have a mixture of experts model. You have so many experts in the model, but you don't activate all of them for every forward pass of the user, right? And so GPT-4 was 16 experts, two of them active, so that's a 1:8 ratio. Um, later models ended up being like a 1:32 ratio, but DeepSeek actually went even further; they were 1:64. I wouldn't say that's something the labs weren't planning to do, but DeepSeek definitely did leapfrog a little bit, right? Went a little bit further, as far as publicly available models. And I know Google's models aren't as sparse as OpenAI's, right? And so everyone is headed in that direction, but DeepSeek definitely went a little bit further. So I think some of the more interesting stuff DeepSeek did was around, sort of, making it open what the reinforcement-learning-with-verifiable-reward paradigm is, making it open how to do really efficient inference systems, right? These are the sorts of things that I think DeepSeek did that a lot of people can learn from. Um, and also there are their special attention mechanisms.
There's some interesting work there. Even as far back as the middle of last year, when DeepSeek released a paper, everyone was like, "Wow, this DeepSeek company has really good research, right?" So I think there are things people can learn, but also there's a lot that the labs had already done.
Uh, maybe they didn't go as far, or they did it, but they weren't passing all the cost savings on to the user, right? Because they like to have margins. Yeah.
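The expert-sparsity ratios discussed above (16 experts with 2 active, 1:32, 1:64) come down to simple arithmetic. A minimal sketch; the expert counts here are the illustrative figures from the conversation, not confirmed architecture details:

```python
# Back-of-envelope sketch of mixture-of-experts sparsity.
# Expert counts are illustrative figures from the discussion above,
# not confirmed architecture details.

def sparsity(total_experts: int, active_experts: int) -> str:
    """Return the active:total ratio as '1:N' plus the fraction of
    expert parameters touched per forward pass."""
    ratio = total_experts // active_experts
    fraction = active_experts / total_experts
    return f"1:{ratio} ({fraction:.1%} of expert params active)"

# GPT-4 as described above: 16 experts, 2 active per token.
print(sparsity(16, 2))    # 1:8 (12.5% of expert params active)

# A sparser 1:64-style model: far more total capacity per FLOP,
# since serving cost tracks active params, not total params.
print(sparsity(128, 2))   # 1:64 (1.6% of expert params active)
```

The design point is that a sparser ratio lets total parameter count (capacity) grow while the compute per token stays pinned to the active parameters.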
How much of the DeepSeek narrative, the idea that it happened because China banned high-frequency trading, is real? And should we ban high-frequency trading in America if we want to pull AGI timelines forward? This is great.
Um, I think some of the most cracked people I know at Anthropic and OpenAI are ex-quants, right? Like, Anthropic's best performance engineer, the one who does all the perf work, you know, improves the performance of their stuff, whether it's Trainium or TPUs or GPUs. He's from Jane Street, right?
And it's like, they're missionaries. There are still mercenaries over in the high-frequency trading bucket; with a few laws we could move those mercenaries over. I'm not saying we should do it, I'm just saying I want to know your take. You there? Yeah.
I think that with the mercenaries, right, it's like, well, the labs can just pay more money, right? And they do; they're luring over the people they want with these crazy salaries. I think we're kind of going back and forth on the... can you hear us okay? We good? Yeah, I can hear you okay. Perfect. Um, let's maybe move on to Google.
Um, what was your reaction to the Google I/O news? And, I mean, you put them in a tier. They have this incredible Veo 3 model. How much of that is due to the cornered resource of YouTube? Is that a durable advantage?
We kind of saw the open web get scraped by every single LLM foundation model company. We saw all the code on GitHub get exfiltrated one way or another. Is that something that is going to be a sustainable advantage for them, at least in video generation?
Um, so to some extent, everyone trains on YouTube, right? Like, every company takes YouTube, makes it into transcripts, and trains on it, or even takes the videos and trains on them, right? Every video model company is ripping stuff from YouTube. This is very well understood and known. But, you know, there are limitations on how much you can steal from YouTube, right? One of my favorite clips of all time is when someone's like, "Hey Mira, did OpenAI train on YouTube?" and she's just like... you know the clip.
Yeah, there's definitely some data that got out. But there's the scale of the data with YouTube. It feels like if you were to tokenize it and then compare it to GitHub's tokens, you're looking at orders of magnitude, and that seems important. Right.
I don't think you can just rip all of YouTube like that, right? They have protections against that, right? Sure, you can engineer ways around it for pulling some videos, but YouTube gets 500 hours of video uploaded every... is it like every minute or something? It's insane. Yeah.
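For a rough sense of the orders-of-magnitude claim above, here's a back-of-envelope sketch. Every constant is a loose assumption for illustration (the 500 hours/minute figure is the one cited above), not a measured number:

```python
# Loose back-of-envelope: YouTube transcript tokens vs public code tokens.
# Every constant here is an assumption for illustration only.

UPLOAD_HOURS_PER_MINUTE = 500        # widely cited YouTube upload rate
YEARS_OF_UPLOADS = 15
TOKENS_PER_SPOKEN_HOUR = 10_000      # ~150 wpm speech, ~1.1 tokens/word

total_hours = UPLOAD_HOURS_PER_MINUTE * 60 * 24 * 365 * YEARS_OF_UPLOADS
transcript_tokens = total_hours * TOKENS_PER_SPOKEN_HOUR

PUBLIC_CODE_TOKENS = 1e12            # ~trillion-token ballpark for GitHub

print(f"transcript tokens: {transcript_tokens:.1e}")
print(f"ratio vs code:     {transcript_tokens / PUBLIC_CODE_TOKENS:.0f}x")
# Transcripts alone land in the tens of trillions of tokens; tokenizing
# the video frames themselves would add several more orders of magnitude.
```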
And I mean, just the storage costs alone. Like, you would be able to see the storage center for a copied YouTube from space. Well, speaking of scraping, I'm curious. So Reddit and Anthropic are in a lawsuit right now over Anthropic scraping Reddit. I was surprised that they didn't have a deal in place.
Do you think there was... meanwhile, Gemini has a deal, OpenAI has a deal with Reddit. Do you have any insight into why Anthropic would not even make something happen, when clearly it's a valuable and important platform to train on? Um, I mean, the data is the moat, right?
And if you can get away with using the data without paying for it, then you're going to do that, right? Um, and, you know, I don't know about the legalese, but isn't OpenAI's deal with Reddit like $100 million a year or something insane? Um, Gemini is like 60.
Yeah, they're both around $100 million a year, and that makes up about 20% of Reddit's overall revenue. But of course, that revenue is super high margin. $70 million a year. Yeah. From both. Google is 60. Yeah. Okay. Right.
So, if that's the case, what does that lend you, right? $70 million a year is a ton of either... it's like 70 researchers, right? Or it's like thousands of GPUs, right? Right.
So, it's like, which would you rather do? YOLO and just not pay for the data? And now Anthropic can probably just, you know, fight it out in court for a couple of years before they finally pay Reddit, right? And then, are they going to have to pay as much as Google and OpenAI?
Well, that would seem unfair, right? So they're kind of going to get away with having not spent the money. So maybe that's a good sort of view. Obviously, there's also the view of, well, it's not actually stealing data because it's just training on it, you know, whatever. Yeah. Yeah.
Yeah. Totally. That's interesting. Um, on xAI, you put them in B tier. There's a world where Elon being on the outs with Trump draws more scrutiny to some of his companies, like Tesla and SpaceX. xAI feels completely off the political agenda in Washington.
So maybe this is a bull case for xAI, because Elon has more time to focus on it if he's spending less time in Washington. What's your view? I have a fun anecdote, right.
So, um, I have a buddy who works in manufacturing optimization at Tesla, specifically on how they press the steel parts and, you know, make the body, right? And he's like, "Dude, I haven't had a meeting with Elon in like 9 months," right? It's like, what am I going to do?
Like, he's going to come in. So the thing is, Elon's amazing because he comes in and he's like, "Why are you doing it this way?" Yeah. Why? Why? Why? Do it this way. Yeah. Yeah. And then you have to either do it that way or come up with a really freaking good reason why not.
And he's just such a first-principles thinker. But for like 9 months it was, well, I'm going to continue to drive down the path that I think is best. Yeah. Ideally you want that rapid iteration, right?
It's like, you want him to come in and just get angry at you every day instead of once every nine months. It's actually better. I mean, it depends, right?
Like, maybe it's not every day, maybe it's not every week, but there is a level of: he comes in, he resets all your priors, and you kind of reset your roadmap.
Um, and the flip side is, for the employees, it's like, well, I might now have to go back to working 100 hours a week, whereas when he was gone I could work 40 to 60, right? And it was great. Um, so I know Tesla definitely had a lot of neglect.
Um, I know Neuralink had a little bit. xAI, some, but xAI still had quite a bit of interaction with Elon. Obviously that's just going to grow more. Um, and so I think it's probably a good thing that these companies have Elon coming back and devoting all his time and effort.
Uh, the flip side is that xAI does have government-facing sales, as does Anthropic, as does OpenAI. Now, is xAI going to lose out on those government sales? But at the end of the day, government sales are pretty small relative to private industry. Where do you expect most of xAI's revenue to come from a year from today?
That's a great question.
Um, I don't think it's consumer, right? Because probably most of their revenue today is the Twitter subscription, right? I guess because xAI is X now. So the categories that matter are consumer, developers, and then traditional enterprise, right? People don't seem to be plugging Grok into Cursor or Windsurf, and we haven't seen people vend Grok, but it's just so early that maybe they could win in certain... unless you need anti-woke receipt processing or something. Yeah.
So, at some point, you know, there's going to need to be a lot of revenue on the xAI side. It can't... you know, X, the everything app, can do a lot of work. Maybe they add payments and things like that. I'm curious.
I'm curious if you think the merger is actually net good for X, the social media platform, in the long run. Because my concern is that it's actually X, the social side of the product, that will really suffer over the next few years. But I'm curious what your read is.
Um, I think it was clear before the acquisition that X had a lot of debt, and they didn't generate enough profit to pay it off.
So their EBIT was bad, or not EBIT, sorry, just their earnings, period. Obviously, before interest and taxes it was fine, but they just had so many interest payments. And so that's why I think xAI acquired it. It is beneficial, right? xAI does get access to a data source that no one else has, and, like, the fastest data source out there, right? X is way faster than any other data source. So I think that's hugely beneficial. But think about what the highest-value ways of organizing the world's information and acting upon it are. Today, everyone's highest-revenue thing is code, right? Cursor just hit $500 million ARR, right? And Windsurf just got bought for this crazy amount of money. Even Copilot, at GitHub, at Microsoft, apparently isn't doing $500 million a year in run rate. Yeah. But code is where all the money is being made today. And the question is... I think all of the labs, at least OpenAI and Anthropic, especially Anthropic, believe that code agents, software agents, are where the most money and most progress are going to be made over the next six months.
Um, I think xAI also believes this to a large extent, but they also believe some other things, right? But, you know, I think automating businesses and helping them become more efficient is going to be where most of the money is made, not consumer.
So yeah, I mean, I can maybe guess your answer, but do you think Anthropic needs a dance partner, or any dance partners? Because you could see them going out and getting Pinterest or Snapchat.
I don't know if that would be valuable, but they're kind of the one LLM lab that doesn't have a huge consumer front door to their product. And maybe that doesn't matter on the revenue side, but maybe it matters on the data side. Yeah.
So I think they're just too AGI-pilled to think consumer matters at all, right? Like, they don't care about voice modality, right? They just don't think it matters. And it's like, voice matters a lot, right? At least for consumer applications, right?
They don't care about image generation, right? So, do they need a dance partner? Uh, there's a really good Mexican song called Asia by Solo. I don't know if you know him, but it's got a freaking trumpet and everything. It's such a good song.
Um, and it's like, no, they're going to stay... they're going to dance alone. They're going to dance alone. Uh, staying in the Elonverse: are you worried that cheap robotic arms will take the jobs of humanoid robots? Um, so I think, colloquially, everyone just talks a lot about humanoids.
Um, humanoid robotics. But most of the applications that I think are automatable in the next 2-3 years do not require legs, right? If I go to a warehouse, if I go to a data center, if I go to a factory, the floor is a flat slab of concrete. Yep. Right.
I do not need to spend all this weight and energy and power on legs when I could just throw wheels on it, dude. Like, come on. Um, so that's one side of it. And the other side is human manipulation.
Um, it's actually funny: AIs are going to be way better than us at software before they're going to be good at just manipulating things, right? Like picking things up, right? Hands are freaking incredible.
And no, we aren't even close to human-level hands on a physical basis, let alone the AI that we have in our brains, which developed over hundreds of thousands and millions of years, versus, you know, the language capabilities, which have only evolved over the last 5,000, right?
Um, so I think it's going to be hard for humanoid robots to take off immediately, but there are tons of applications where humanoid robots will still be able to do basic things, tasks with a low risk of getting it wrong, so you can try again.
Uh, and then the other aspect is, you know, what about: I cut the cost of a humanoid robot to one-tenth because I have wheels, so I cut weight down dramatically and it's way more efficient, and instead of having fully formed hands, I just have three gripper fingers, right?
And that's all I need for, you know, data center automation. I just have to plug things in and take them out, right? Unscrew things. Maybe one arm is just a screwdriver and the other is a three-finger gripper, right? And it's like, this is way, way cheaper and specialized to the task, versus a humanoid.
Uh, not that I'm against humanoids. And as far as robot arms, yes, that's part of it, right? A lot of tasks could be done by a stationary robot arm. Um, but the important thing to consider here is that literally all of this will be made in China and none of it will be made in America. Why is that?
Uh, the supply chain for Chinese robotics is... [soundboard]. We're very pro-America here. Yeah, the audience is very pro-America. Um, it's sad, but, you know, the supply chain here... China makes more robots.
You know, if you go back 5 years ago, China made roughly as many robots as Germany, South Korea, Japan, and the US. Now China makes more robots than all of them combined, right? Um, and the cost that they do that at is way lower.
And, well, the Japanese companies argue that the Chinese robots are lower quality. Yeah, but you speedrun up on quality faster than you speedrun down on cost.
Yeah. Um, and the same applies to Germany, although Germany sold some of their best robotics companies to China and all the tech got transferred. But, you know, we don't have the manufacturing supply chains to turn raw goods into motors or into actuators.
Um, and if we can, or when we do, it costs 10x as much. Yeah. And so it's just impossible, absent some huge industrial policy, for the US to compete here.
So it seems like there's almost an analogy to the semiconductor industry, where China has been lagging for decades but has done a lot to at least catch up to the lagging edge.
Are there any policies or investments or industrial policy that China did to at least stay in the game to the degree that they did, that we should be stealing from them and doing here for robots and manufacturing? Ten five-year plans. Back-to-back ten five-year plans.
I think the funny thing is that when you look at industrial policy uh the way China does it sometimes is more capitalistic than the way we do it.
Now, obviously, giving a bunch of money to an industry is not capitalistic, but the exact policy mechanism of the CHIPS Act versus what China does... China's is so much better, right? China does things like: oh, you just get X subsidy for manufacturing a certain amount, right? Or it's a company-by-company deal, and this company gets this money for this plant, right? Or, hey, we're just going to subsidize the land in general for this. Or, hey, if you build a fab that is 28 nanometer, you don't pay taxes for 10 years. Right.
And so stuff like this is what China's done, and that's had them dramatically increase their semiconductor spend. You have things like the national railway company now making power semiconductors, right?
Because they make profit, and they're like, "Well, if I make fabs, then I can blow all my profits from the railway on that, and then all my fab profits will just be profits without tax, right?"
And so now one of their largest railway companies is like the third-largest power chip company in China, and top 10 in the world, and over the next few years they're going to be top five, period, across the whole world.
I think things like this are interesting industrial policy. And probably we could get the car companies, you know, and John Deere and all these automation, mechanized-equipment companies, to start pushing into robotics, right? Because BYD is pushing into robotics, and obviously everyone knows about Unitree, but, you know, Xiaomi is making cars and getting into robotics, right? All these companies are getting into robotics in China because of industrial policy. Um, and they did the same with autos, right?
EVs. They went into EVs because of industrial policy, and they went into semiconductors because of industrial policy. America doesn't have that, right? We just have, maybe, you know, the startups, who are mostly just buying Chinese equipment, repackaging it, and pretending they make it here.
Like Figure. Um, or, you know... well, we also have American VCs who put American flags in their offices. That's got to count for something, you know. I mean, I'm not saying it's not noble. I'm joking. Um, on Unitree: what do you think the perception of Unitree is in China?
Because if you're a believer in the wheels and the different actuators, it feels like they could be more like China's Boston Dynamics, where it's like, oh, a bunch of cool videos, but not a lot of actual applications. But the reaction in America to Unitree videos is, oh my god, there's this crazy wave coming.
Um, but maybe... I love that every video of them is just doing war things. Like, do you really want to scare the West? Totally. Why don't you just have it petting bunnies? Like, you know, trust us, the Chinese robots are not dangerous. They should be doing that. Yeah.
I fully expect that there will be one of those North Korea-style military marches with like a hundred thousand humanoids, and it will just be terrifying, even if the capabilities aren't actually there for real war fighting. Just the demonstration.
Did you see the humanoid marathon in China? Yeah. But anyway, to answer your question, right, Unitree: yes, we see the humanoids, the robot dogs, but they've actually released a robot dog with wheels. So it's just like two sticks and it has wheels.
And they're releasing a humanoid with wheels, actually. Um, they just showed it off at this conference in Singapore that one of the guys on my team was at.
Um, and so they're coming with the mobile manipulators. But also you have to recognize that what they make for $10,000, Western companies make for like $75,000 or $100,000, right? So the quality of hardware they can make for the cost. That's also an important consideration. Yeah.
Um, you said something earlier about the next value in models coming from automating businesses. Is the right framework for that agents?
Is that how someone should conceptualize it, or is it an entirely new paradigm that's less about agent-human collaboration and more just full autonomy? Yeah.
I mean, the dream is always multi-agent systems, right? Many different agents working together and just doing the job.
But today, you know, the time horizon for human-model interaction has been growing rapidly, right? Initially it was like seconds, right? It's getting to minutes, or if you use deep research, like 30-plus minutes, right? And this is going to start extending to many places, like code. If you use Claude Code, you know, the interaction time is like minutes, not seconds, and it can even extend to tens of minutes. I think with these agent systems, it won't be a hard shift, right?
Like, everyone's expecting, oh, we have chatbots, now I have agents. But really it's going to be a blended curve, where, you know, the amount of interaction the human has with the system becomes less and less and less over time. That's a good framework.
Uh, sorry to move back to China, but Alibaba, Qwen. I mean, it seems like they're just pumping out a million different open-source models. What's the reaction been from the American research labs? Have we actually learned anything from that?
Um, we've had some folks on the show who've been generally excited to play with so many different research tools, but are they trying to commercialize that? And I guess my big question is: what is the actual ChatGPT of China right now? What is the consumer front end?
So, on the latter point, the consumer front end is DeepSeek offered through the clouds, right? Some of the cloud companies, ByteDance, Tencent, etc., offer DeepSeek, the model, but through their own platform rather than through DeepSeek, and some of them will offer it through DeepSeek themselves directly.
Uh, but the interesting thing to note is that we have a very big dichotomy here.
Part of the reason deepseek costs so much less on inference than you know US models is because the speed of deepseek's models right and there's this curve right of like how how fast does the model respond to you versus how many users you can batch together right so if I have like and it's a system and I'm only serving one user it's going to be bl screaming fast right but if I'm serving hundreds of users each individual user is going to be slower now the total collective number of tokens I'm generating is much higher but this is interesting because when I use open AAI I I get 100 tokens per second, I get 200 tokens per second, right?
Same with Google, same with Enthropic, XAI, etc. , right? Like you type the query and it goes like maybe twice as fast as you can read, so you can skim it, right? Uh but if you do that to DeepSeek, it's like 20 tokens a second, right? And I can read faster than the model outputs.
And that's a conscious decision because they're limited on compute on hey, I want to serve as many users as possible. And so the chatbot style applications are actually very like bad.
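The tradeoff he's describing, per-user speed versus aggregate throughput as more users are batched onto the same hardware, can be sketched with a toy model. All the numbers below are illustrative assumptions, not benchmarks of any real system:

```python
# Toy model of the inference batching tradeoff: a GPU replica has a
# roughly fixed token-generation budget per second, shared across the
# batch. Larger batches use the hardware more efficiently in aggregate,
# but split the budget across every user. Numbers are made up.

def per_user_speed(batch_size, peak_tokens_per_sec=4000, efficiency=0.9):
    """Tokens/sec each user sees as more users share one replica."""
    # Aggregate throughput creeps up with batch size (better utilization)...
    total = peak_tokens_per_sec * min(1.0, efficiency + 0.1 * batch_size / 64)
    # ...but it is divided among all users in the batch.
    return total / batch_size

for users in (1, 8, 64, 256):
    print(f"{users:>3} users -> {per_user_speed(users):7.1f} tok/s each")
```

The shape is the point: one user gets a blazing-fast response, while a heavily batched replica generates far more total tokens but leaves each individual user reading faster than the model writes.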
And that's why Chinese users tend to use OpenRouter a lot to access OpenAI and Anthropic models in China, even though they're not supposed to, or access other companies' models through other methods.
So I wouldn't say there's truly one consumer front end. Rather, as soon as DeepSeek came out, the government started pushing all the companies to offer DeepSeek. But there still isn't a good consumer experience in China.
Moonshot is probably the most popular one, but their models aren't that good, right? So I don't think one has won out yet. Yeah.
How much of an issue is owning the application layer if you're an independent country that doesn't want to rely on American or Chinese foundation models, like what we've seen with Mistral and Le Chat? There's this world where you spend all this money to build a data center in your country.
You spend all this money to train a model. Maybe you're not at the frontier, but you're close. You're in the game. But everyone in your country just goes to chatgpt.com by default.
And so unless you're willing to ban ChatGPT, and you're not going that far, you haven't truly won in terms of deploying your model, with your view on free speech, to your populace. Yeah.
So I think that's going to be challenging. There are sovereign AI efforts in all these countries, but like you mentioned, Le Chat is not that successful in France.
And Mistral is the closest thing to a successful sovereign AI outside of China and the US.
So you're going to get all these models, but people are just going to use the consumer frontier models. Google's models are so popular in India, or Perplexity is ridiculously popular in Indonesia for some reason.
I don't know why, but these are frontier models from American companies rather than the domestic model that could be made. And so I don't expect any sovereign AI effort to truly succeed on the model level.
I do think sovereign can mean many things. It could mean I'm making a bunch of data centers, it could mean I'm making a big neocloud company, or it could mean I'm going to build the full stack with the model.
But at any point, if it doesn't pencil out economically, most of the cost is the GPUs, and you can always just rent those out to someone else, right, and try to recoup your cost.
But you can't really do that with a foundation model you've trained that has mediocre weights, right?
It's like you just burned all this money and FLOPS on something that's effectively useless, because Meta's open-source model and DeepSeek's open-source model and Alibaba's open-source models crush you. Yeah. Yeah. Uh, speaking of Meta, I've been trying to work through what's going on there.
There's obviously the dust-up around the Llama 4 launches, but somebody asked me, what is the bull case? And everyone always says, oh, recruiting, or some beneficial or benevolent thing.
But I was wondering if there's a reasonable case that if you project out what Meta's LLM spend on other vendors would have been over five years, it would have been so big that even if they're constantly on the lagging edge, they're still recouping the investment in training Llama just because they can serve it internally.
Is that a reasonable thesis, or is there something else going on there? What do you think of the motivations and the problems with that project?
I mean, they have internal Llama models too that are trained on their internal codebase and that they can use internally, right? So that sort of stuff is something they could just never get elsewhere. But I think it's a multifactor thing. The bull case is obviously a personalized AI assistant on my glasses that's always on my head, always serving me up slop.
But truly winning in the consumer use case, right, that seems to be what Meta is actually going for.
As far as today, they roll out Meta Llama models to everyone on WhatsApp, Instagram, and Facebook, and they claim they have 500 million monthly active users, which is a lot. Now imagine if they didn't offer that, right?
Would people start using ChatGPT instead, and now they're out of the Meta ecosystem, their screen time goes down, and they look at fewer ads? Or does Apple Intelligence become more powerful, and people go to Apple Intelligence for their AI because that's the AI that's most accessible? And now all of a sudden Meta, who has already been screwed by Apple multiple times on advertising, is getting screwed even harder, because more and more of the data keeps going to Apple and there's no way for Meta to get access to it.
So there's an existential threat here too, right? I think it's prudent that Meta spends on these models. I'm also a lot more bullish on Meta long-term than most people are.
People are looking at Llama 4 as a single point in time, like, wow, this really sucks. But there are two models here, right? There's Maverick and there's Scout. One of them is really bad and one of them is pretty good. What about Behemoth?
Isn't that another one? Don't they have a third? They didn't release it. Rough name, though. Yeah, it is a weird name, because it's very demonic, like you're unleashing this demon. But the funny thing is the architecture is super similar to GPT-4. Oh, really?
Yeah, it's extremely similar. It was just a very funny naming scheme, because the behemoth is the untameable beast, and they couldn't tame it to get it out the door, so the name really mimics what happened with that project.
So, yeah. I mean, are they gated by scale?
The capex seems to be growing, but similarly to all the other hyperscalers. Is there anything unique about Meta over the next couple of years that could give them a compounding advantage and allow them to come from behind?
I do think they're still spending less on gen AI than the other hyperscalers.
Note that like 60 to 70% of their GPU spend is on recommendation systems and the standard business, and nowadays that includes creating gen AI ads, which get higher click-through rates. That's the holy grail for the next six months for them.
So it's not like there's no gen AI in there, but a lot of it goes to their classical business. But generally, they're a player to be reckoned with, right? OpenAI has Stargate. Meta has their Louisiana project, I don't know if there's a fancy name for it.
Everyone has their big projects, and so when we get to '26, '27, everyone's going to have these gigawatt-scale data centers with close to a million chips, right? And the fight continues.
I think this is just a temporary setback for Meta. They continue to recruit, they continue to have good talent. They'll reorganize, right? Without pain, you don't get better. So this is a learning experience for them.
So, on Meta and scale: they partnered with Constellation, basically agreeing to purchase nuclear energy from Constellation for the next 20 years. What's your updated take on nuclear?
We obviously have these new executive orders to help support it, yet it still feels pretty far out, at least for bringing new energy online. How are you thinking about the category? Yeah. So even in general, building new nuclear reactors is still going to take a really long time.
And if you're a believer in AI 2027 or AI 2032, whatever the number is, that's too long of a time scale to matter, right? This is still the "deficits can be as large as possible" time frame. I think with regards to nuclear, that's the case for standard Gen 4 reactors.
But when it comes to SMRs, they didn't ease the most important regulation, which is the concrete containment zone. The containment zone for SMRs still has to be monumentally big, right? Even though it's a much smaller, much safer thing.
So they didn't ease that regulation, and I'm not that bullish on SMRs yet either. And basically, yes, you can sign these deals for nuclear power, but that's nuclear power that was already being used by residential or industrial customers, and whatever they were running on now just turns to gas.
Or look at the flip side. Meta's data center in Louisiana uses gas. Stargate uses gas. xAI in Memphis, Tennessee uses gas. Amazon's Indiana facility, Project Rainier, uses gas, right?
Everyone just uses gas. And the ESG commitments that the hyperscalers made in the prior era, are they acting as handcuffs or shackles or leg weights going into this next energy buildout?
So, the funny thing is some of them have backed off a little. Meta and Microsoft have backed off some. But the other thing you can do is just pretend you're still green. And so what you do is this thing called a PPA, a power purchase agreement.
You can set up solar panels, say 100 miles away, and they'll generate power only during the day, but the power generated during the day is enough on paper to cover the data center's entire night as well. The actual electrons being consumed are not from the solar panels, right?
The solar panels are just selling to the grid; you purchased that power and sold it to the grid. Whereas the power you're buying from the grid is coming from a gas plant. The actual electrons you're consuming are from there.
So you're pretending to be green because you have this solar PPA or wind PPA backing you up. But in terms of actual electrons being consumed and new plants being built, it's gas. Yeah.
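The accounting trick he's describing, matching certificates against total consumption rather than matching electrons hour by hour, is easy to see in a toy example. The load and generation profiles below are made-up assumptions:

```python
# Toy sketch of how a volume-matching PPA can look "100% green" while
# the data center actually draws gas power at night. Numbers are made up.

HOURS = range(24)
load_mw = [100.0] * 24                       # flat 100 MW data-center load
# Contracted solar farm: produces only in daylight, sized so one day's
# generation covers the whole day's consumption on paper.
solar_mw = [300.0 if 8 <= h < 16 else 0.0 for h in HOURS]

consumed = sum(load_mw)                      # MWh consumed over the day
solar_credits = sum(solar_mw)                # MWh of certificates earned
print(f"paper match: {solar_credits / consumed:.0%} 'green'")

# But hour by hour, any shortfall is served by the grid (gas, here):
gas_mwh = sum(max(load - sun, 0.0) for load, sun in zip(load_mw, solar_mw))
print(f"electrons actually from the grid: {gas_mwh:.0f} MWh of {consumed:.0f}")
```

On paper the match is 100%, yet two-thirds of the actual megawatt-hours in this toy profile come off the grid overnight.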
Their regulatory ESG stuff is malleable enough that this is allowed. They're working through it. Um, let's talk about some of the other sci-fi concepts for energy generation. People are starting to talk about data centers in space.
People are talking about data centers in the middle of the ocean with tidal energy. Basically, go where the energy is basically free, or where you get something for free: maybe cooling for free, or land for free. Is any of that relevant to what you've been focused on?
So, Microsoft has tried putting a data center in the water before, for cooling, right? And that didn't work. And it's the same reason why space data centers won't work anytime soon. As far as tidal energy, tidal energy is interesting.
I think it could work, but it's a ways out. The main reason these data centers don't work, whether you're sticking them underwater, on a barge in the middle of the ocean, or in space, is that GPUs are really, really unreliable.
We've done a bunch of testing of GPUs. We reviewed about 40 different clouds, including Amazon, Google, etc., usually at hundreds of GPUs, thousands in some cases, in something we called ClusterMAX.
Everyone's GPUs are unreliable, because they're inherently unreliable. And what happens when a GPU is unreliable? Sometimes the fix is as simple as, oh, turn off the server and turn it back on. Sometimes it's unplug an optical transceiver and plug it back in, right?
Sometimes it's take the GPU out of the socket and put it back in. That's the vast majority of fixes. I don't know why you need to do this, but that's how tech works, right? You unplug it, you plug it back in. And you need people to do that, right?
Yes, robots could potentially do it, but we're not there yet. And the physical footprint of data centers isn't that large. They're big on an individual basis, but they're not even going to cover a basis point of American land in a decade. They're very small.
So GPUs are really unreliable, which we've sort of proven with ClusterMAX. And the other aspect of space data centers that's really dumb is, how does heat get removed? On Earth, heat gets removed because you have atoms in contact with the hot thing.
Whatever's hot, they take the heat away, leaving more excited than they arrived, right? You're taking cold stuff, making it hotter, and sending it away, whether it's water or air.
At the end of the day, you're rejecting that heat into the air around you through an evaporation tower or chillers or whatever, on land. But in space, there's no medium. How many particles are even hitting your spacecraft and carrying away heat?
And the particles that do hit your spacecraft are generally quite excited already, and there are just very few of them. So there's no easy way to carry the heat away, and these things consume tons of power. So they're unreliable, and they consume tons of power. Sure, power is cheap in space, right?
With solar panels you can have constant sun. But what about the fact that you've got to radiate the heat away?
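In vacuum the only rejection path left is thermal radiation, and a back-of-envelope Stefan-Boltzmann calculation shows why that's painful. The load, emissivity, and radiator temperature below are assumed values, and absorbed sunlight and view factors to deep space are ignored:

```python
# Back-of-envelope: radiator area needed to reject data-center heat in
# vacuum, where radiation (P = eps * sigma * A * T^4) is the only path.
# Assumptions: 1 MW IT load, emissivity 0.9, 300 K radiator surface,
# no absorbed sunlight, perfect view of deep space.

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9):
    """Area needed to radiate `power_w` watts at temperature `temp_k`."""
    return power_w / (emissivity * SIGMA * temp_k**4)

area = radiator_area_m2(1_000_000)
print(f"~{area:,.0f} m^2 of radiator per MW")   # roughly 2,400 m^2
```

Under these assumptions that's on the order of 2,400 square meters of radiator per megawatt, which is why even small satellite payloads struggle with cooling, let alone a gigawatt-class data center.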
People try to put tiny computer chips in space on satellites, and they have a hard time cooling those. So it's a problem that I don't think is feasible within the "deficit spending is okay" time frame.
You keep going back to that. Uh, are you bullish at all on new consumer hardware, dedicated hardware for AI? You know, Jony Ive joining, uh, Friend.com. You might be calling in from a Rabbit R1 and we don't even know. Are you excited at all about a third device, outside of the phone or the computer? Yeah, I think the Jony Ive and Sam Altman video is so cinematic. It's so good.
I'm excited for the product simply because the video is so good, the quality of it. But, I don't know. I've seen this Apple device that's basically an AirPod with a camera, right? And that's pretty cool.
Or the AR glasses with cameras on them, like the Meta Ray-Bans, or all the other iterations of that. I think a new device is definitely coming.
With how good AI is getting, the form of the human-computer interface can change. It can be more natural and more seamless, right? It's not going to be typing on a computer at a desk. Well, you're still going to have that.
You're still going to have your phone, right? But really, I think this hands-free compute device is coming, and it's enabled by AI. Now, will Meta win? Will Apple win? Will OpenAI and Jony Ive win? And I like Friend, the founder is really cool. Will they be a runner at all?
It's a hard space. But I think there are three companies that seem to have a shot, and it's Apple, Meta, and maybe OpenAI with Jony Ive. Yeah. And Avi, of course, coming from behind. The domain alone, we've got to give it up for the domain.
They spent a fortune on it, and it's worth every penny. On Apple: you put them in L tier. If Warren Buffett can learn to love Apple at age 75, can we AGI-pill Tim Cook at age 65, or however old he is? Is it possible? What needs to change for Apple to become dominant? They have so many advantages.
They have tons of wins in humor. It's the AI that's made me laugh more than any other. That's true. The text summaries that go crazily wrong and hallucinate weird things. Very entertaining product. Seriously underrated as a consumer product. But what's your experience?
Break down what's happening at Apple, what changes you predict, what changes you would recommend.
So, I think they're clearly not that AGI-pilled. They haven't spent a ton on data centers and things like that. Now, they do have hundreds of megawatts coming online in the next two years, versus the gigawatts the hyperscalers have, right? So they're still an order of magnitude off in how much they want to spend.
Then on the flip side, Apple has started building an accelerator. They hired some of the best people from Google. They already had a good chip team anyway, because the chips for their phones and their Macs are good. But it's about getting the talent they didn't have.
They're partnering with certain companies, and they hired some of the best people from Google, including their head of rack design, rack architecture. So Apple is definitely staying close enough behind that at any point they could leap into action.
But the challenge is always going to be, how are you going to get people to join this company? And so this is, not my bull case, but a possible scenario for Thinking Machines: they just get acquired by Apple, right?
Because at some point Apple will realize, oh god, we need talent, and there's no talent out there for us to hire. When we try to hire, they say, no, screw this. I mean, Tim barely makes 70. He doesn't even make 75 million; he makes 74.6 million, which is like a junior OpenAI researcher who joined two years ago. So how can they compete? That's OpenAI's janitor, bro. Yeah.
Uh, is Lisa Su cooking? She acquired Brium earlier this week. Can you break down that acquisition? Was it exciting to you? What's going on over there?
Yeah. So they actually acquired two companies this week. They acquired Brium, which is more like LLVM compiler people. I think it's more of an acquihire, like 10 people or something like that.
It's not that crazy, but it's more commitment that they care about acquiring software talent. Yeah. And what's funny is when they acquihire a company, the people at that company get paid a fair salary, a very good salary.
Whereas in their normal hiring process they sometimes don't pay people enough, which is why they don't always get the best talent, which is something we've mentioned before. So Brium is ML compiler people and things like that, and then Untether is the other one, which was making an AI chip. They acquihired them as well, mostly for the staff on the hardware side, I imagine, because they just need to move faster to release chips as fast as Nvidia does. Nvidia is just clockwork spinning out new chips, and AMD is trying to get to that yearly cadence too. So I think Lisa Su is waking up to how important this opportunity is and that she needs to spend a lot more.
But note they're still spending a ton of money on buybacks when they could be investing it. I mean, so does Nvidia, but Nvidia is also making godly amounts of money. AMD is making good money that they could turn around and reinvest, but they're not reinvesting as much as they should.
I think these are good acquihires, both Untether and Brium. They also bought Nod.ai a while ago. Nod.ai is where AMD's sort of god of GPU AI software came from.
He's the king, and that acquisition was really good, because again they were able to pick up really good software talent. It's just, you know, we need them to move faster if we want to take them super seriously. But it's the right direction.
What was your read on the Nvidia earnings, and the lack of guidance around the training-to-inference switchover? Do you have a better idea of what's going on? It seemed like they had quoted a 40% number earlier; they didn't include a new number.
This seems like an important narrative, or maybe it's not, around the actual workload shift. And I want to know if that's relevant to AMD at all. Is there an opportunity there?
Yeah. So I think Nvidia is going to have a hard time knowing... Oh, really? ...how much compute is going to inference versus training. It's actually just hard for them to know, right? They don't get to know what workload their customer is running. They can guess, and they have the most data to guess with, but they can only guess. Would people not want to tell Nvidia?
I feel like if you're buying a hundred million dollars of H200s, you might just be like, yeah, sure, I'll tell you what I'm doing with them right now. No problem, I'll fill out your survey, here's my NPS. I think these companies are very secretive, right?
People like OpenAI won't tell Nvidia what their model architecture is, even though if they just told them, Nvidia could optimize their next-generation chip for it way better. So Nvidia just has to guess, right, and hope they know what OpenAI is doing, but they don't. They have a good idea, but they don't know. So I think that's one aspect of it. The other aspect is that the lines between training and inference are blurring. In the days of pre-training, your cluster would be training, training, training, and then you'd deploy inference clusters. But now, with reinforcement learning with verifiable rewards, one interesting thing is that traffic is not uniform. It oscillates. On the weekends, ChatGPT is used very little relative to weekdays. During work and school hours, ChatGPT usage is huge. During the night, it's not, right? Except when Ghibli slop is trending; that's about the only time you see usage spike during off-peak hours.
And so what's interesting is that these clusters have lower utilization off-peak, but now I can turn around and start using them at night. For example, I can spin down a percentage of them and turn them over to reinforcement learning with verifiable rewards.
I can generate a bunch of data, verify it, see if it's good or not, whether it's math problems, coding, or some other reinforcement learning paradigm like computer use, and I can keep that data.
And if I ever need to switch back to inference as usage picks up, I just say, okay, stop, no more of that, put the normal model on there, and start doing inference. And that can take a minute or less. So there's a blurring of the lines between what is training and what is inference. And then the other question is, can AMD get in there? What's interesting is that AMD has generally taken the approach of, hey, we're 80% of the cost at the same performance, right?
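The day-night shuffling he describes can be sketched as a trivial fleet scheduler. The traffic curve, fleet size, and headroom below are illustrative assumptions, not any lab's actual policy:

```python
# Toy scheduler for the training/inference blurring: off-peak, spin part
# of the serving fleet over to RL rollout generation (verifiable-rewards
# style), then hand it back as traffic returns. All numbers are made up.

def demand(hour):
    """Stylized weekday chat traffic: busy 9:00-21:00, quiet overnight."""
    return 1.0 if 9 <= hour < 21 else 0.25

def assign_fleet(hour, total_gpus=1000, headroom=0.2):
    """Split the fleet between serving and RL rollouts for one hour."""
    # Keep enough replicas for current demand plus a safety margin;
    # everything left over generates and verifies RL rollout data.
    serving = int(total_gpus * min(1.0, demand(hour) * (1 + headroom)))
    return {"serving": serving, "rl_rollouts": total_gpus - serving}

for hour in (3, 12):
    print(hour, assign_fleet(hour))
```

At 3 a.m. most of the fleet is generating rollout data; at noon it is all serving, which is the "minute or less" switch he mentions, modeled as a per-hour reassignment.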
That's sort of where the MI300 shakes out. In some scenarios it's higher, in some it's lower, but roughly 80% of the cost. Now, with the new generation of GPUs they're launching, I think next week, the MI355X, it's two-thirds of the cost of Nvidia's Blackwell.
But performance-wise, is it two-thirds? Is it less, or is it more, right? So this is going to be a challenge, and it's going to rely a lot on their software. Meta is really excited about it, but not too many other customers are. And so the question is whether AMD can get in there.
They're targeting inference mostly, but Nvidia is moving the game really fast. They're coming up with these new software libraries, and it's not just CUDA anymore. They're building the entire inference engine, and AMD can't plug into that. It's called Dynamo.
What Nvidia is doing around Dynamo is really locking AMD out of inference at the non-hyperscalers and labs, because it just doesn't make sense for anyone else to try to replicate software that Nvidia is giving away for free on Nvidia GPUs. Mhm.
What do you hope to see out of companies like Prime Intellect over the next one to two years? I believe you're an investor in Prime Intellect. Is that correct?
So, I'm sure, a little bit conflicted, but yeah. I invested in the seed, but to be clear, I'm pretty impartial. We had this ClusterMAX review and we didn't give them the best review, even though I invested in them personally. But I'm really excited about what they're doing. AI is super centralized, right? And the path I mostly see forward is that AI continues to get more centralized: fewer and fewer players, with more and more control over our data and over the automation of the entire world's economy.
This is potentially quite scary for someone who likes decentralized power systems and capitalism and such, because centralization begets tyranny. Not that I'm being that dogmatic, but my excitement about them is mostly for a technology reason, right?
They had cool demos around decentralized training. They've trained models in a decentralized way, and they've now done reinforcement learning with verifiable rewards in a decentralized way. I'm very excited about the research they're doing there. And they're open-sourcing everything, right?
Which is the cool thing. You can just go look at what they open-source. You don't have to trust anything. They have some pretty good software.
I'm hopeful there's a path where AI and power don't just keep centralizing until we're enslaved by the machine god. Except if SemiAnalysis were to centralize it. I feel like, by that rule, people should be able to trust you.
You know, a benevolent sort of dictator, world leader. I could see that. Call me the Lee Kuan Yew of AI, for sure. Um, I wanted to go more into those Studio Ghibli moments. Quick fire first: would you hire somebody with a Studio Ghibli profile picture, or is that a negative signal?
No, that's good. In our Slack, someone has a Studio Ghibli profile picture. Okay. Yeah. Okay, I like it. And his public profiles are also Studio Ghibli. Okay. Wow. That's good. But, you know, whenever there are these big meme moments, everyone says the GPUs are on fire.
Obviously they're not literally on fire, but what is breaking? Is it that there's actually more unplugging and plugging back in happening in the data center? When we see Studio Ghibli go viral and the queries are clearly failing, what is it?
What do you think is happening internally? Is it just too much load? Because I feel like the chips are under incredible load during training anyway, right? They're used full-time. So why do we see so much failure during these spikes? What's actually going on there?
With training, we know what's happening, right? We're saying, hey, in this iteration you're going to process this many tokens, you're going to update the weights, you're going to exchange them all, and okay, here's more data for you to train on.
And you just keep iterating, many, many times. Every 15 or 30 or 60 seconds you update the weights and do it again, and you know how much data you're sending. With inference, you have these clusters, and it's that curve again, right?
If I have just one user, it happens really fast: great user experience. If I have tons and tons of users, it happens really slowly. And while I'm serving more users and getting more total output, each individual user's experience is worse.
But at some point it gets so bad that you either run out of memory or, hey, a GPU does break, and now all of a sudden you're dropping requests constantly. So not only does the user experience get bad because it goes from fast to super slow (look at how fast Ghibli images are processed now; it's crazy fast compared to what it was initially), but also, when something bad does happen, you can't just say, oh, take all these requests and route them to these other GPUs. It's, oh, all these queries fail, right?
It's, oh, I got more requests than I'm able to handle. Well, these users, I'm just not going to start processing their requests. Sorry, it doesn't work. And so when the GPUs are "on fire," that's what's happening.
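That failure mode, hard drops once load exceeds the surviving capacity, is easy to sketch. The per-GPU capacity, fleet size, and request counts are made-up numbers:

```python
# Toy model of "the GPUs are on fire": a serving pool has a fixed
# request capacity per interval; a traffic spike plus a failed node
# means requests are dropped outright, not just slowed down.

def serve(incoming_requests, healthy_gpus, capacity_per_gpu=10):
    """Return (served, dropped) counts for one scheduling interval."""
    capacity = healthy_gpus * capacity_per_gpu
    served = min(incoming_requests, capacity)
    return served, incoming_requests - served

# Normal load, all 100 nodes healthy: everything is served.
print(serve(700, healthy_gpus=100))     # (700, 0)
# Meme-moment spike AND ten nodes down: hard drops, not just slowness.
print(serve(1500, healthy_gpus=90))     # (900, 600)
```

The point of the sketch is the discontinuity: below capacity, adding users only degrades latency; past it, every extra request simply fails.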
Now, also, I think it's mostly that dynamic: you add that incremental user and things break, or something breaks and all those users now have their stuff broken.
Well, thank you so much for coming on. Everyone in the audience should go to semianalysis.com if you're not already subscribed. And we would love to talk to you again. This was a fantastic conversation. Super fun. A million. A billion. A trillion. I want to gong. A billion. Hey, add a new plan on SemiAnalysis.
Five million a year, and we'll ask some of our VC buddies that just have a bit too much money. We will shame them into it. Oh, you're not on the pro plan yet? Yeah. The max plan, the presidential plan. You're on SemiAnalysis Ultra Max. Wow, you must not be serious about artificial intelligence. Oh.
Oh, you don't really invest in AI? Oh, you're not about the AI thing, you're more of a tourist. I get it. Yeah. More of a tourist. Yeah. Yeah. You're a B2B SaaS guy. It's cool. It's cool. It's great. I'm sure you make a good living.
But anyway, this was fantastic. We'll talk soon. Thank you guys for having me on. Super fun. We'll talk to you later. Bye.
That was a lot of fun. All right. You're going to love this, John. I think you were too locked in.
We're going to have Blake from Supersonic calling in at 2:15, in 45 minutes, to talk about the new EO. Oh, is it political? You know, I can't pay attention to politics when I'm talking to an AI person. I just get too locked in. But that is fantastic news. Very exciting. I see your post.
Blake will be joining TBPN live at 2:15 PM PST to discuss the new supersonic EO. Very excited to talk to him. And we have Dana Settle from Greycroft coming into the studio. How are you doing, Dana? Good to see you. Hi, great to see you too. What's going on? Thanks so much for joining. Very exciting times. Uh, the