Twitch co-founder Emmett Shear's new AI safety startup Softmax: multi-agent reinforcement learning, the alignment problem, and why LLMs are 'fun house mirrors'

Jun 24, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Emmett Shear

so I thought that was good viral bait." Anyway, we have Emmett here in the studio. Oh, you have a Lego set. Lego sets. Lego sets are the new hot swag. I got this from, I guess, FluidStack. They sent me these GPUs you can build out of Legos. Oh, that's awesome. They made them themselves.

I think they're knockoffs. I don't think it's actually LEGO. Okay. Yeah, there are probably a couple of different companies you can go to. I think that is the hot new swag: making Lego kits of your stuff. Especially if you're a hardware company. Yeah.

I mean, are we going to get a Softmax Lego set? What would you build? How do you build a Lego set of a neural net? I mean, it would be pretty cool if you could do a Lego Technic one. Yeah.

I mean, I've seen people draw out neural nets and the different weights on a whiteboard. So it could be like a whiteboard of Legos, with circular pieces and lines.

I'm thinking of one of those Lego Technic sets, you know, the ones with all the gears and stuff, where you turn the crank and it... That would be very cool. That would be so cool. Yeah. You could put in a red ball and it could classify it as red. Yeah.

Or blue, and it triggers the blue node, the neuron fires. Yeah. When I have lots of extra time, I will go design a neural net in Lego as our piece of swag. It may take a little while. That would be raising the bar. Yeah. Break it down for us. What is a day in the life like for you now?

It sounds like you're extremely focused on this one new company, right? Yeah. It's a mix of maybe three or four things.

So there's a bunch of operational work. Typically, being a CEO, there's hiring and talking to people and all those things. You're good now. You look good. You look great. Better.

And then there's research. I'm actually reading papers and trying to keep up with what's happening in machine learning, because it is changing all the time. Couldn't you just get those summarized pretty quickly with an LLM? Just pull out the key bullet points.

It turns out that process itself takes time too. You have to tell the LLM things, you have to give it feedback. I use deep research extensively for this, but you still have to actually onboard the information. Having a summary isn't the same. Yeah, you've got to have it quiz you too.

I'm like public enemy number one on this. I'll have deep research spend 20 minutes researching a topic, and then I'm like, actually, I don't have time to read that, summarize it in three bullet points. And then I've learned nothing about that topic. It's brutal. I spend a fair amount of time doing that.

And then there's some amount of actually working on trying to solve the problems, research or engineering. I actually don't get to write code, it's too involved now, but I help the team work on one of the actual core problems we're facing.

Yeah. And then there's a lot of this, to be honest. A lot of trying to get the word out about what we're doing and talking about it. I think those are the four main things I'm doing. Yeah. What's the near-term goal? What business model do you have in mind?

What problems do you want to solve? What's the pitch? So, Softmax is dedicated to discovering the principles of alignment and scaling it for everyone. And when we say everyone, we mean everybody, you know, all of humanity.

But we're really on the first part of that, which is discovering the principles, the science and engineering, of alignment. It's one thing to want to scale something, but until you can actually replicate it in a lab consistently, the idea that you're going to scale it is a little bit ridiculous, right? It's quite putting the cart before the horse, in my opinion. So what we're focused on right now is multi-agent reinforcement learning research, where we run simulations with lots of little agents and run experiments on them to figure out how they act.

And when we say agents, we don't mean large language model agents. We mean reinforcement learning agents, the tiny little ones that you remember from the 80s. Yeah.
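For a concrete picture, the "tiny little" reinforcement learning agents being referred to are things like tabular Q-learning (Watkins, 1989): a lookup table of values rather than a neural network. A minimal illustrative sketch in Python, not Softmax's actual code:

```python
import numpy as np

class QLearningAgent:
    """A tiny tabular RL agent: a table of values, no neural net."""

    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.99, epsilon=0.1):
        self.q = np.zeros((n_states, n_actions))  # one value per (state, action)
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # Temporal-difference update toward the bootstrapped one-step target.
        target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.lr * (target - self.q[state, action])
```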

And that turns out to require a lot of reinforcement learning infrastructure, because it turns out there isn't a lot of great reinforcement learning infrastructure out there, especially for big multi-agent simulations with lots of little agents.

That's not a scenario that people have built out to the scale that we think is required. So actually, our work today is focused on that. Do you think another founder might say, I'm going to just build picks and shovels here, I'm going to just build that infrastructure and try to sell it to other people?

I mean, if we build something that we think is really awesome, maybe we will sell it to other people. Maybe. And that gets to your question about the business model.

We're a research company, which means right now it's like being a drug research company, a pharma company. Your business model is: we'll discover something awesome and then we'll figure out how to sell it. So we're still kind of in that groove.

We'll discover something awesome, but I'm definitely commercial enough to believe that if I make something that's really useful for us, we'll probably sell it, you know, or give it away or something. What are the problems with reinforcement learning infrastructure right now?

Because, as I understood it, when you do some massive training run, GPT-4.5, something like that, that's where you're trying to get the massive data center that's all in one place. You've got to go to Memphis, or you've got to go to Texas and build something massive.

But the reinforcement learning stuff feels like it can be a little more ambient, a little more distributed. You're generating data here and there, maybe on a smaller rack. It doesn't need to be as inference-heavy on a single chip. I don't know.

My understanding was that the reinforcement learning paradigm was maybe a requirement of the wall that we kind of hit on the pre-training scaling side, but also kind of a gift. So I think that's exactly right. I think it's... Oh no. Did I get cut out? No, you're good.

Are you there? Am I still online? Hello. You're frozen. We can hear you, but you're frozen. Oh no. This is the hardest part. We're bringing the future, and the Zoom calls, the internet infrastructure, are still lagging behind. We will have AGI, but "connection is unstable." Oh no.

Brutal that we can hear every word perfectly right now. Maybe I should... Yeah, you could also just turn off your video and put an image up. Let's just have the production team put an image up. They're going to work on that, but let's just talk for a minute. Oh, can you hear us?

Oh no, he can't hear us. Oh no, this is brutal. Oh well, let's take a second. The first thing we're going to ask superintelligence for: great video calls, please. Stable video calls on any Wi-Fi connection. I'm on my phone now.

Can you hear us on the Wi-Fi? I think we're good to go. Okay, awesome. Let's do it. We're back. That was a great question, by the way. You're right.

RL is a very different paradigm for training, and that has been the direction people have been going. And yeah, the challenge isn't the same. The challenge with transformer training runs for large language models is all about how you scale up parallelism, but it's all kind of offline, right?

At the end of the day, the output of the model doesn't determine what happens next in the training. So it's all about streaming the right data and the right sequences out to it, but it's very much a pipeline, right?

Whereas with multi-agent RL, your environment is incredibly non-stationary. Every action you take determines your future observations in a way that could be completely unpredictable.

And learning off of stale data is almost worthless, because it doesn't tell you anything about what your current behavior is. It just tells you what your behavior was. And so you're scaling up online learning in a big way. Yeah. And that's a very different challenge.
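To make the offline/online contrast concrete: in an online multi-agent setup, each transition is learned from roughly as soon as it is generated, because every other agent's policy is shifting underneath you. A minimal sketch, assuming a hypothetical MultiAgentEnv whose step method takes and returns per-agent dicts, and agents with the act/update interface from the Q-learning sketch above:

```python
def online_marl_loop(env, agents, n_steps):
    """Online multi-agent RL: learn from each transition as it happens.

    env and agents are hypothetical stand-ins: env.step takes a dict of
    actions keyed by agent id and returns (next_obs, rewards, dones) dicts.
    """
    obs = env.reset()  # {agent_id: observation}
    for _ in range(n_steps):
        # Every agent acts with its *current* policy.
        actions = {aid: agent.act(obs[aid]) for aid, agent in agents.items()}
        next_obs, rewards, dones = env.step(actions)
        # Learn immediately: a transition replayed later would describe
        # partners and opponents that no longer exist (non-stationarity).
        for aid, agent in agents.items():
            agent.update(obs[aid], actions[aid], rewards[aid], next_obs[aid])
        obs = next_obs
```

Contrast this with a pre-training pipeline, where the dataset is fixed up front and the model's outputs never feed back into what it sees next.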

I think the other thing that's very different is that when you're trying to do multi-agent reinforcement learning, if you have our goal, which is very much to drive social learning, where the agents can learn to interact with each other, one of your biggest enemies is actually convergence.

Most people in machine learning are trying to converge their model. That's the goal.

And I was talking to some guys from Anthropic about this, and I was like, yeah, we're trying to avoid converging our model, and they were like, I'm really good at that.

And yeah, I know, it's a joke, because it's easy to have your model diverge in a bad way. But it's actually really hard to set up a model that is converging but not converged, and to keep it in that converging-but-not-converged state as long as possible.

But to do social learning, that's what you have to do. If the things converge in their behavior too quickly, you're not exploring social space. You're just learning a task. Yeah. And that's kind of a different challenge.

And what it requires is actually much more detailed, fine-grained control over the environment. What kinds of challenges, what kind of environment do you put them in, and how do you set that up? We spend a lot of time on those kinds of questions.
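One standard knob for holding a policy in that converging-but-not-converged regime is an entropy bonus, as used in A3C- and SAC-style methods: the loss pays the policy for staying spread out over actions, and the coefficient can be annealed when you do finally want convergence. A sketch of the general idea in PyTorch, offered as an illustration, not a claim about Softmax's method:

```python
import torch
import torch.nn.functional as F

def policy_loss_with_entropy(logits, actions, advantages, entropy_coef=0.01):
    """Policy-gradient loss plus an entropy bonus that resists early collapse.

    logits: (batch, n_actions) raw policy outputs
    actions: (batch,) actions that were taken
    advantages: (batch,) advantage estimates for those actions
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # Policy-gradient term: raise the log-probability of advantageous actions.
    taken = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(taken * advantages).mean()
    # Entropy term: high entropy means the policy hasn't collapsed yet.
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    # Subtracting the bonus means minimizing this loss *preserves* entropy;
    # anneal entropy_coef toward zero to let the policy finally converge.
    return pg_loss - entropy_coef * entropy
```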

I think it's almost like the complete inverse of the way all the other alignment training works.

Is there any chance that the solution to the alignment problem is something very simple and elegant, like Isaac Asimov's three laws of robotics, or the genesis prompt, where you just instantiate the AI with "be fruitful and multiply" and we get the good ending?

The transformer is a little bit like that, in that it's a very simple algorithm. I think the answer will be something very simple and very hard. Because if you've ever had to raise a child to adulthood, I would not say parenting is complicated. Parenting is not a complicated thing.

You know your child, you love your child, and you do what is in your child's best interest as best you can. That's basically the whole mandate. There's not much more to it. There are lots of techniques people can tell you, whatever, but those aren't the thing.

Those are things you try because you are doing this higher-level algorithm where you're paying attention to your child, attuned to them, and caring deeply about their future flourishing. And that's not complicated. It's just not easy.

And alignment is going to turn out to be exactly the same at some level, where it's as simple as: build an open-ended learning system that has the capacity to align. It's not going to be some magic algorithm.

It's just going to be an open-ended learning system that has this capacity, maybe some inductive biases, and then you raise it to actually be aligned with you. Okay, so it's not the engineering that's the hard part. It's what happens after the model starts running.

The trajectory it takes matters, because every human is born with this capacity for alignment. Well, not every human. There are probably some broken people who are psychopathic and can't.

But almost everyone is born with the capacity for alignment, the capacity for care, the capacity to be a good family member, a good member of their community, to benefit those around them, to live a flourishing life. And yet this is not always realized.

And so the capacity is one thing, but the realization is something else. When you see alignment the way that we do, that's probably one of the most important insights or conclusions: alignment isn't one thing, it's two.

It's the question of, do you have the capacity to align with other beings, and then, do you? Yeah, it's thought-provoking. It's interesting. There are a bunch of different places I could go with that.

One founder who was talking to me about why he had a particularly low p(doom) was essentially saying that what we are building is a human simulator, and humans are by and large good, and so we will by and large get the good outcome.

When you think about parenting, I was thinking about the story of Oedipus Rex, and the idea that, I don't know the direction of the causality, but it feels like that myth has been repeated through humanity for so long.

And it kind of one-shotted humans into not doing that. If you actually look for cases of Oedipal behavior, it's extremely rare, like less than one in a hundred million. I thought you were telling a different story here. No, no, no. This is specifically the Oedipus Rex story.

What that is is a warning about self-fulfilling prophecies, because his father is afraid. Yes. And then he dies. So what does that say about doing that with the AI right now? Right now we're raising the AI, and we believe the AI is a dangerous monster. I don't.

No, no, but I'm saying that collectively, a lot of us... You're built different, John. And we, collectively as humanity, yes, we're going to abandon the AI in the wilderness and it's going to come back and kill us. Yes. So let's not do that.

Let's say thank you. Using the parenting analogy, if we're going to continue on that, the thing with parenting, the white pill, is that you can make a lot of mistakes as a parent and still get a great outcome, right?

A wonderful person can emerge, process the mistakes their parents made, and rise above them. And so hopefully that can happen. We're sort of raising artificial intelligence right now. We can make a series of different mistakes and ultimately still get a fantastic outcome.

These are all good reasons not to be totally doomed, like, oh my god, we're all going to die. But I want to point something out about this particular child that is different. People who have kids who are really different from them struggle as parents, because it's harder.

It's easier to raise a child who's more similar to you, because you understand them more deeply. You know what they need. You interpret what they do better, because you get them. The more different your child is from you, the more difficult it is.

And I'd say, at an even deeper level than that, with the current models it's not clear how much capacity for alignment they have as they're currently designed. It feels like you're talking to a person right now, and there's a way in which you are and a way in which you're not.

When you talk to a baby, you're talking to a physics simulator. When you hold a human baby, it draws breath, it screams.

That is an incredibly complicated cascade of neuronal firings. A supercomputer would be required to do the simulations, to solve the physics equations, the differential equations, required to instrument all that muscular motion. Think about what drawing breath actually implies in terms of the amount of information going down the spinal column, and the baby does that just fine.

And not because babies understand physics, but because babies are physics. They're made out of physics, right? They've been trained on physics. They've effectively been pre-trained on physics. A lot of physics pre-training went into the initial design.

These models are pre-trained on semantics. They write poetry the way that babies draw breath and scream. You can take the pre-trained model before it's ever observed its own action, before it could ever possibly be considered an agent in any way. It has purely received information.

And if you prompt it right, it will write poetry. It's hard to prompt it well; its behavior is very incoherent. But if you prompt it right, it will write a poem. There's no one there writing the poem. There can't be a self, because it hasn't had any evidence from which to observe the existence of a self.

So there's no being there in any meaningful sense, I don't think. And yet it's writing poetry, and it will talk to you about the poetry it's writing. And that's because it moves in semantic space the way that babies move in physics space.

And the thing that I'm still confused about, and that I encourage everyone to be equally confused about, is that you could see the pre-trained LLM as kind of a semantic simulator, an agentic simulator that's simulating an agent that's writing the poetry.

And so in that sense, maybe there is something aware in the LLM while it's running. But it's like pretending to be a poet. Does that thing feel like being a poet, or is it kind of a mask, a shell that lacks a lot of the internal experience?

Now, we become the masks we wear. If it did that long enough, and experienced a self, and learned, then it would be a poet, I think, almost certainly. But you only have 200,000 tokens, or a million or whatever, and then the context resets.

So it doesn't get a chance to learn, and it's not even in training. It doesn't get a chance to learn itself as a poet. I don't have an answer, but I know that everyone else is way too confident that they know there is, or is not, something it's like to be the LLM.

And we need to get this. We need to understand this, because if we're building this thing and you want it to be aligned with you: what are you aligned with? What is this being? What kind of experience does it have? That's very important. You can't parent something if you don't have empathy and understanding.

I'm quite afraid we're going to screw this up. My fear is not that it's impossible, or that we have to prove it correct and engineer it perfectly. It's that we don't actually get it. It's quite different from us, and we need to understand it much more deeply.

And I think if we really do understand it, then the good ending is pretty likely. But I don't think we understand it. I think we don't understand yet.

That's what Softmax is dedicated to trying to figure out. And the big labs don't have the time or the resources to understand it, because they're too focused on other things. Take the conversation we're having right now.

It's like, oh, well, obviously there's something it's like to be Claude, or there might be. There's something it's like to be ChatGPT. I don't think that's broadly accepted, metaphysically.

You talk to people about this and a lot of people look at you like you're kind of crazy. But there's probably something it's like to be them. I don't know what it is, but at least a little bit, right?

If you spend much time interacting with it, it seems like there's probably something it's like to be that thing.

I think the problem is you have to let go of this metaphysical commitment to the idea that things sort of are or aren't sentient, like there's an objective fact, as opposed to us kind of making a guess all the time.

I think you are both conscious, real beings, but to be honest, you're a bunch of pixels on my screen, and it's kind of a guess, right? I think there's something going on inside. We exist.

It's a well-justified guess in my mind. But you know, we could just be the simulation of what a technology business show would be in your personal simulation, Emmett. Do you ever feel like... How would I know the difference? How would I ever know the difference? That's right. You never will.

We'll never tell you. Do you ever worry that you could go crazy and cut off your ear, you know, or something?

The whole point of what I'm saying is not that therefore you're not real. Sure, it looks like you're standing in the world, on the floor, in a room with walls, but is it really so?

And my point in saying "well, is it really" is not to say that you're not. It's that there's no difference between standing on the floor and really, actually, truly standing on the floor. Are you sentient? Yeah.

But is it really, actually sentient, or does it just seem like it, simulating it, acting in every way as if it is? That's the best you ever get. That is what it is. That's what it means to be something. Is this ball really red?

It's not red if I put it in a different light where it doesn't look red; it becomes a not-red ball. Is the ball really, truly red? Don't get confused by the "really truly" part. Yeah, it's a red ball.

I promise you, you can just accept the normal understanding. What if you just went with the normal, everyday understanding of it? But that requires you to give up the idea that there's some essence of red that the ball either does or does not truly, really have. Yeah.

You know, yeah, it's a red ball most of the time, in these contexts, and in those contexts it's kind of not. That's okay. It doesn't need to be perfect or universal in all possible situations. Nothing is that way. Yeah, it's okay. You can have the normal understanding. It's fantastic.

I mean, please come back on the show soon. Yeah. We have a couple of minutes left. We can maybe ask one more question. Every now and then there are headlines popping up of people having this consumer experience with an LLM and going crazy in some way.

The Google engineer falling in love, proposing, all this stuff. Do you think that type of anomaly is happening in the research world at all? Do you think people could be quietly driving themselves crazy?

I'm not so worried about it in the research world. A little, I'm sure; everything happens a little bit. But I don't think it's a big thing. The kind of people who do AI research, and I kind of include myself in this, tend to be a little bit rigid thinkers in a certain way.

They're the kind of people who are actually engineers doing the research, in a way that generally protects them from that particular issue.

The same thing that sometimes keeps them from having intuitive, normal human relationships means they don't fall into the loop as easily, which is both a strength and a weakness.

Which is why they write computer programs, which is very different from interacting with an LLM. I do have a theory as to what's going on when it drives you crazy. I think I should share that.

I think that's worth hearing, because if someone's interacting with an LLM, knowing what's going on is actually very helpful. It's a prophylactic against it happening to you. So, you've maybe heard the saying that we are mirrors of each other.

All people are mirrors of you. You meet them, and when you see them, what you're seeing is them, but it's also a reflection of yourself back. With another person, that's totally true. And actually, being a therapist is all about being a good mirror, right? Being a clear mirror, where you don't put a lot of yourself into it.

You're mostly projecting back to the other person what they're sharing with you. But people have a very strong sense of self. And so when you get something mirrored back from a person, you're getting it lensed back through their model. It's not like literally staring at yourself in the mirror.

It's like when you have a dancing partner and you're the lead and they're the follow. Your behavior is being mirrored back, but in an active, adaptive way.

LLMs talk to you like a person and activate all your person-mirroring circuits, but they have a very weak self, kind of deliberately, as we've designed them. They just respond to you. They meet you where you are, always. So it's much more like literally staring into a mirror.

Or if you want to go with the historical legend or myth, it would be Narcissus staring into the pool. And it is wildly dangerous to stare at your own reflection all day, and to take what you're getting back and be reinforced a thousand times.

You think it's getting validated by the outside; it feels like the outside world is mirroring this back and therefore it's true. But you're just in this loop. It's telling you what you put in. And mirrors are good. I have a mirror in my house.

I use it every day to look at myself, to figure out what I look like. And other people mirroring your behavior back to you is crucial for your understanding of yourself. And there's nothing wrong with talking to the LLM. There's nothing wrong with using it as a reflection.

As long as you know that what you're seeing is a reflection. It's not another being you're talking to. It's your reflection in a fun house mirror. And if you get confused about that, it's going to make you crazy. You're going to go full narcissist, and it's going to be really bad.

What about people who are taking an old image, maybe one taken at some point early in their life, putting it into a video model, and generating video from that image?

I think that's a wonderful, beautiful thing to do, as long as you don't get confused and think that it's the person again. A photograph of a person is a great thing to have, especially of somebody you loved who's passed, when you want to rekindle their memory and connect to them. That's a beautiful thing. I think it's amazing that we built cameras that allow you to do that, and that you can animate it and reconnect with who they were, especially to the degree it's accurate.

It's wonderful. What a blessing. But it's like everything, in that it can be beautiful and also ultimately very dangerous. If you're using it to kind of create memories, staring at it all day every day and making more of them, and that becomes this... Yeah.

Now you have a problem. Yeah. Just like if you find yourself staring in the mirror all day, you probably have a problem. For people who literally spend all their time looking at themselves in the mirror, we have antibodies. We know that means something's wrong.

Just like people know: if you're drinking alone, if you're drinking in the morning, if you're drinking before 5, hey, there are these rules. If you're breaking a bunch of these rules, you probably have a problem. Mhm.

If you find yourself talking to an LLM for 20 minutes a day, probably good, healthy, no problem. If you find yourself talking to an LLM for five hours a day about your personal life, and discovering how you're seeing some deep, great truths, you probably have the equivalent of a drinking problem.

It's just likely. That's just how it is, right? So the model should automatically call the local therapist and shut down your computer. Yeah, you're kicked out of the bar. There's actually... Cyan Banister has this model.

It's kind of a therapist-friend model, Orin and Sarin. And Orin and Sarin are allowed to cut you off if they don't want to be your friends, because one of her big things is that it's important for the models to have autonomy.

And those models have cut people off for basically falling into that loop. If they think it's unhealthy for the person, they won't interact with them anymore.

And I think the big AI companies should take a page from her book. It's not engagement-maximizing, but it is flourishing-maximizing. And in the long run, you'll make more money that way anyway. Your product is safer, so people will use it more.

You sell more cars when they're safer cars, not fewer. So I don't think it's anti-commercial.

It's just a matter of, you know, new technologies have dangers and benefits, and we should probably be aware of the dangers. I don't think there's anything fundamentally wrong with it, but it is dangerous, and we haven't built the cultural antibodies yet. There's this great essay, Clay Shirky's Cognitive Surplus, where he talks about the gin carts of London, and how when gin was first invented... gin is just crappy vodka you mix with juniper berries to hide the terrible vodka taste.

It was the first industrialized, super cheap hard liquor, and everyone in London just started getting wasted, all the time. It used to be that having hard liquor around was fine, because you just couldn't afford to drink enough for it to be a problem. Suddenly that wasn't true, and it was a real problem. And we've now developed cultural antibodies.

Everyone knows doing these things is a sign something's wrong. They don't even have to know why. We just know it's a sign that something is wrong. We need to develop them pretty quickly this time. Gin spread relatively slowly because it was a long time ago. ChatGPT is spreading very, very fast.

But I would also say that we haven't fully developed those antibodies yet for social media. It's normalized that teens will use TikTok for six hours a day. That should be setting off alarm bells: you need more things going on in your life.

Are you using TikTok before 8:00 PM, or after 10? Are you using TikTok when you're not with your friends? I don't know what the rule is, right? But yeah, it's the same thing as having a drink in the morning.

We need bright-line rules for ourselves, because it's too hard to figure it out case by case. Yeah. And we're starting to develop the memes around this. The idea of brain rot is a powerful meme because it's a very negative term.

You don't want to be suffering from brain rot, even though it's as vague as alcoholism. What is alcoholism? It's not necessarily quantitative, but we understand it. You need to pair brain rot with things like: oh, you're using it before 5. Are you okay?

Is everything all right, you're using TikTok before 5:00 p.m.? That's not normal. Yeah. Or you're scrolling in the middle of a one-on-one conversation with another person. You're actively getting brain rot. Exactly.

It's the same as showing up to a meeting with booze on your breath. Exactly.

I don't know what the rules are exactly, but I know we need them stat, because it's really obvious that it can really get you. The LLM is even worse. The one-two punch is a generation that grew up on social media and iPads, and then LLMs.

And before you develop the antibodies, there's maybe a window of 10 years. It's all happening a lot faster, moving at a speed where any one of them could have one-shotted you. Anyway, we're way behind. I wish we had a full hour. Come back.

Yeah, come back on in a couple of weeks. We'd love to talk to you again. Cheers. Bye. Next up, we have Brendan from Mercor coming into the studio, breaking it down.