Sequoia's Sonya Huang on AI application layer winners, humanoid robotics timeline, and enterprise AI battles

May 9, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Sonya Huang

both the early stage and mid-stage startup market. Obviously, as a venture capitalist, she is investing in a lot of interesting stuff today. So, welcome to the stream, Sonya. So good to meet you. Thanks for having me. You know, Alfred was bummed. His jacket was Hermès and he was sad to lose it.

And I was like, "Don't worry, I think you got the better trade here." Oh, wait. So, that was a permanent swap? He gets to take it home. Permanent swap. It's hanging off the side. Okay. That's fantastic. Yeah. It needs to be framed. I mean, it's iconic. Iconic. Yeah.

I mean, I want to buy a leather jacket just so if I'm ever in the same room, I have something to swap. We'll get you the dates. Don't worry. Fantastic. Yeah. Give me the rundown of AI Ascent. Is this something that happens every year? Was this year special? Obviously, it's a bigger trend than ever.

Who were some of the interesting speakers? Give us the open. Well, I'll give you the origin story. I'll take us back a bit. We invested in OpenAI back in 2021.

So, this was back when it was very much, you know, a few guys using the API, definitely no ChatGPT yet, but we just felt like they had invented magic.

And we wanted to make sure that our portfolio companies would be kind of the first to be able to see that, play with it, and transform their own businesses. And so we set up a field trip for about 40 of our portfolio companies to go visit OpenAI back in May 2022.

And so that was pre-ChatGPT, and everyone loved that field trip. It was like people were playing with DALL·E for the very first time. You know, this was when there was no public access yet. Starting to build with GPT-3.

And so our founders loved it, both for the inspiration element, like, oh my gosh, we are in the belly of the beast of the thing that is building the magic, but also from a very tactical perspective: here's how we should be using this stuff to transform our businesses. And so founders loved it.

They asked, you know, can we bring this back next year? Honestly, I'm not a party thrower. I hate throwing events, and so I was like, oh man, do we have to do it again? But everyone wanted to do it.

So we did our first kind of non-OpenAI-specific event, more across the entire ecosystem, the following year. That was right after the ChatGPT moment, and so this is our third year in a row throwing that event. We've had amazing speakers. Sam Altman has spoken at every event. We've had Jensen twice.

Even the audience is incredible.

In our opening talk, we called out what we viewed as the biggest AI product innovations of the year, and we had NotebookLM, Deep Research, Sesame. Some of the biggest innovations, and those people all just happened to be sitting in the audience, so it's a lot of firepower in one room. That's great. What are the top discussions that people are debating right now? I mean, from talking to people on the show, there's this idea of the pre-training wall, needing to move into more RL-focused techniques to get the next level of breakthroughs.

Is that the right question to even be asking? Were people debating it? And do you have a take on the idea of hitting a pre-training wall? Yeah, totally. So we had Noam Brown speak at last year's AI Ascent, and we had Dan Roberts this year.

They're both on OpenAI's Strawberry team, and we had actually gotten a preview of this from Noam before he even joined OpenAI. He had done a lot of research historically in AI gameplay.

And if you take the lesson from Go, for example, with AlphaGo, which I think was one of the seminal results in reinforcement learning: the top humans are at something like 3,500 Elo at Go, and the best bots are at something like 3,000 before you give them access to inference-time compute.

But if you actually let the model sit and think for a minute before it places its piece, you can get that Elo up to 5,500 points. So, way better than superhuman.
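To see how lopsided those rating gaps are, here's a quick illustration (mine, not from the talk) using the standard Elo expected-score formula, plugged with the figures quoted above:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Figures quoted above: top humans ~3,500; bots ~3,000 without
# inference-time compute; ~5,500 with it.
human, bot_plain, bot_thinking = 3500, 3000, 5500

print(expected_score(human, bot_plain))     # human heavily favored over the plain bot
print(expected_score(bot_thinking, human))  # thinking bot near-certain against the human
```

A 500-point gap already means roughly a 95% win rate; a 2,000-point gap is effectively unloseable, which is what "way better than superhuman" cashes out to.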

And so the key insight there is that it's diminishing marginal returns on multiple vectors, right? But once you've hit diminishing marginal returns on one vector, pre-training, if you start scaling post-training from there, and specifically inference-time compute, you get something like a 100,000x scale-up in performance.

So we're just starting to climb that second curve.

I think OpenAI deserves a ton of props for seeing that and investing decisively behind it, because if you talk to a lot of researchers in the ecosystem, even a few years ago it wasn't obvious that the lessons from AlphaGo could apply to the LLM world.

And so, you know, if you went to visit research labs a few years ago, it was like: there's the RL group and there's the LLM group, and they're not the same people. It's very different.

And I think Noam and various other people really pushed forward that vision, and I think Sam invested heavily behind reasoning infrastructure, because the hard thing is that scaling up reasoning infrastructure is different from scaling up pre-training infrastructure.

And when I talk to my friends, a lot of them have joined certain labs, like OpenAI and xAI, that have really invested ahead in reasoning infrastructure, because it is such an important vector for scale.

Yeah, I was always wondering about the AlphaGo pre-training scaling law and wondering, okay, we have all this major compute. What happens if we go back and train AlphaGo on 100,000 H100s? Are we going to get even better, or have we actually topped out?

And it sounds like we basically did, and we kind of learned that lesson. But is that a refutation of "scale is all you need"?

Or of the bitter lesson? Or do you see it as just a continuation of that theme: that we will need to continue scaling, and it will just be a new algorithmic paradigm on top of what we have, then scale that, and then another one, and then scale that? Because when I hear a 100,000x improvement in test-time or inference-time compute, that sounds like a lot of data centers.

Yeah, it's a lot of data centers. I'm very much an AGI maxi, very pro bitter lesson. And I think that this is, you know, just another vector that we're going to scale on. And it's not like pre-training is dead, right?

It's like, from an Economics 101 perspective, you go to wherever the marginal cost of the incremental unit of intelligence is lowest. And so right now, a lot of that is on reasoning, but I think it's going to shift to other vectors as well.

Are you seeing that in image and diffusion models as well? It feels like image generation in ChatGPT is doing something different. It feels like they're layering a few different techniques together.

I was playing with the text, and the text is so good now, but I was trying to get a snake to weave in and out of the text, and it was kind of getting confused, and I was like, I feel like there are some layers going on here or something.

I'm trying to understand it, and I'm just wondering, you know, we might be past just the big-transformer paradigm of LLMs and text responses in the reasoning era. Are we evolving past the big diffusion model in image generation as well?

So I'm not a researcher, but I get to talk to a lot of smart researcher friends.

My understanding is that it's a combination of a transformer and diffusion architecture, and I think that most people don't believe that diffusion models will fully get us there, whereas transformers have a lot more juice in them. Sure.

And so even if you look at it, it's not just image; it's video, it's robotics. A lot of those have transformers as their backbone. Yeah. But I do think, you know, there's so much happening in the harness around the model, right? What's the for loop that you run the model in?

What tools do you give it access to? We did a little poll at AI Ascent on what innovation is going to drive the most progress in the AI ecosystem in the next 12 months, and the biggest answer was MCP and tool use and forming an ecosystem around that.

So I think the models themselves get smarter, but they're also surrounded by a big ecosystem. On MCP, how are you viewing that as a position in the market? Is it just a standard? Is it just an API? Is it a framework, or will there be companies that build around it?

Are there going to be open-source frameworks, such that we then find a Red Hat of MCP and it winds up being a big company even though it's mostly open source? How are you thinking about that from an investor's perspective?

Yeah, I very much see it as a protocol, something for the industry to standardize on. And so I think there are obviously some benefits that accrue to Anthropic from having steered that. Yeah.

But I think it being an open standard is really important, and you know, that's why a lot of the other big model labs are standardizing behind it as well. And so I think it's a net positive for the ecosystem. There's a bunch of startups spinning up trying to make money off of it in some way.
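For concreteness: MCP is built on JSON-RPC 2.0, so a tool invocation is just a structured message any compliant client can send to any compliant server. A minimal sketch of what such a request might look like on the wire; the `tools/call` method and field names follow my reading of the spec, and the `search_docs` tool is hypothetical:

```python
import json

def make_tool_call(call_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP-style tools/call request as a JSON-RPC 2.0 message."""
    request = {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",  # method name per the MCP spec (as I read it)
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool name and arguments, purely for illustration.
msg = make_tool_call(1, "search_docs", {"query": "quarterly revenue"})
print(msg)
```

The point of standardizing on a message shape like this is that any client can drive any server's tools without bespoke integration work, which is why the open standard matters more than any single vendor's implementation.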

I don't know if I'm really bullish on any of them having a shot at it. I mean, I think there are certain people that have an infrastructure advantage, and Cloudflare, for example, is making a big play. Then maybe I buy that.

But if you're, you know, a small startup trying to spin up an MCP shop, I just don't really see the right to win. Yeah. Maybe the value accrues to, like, McKinsey coming in and saying, "Hey, we're going to help you implement an MCP server for your existing Fortune 500 company," or something.

" Was there any interesting conversations around benchmarks and do you think they were talked about more or less than the last year? Um, I would say like this is an audience that like very much, you know, knows that the benchmarks exist and like doesn't really care too much about them.

We ran a poll of, you know, if you could only use one model for the rest of your life, what would it be? And OpenAI was by far number one, more than 50%, even though if you look at where they are on the model leaderboards, they're not there on LMArena.

And so I think the benchmarks are a little bit saturated. They're not really the vibes test. I think people care a lot more about the vibes test right now. Well, yeah. And just end-user value, right?

Well, speaking of Benchmark, do you have a take on the Manus investment that's kind of burning up the internet right now? Oh, man. Seems like an odd choice in 2025. I'll give you a hot take, John. Apparently, the US Treasury is examining Benchmark Capital's ties to Manus.

So feel free to pass on the question if you don't want to talk about it. I will say they've built really cool tech. I think the devil's in the details for what exactly, you know, where's the user data, etc. And I would imagine that they did their homework, but I don't know. Yeah.

Switching gears, the big story this week around OpenAI, obviously after the event, was the new CEO specifically coming in to focus on applications. I'm curious if you could highlight any of the conversations around value accrual.

You know, Sam, I think, has been pretty explicit in the past that if you're building products with the assumption that the models are going to continue to rapidly get better, you're probably in a good spot. If you're not, maybe you're going to struggle or get made redundant at some point.

But I'm curious what the general vibe was around that. Yeah, I would say, I mean, the memetic cry in the venture ecosystem right now is just: the value is in the application layer, the value is in the application layer. I actually have a great meme on this.

We had the meme on "the value is in the application layer." And then we had Jensen in the audience, and I just had a picture of Jensen on top of Scrooge McDuck raking in all the dollars.

But I agree. We very much think the value is in the application layer, and it depends how you want to play it, right?

I think there's going to be a place, at least in the near term, for vertical agents applied to a very specific sector, and we have a bunch of those companies in our portfolio: Sierra, Harvey, OpenEvidence.

I think what the foundation models have proven, though, which was debatable a couple of years ago, is that they have every right to win the application layer. It wouldn't have been obvious that a company building foundation models could figure out the application magic.

But ChatGPT is, to me, a runaway freight train in terms of consumer adoption. Some of the metrics they've published, like 300 to 500 million weekly actives year to date. It's just phenomenal user growth.

One of the things that Sam shared at our conference about how people are using ChatGPT was really interesting to me. If you're old, you're using it as a Google replacement. So I'm old.

If you're in your 20s or 30s, people tend to be using it more as a life advisor, a life coach type thing. And if you're really young, the youths are using ChatGPT as an operating system. And I found that framework really interesting.

And you know, especially in combination with the fact that they're clearly building things around memory, around tool use, around connecting to your other applications, it really does feel like, if you're a young person really connecting with ChatGPT and mind-melding with it, that use case seems really interesting to me.

And so if you think about Google as the front door to the internet, a $2 trillion American company, right? It feels to me that ChatGPT is, in many consumers' minds, the front door to AI.

And as the ceiling on what AI can do goes up, we each deepen our product usage. A couple of years ago, at the first AI Ascent, I posted this chart of the ratio of daily to monthly active users for ChatGPT and some of the other mainstream mobile apps.

And the punchline at the time was: usage is terrible.

It was like a 14% DAU/MAU ratio, if I remember: people kicked the tires a little bit and then churned. And we've been tracking externally, from data science signals, those DAU/MAU ratios increase. It's pretty crazy: DAU/MAU is now in line with Reddit, and it's approaching Google levels. And so if you think about Reddit as super engaged, like you're in there having multiple conversations, you see that same behavior, both anecdotally and in the data, for ChatGPT. And so, to your question of where value will accrue:

We think it's in the app layer. Yeah. I think a lot of the horizontal app-layer opportunity will be won by foundation models like OpenAI and xAI. And then a lot of the companies we're backing are going after very specific vertical opportunities.

Are you particularly bullish on enterprise applications as independents?

I mean, I'm thinking about how Google very much won consumer search, but in legal there was LexisNexis, and then Palantir helps governments search through data sets, and there are all these different enterprise use cases that Google wasn't able to go after.

We were even talking about Armada, which is an enterprise service built on Starlink.

The Starlink team has dominated in consumer, but the needs of certain enterprises are just too unique, and so there's actually a business to build there. That seems like it maps to your recent strategy, but what do you think? Totally, totally. I think the enterprise war feels like we're still in the first inning. It's not clear who's going to win yet. In our portfolio, Glean, for example, has done a really amazing job as a horizontal chat platform that connects to all your enterprise data. But then it's a question of: how deep do you need to go?

Like, you know, Harvey has a ton of legal-case-specific data, or Sierra has a ton of customer-support-specific data on workflows. And so I think the enterprise AI battlefield is very much a work in progress at this time.

I think the shape of workflows and problems is so diverse in the enterprise that my guess is a lot of these companies will be successful. Yeah. How are you thinking about humanoid robotics? I've talked to a lot of high-flying companies that feel like they're mostly renders at this point.

Then there are some really amazing researchers working on stuff. I'm personally not waiting for the demo of one humanoid walking around, because we've seen those from Boston Dynamics for decades.

I want to see the satellite photo of the data center that's getting built out to do a massive training run for an end-to-end robotics model, and I haven't seen that yet.

Is that the right signal to be looking for for takeoff in humanoid robotics, or is there something else I should be tracking as we go into this humanoid rollout, which feels like it could be tomorrow or two decades away? It doesn't feel like tomorrow to me. I was going to say the same thing.

I'll be generous. I can't share a lot of the details, but let's just say I think the humanoids are a lot closer than we may think they are. Okay.

And it's like, I thought this stuff was science fiction, and I've talked to a lot of smart people who have told me and shown me things that have made me realize, wow, this is probably on the timeline. I would guess that in two or three years this stuff will be in tens of thousands of households at least, maybe more.

Yeah. Wow. And that actually echoes the timelines that I think Sam set out on stage. You know, we had Jim Fan from Nvidia on the show.

He's like, you know, there's the digital Turing test: you don't know whether it's a human or a computer behind the screen when you're talking to it.

He's like, the physical Turing test is when you leave your house and it's a mess in the morning, and you come back and it's all perfect and cleaned up. And he also thinks we're about to get there.

For him, a huge part of the breakthrough is going to be synthetic data pipelines. Sure. Because robotics, unlike LLMs, just doesn't have internet-scale data to train on.

But one of the amazing things about what's happening in AI right now is that, with LLMs but also more specifically with these generative world models, you can generate tens of thousands of variations of the same environment to simulate, and these robots can get better and break through that data barrier extremely quickly.
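The "tens of thousands of variations of the same environment" idea is usually called domain randomization. A toy sketch of what a simulator-side sampler might look like; the parameter names and ranges here are made up for illustration, and a real pipeline would randomize a physics simulator, not a dict:

```python
import random

def randomize_environment(rng: random.Random) -> dict:
    """Sample one randomized variant of a pick-and-place environment.
    All parameters and ranges are hypothetical."""
    return {
        "friction": rng.uniform(0.4, 1.2),        # surface friction coefficient
        "object_mass_kg": rng.uniform(0.1, 2.0),  # mass of the target object
        "light_intensity": rng.uniform(0.5, 1.5), # lighting multiplier
        "camera_jitter_deg": rng.uniform(-3.0, 3.0),
    }

rng = random.Random(0)  # seeded for reproducibility
variations = [randomize_environment(rng) for _ in range(10_000)]
print(len(variations))
```

Because each variant is just a draw from a few distributions, generating tens of thousands of them is essentially free compared to collecting real-world robot trajectories, which is the data-barrier point being made above.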

And so, you know, I was personally a robotics bear for the longest time, and I talked to a bunch of very smart people in robotics over the last week, and I've flipped. I feel like the humanoids are coming.

And in terms of the capital intensiveness of actually getting there in a few years: obviously, it's very expensive to build a factory that produces robots.

Do you think we're also going to see raises from humanoid robotics companies where a ton of the raise goes into Nvidia GPUs to build a huge data center to train some massive model? Because, yeah, I agree with you on the synthetic data. You could wind up with web-scale data, trillions of tokens, like we've seen with the GPT-4 training run. I think it's going to be extremely capital intensive, and it reminds me of autonomous vehicles five-plus years ago, where, at the end of the day, we have Waymos and we have Teslas driving around, and those companies had enormous economic engines to support the development.

I think the same is happening with humanoids right now. And so my guess would be that it's not just Nvidia GPUs, it's everything, right? Because you're co-developing the hardware, and you still have to collect a ton of data on actions, and so it's a ton of spend everywhere.

Yeah. I mean, huge trends in AI broadly. It feels like all the metrics are up and to the right. At the same time, valuations are very high. What's your overall take on the venture market? Are we in a bubble right now? Can there ever be too much venture capital? All the key questions. Never, John. Never.

There's too much venture capital already. Look, I'll say a lot of companies are raising on what I'd call vibe revenue right now. It's pilots being counted as revenue. It's really, really terrible retention stuff.

And so once you peel past that, I think there's a cohort of companies that are growing high-quality revenue at the highest pace we've ever seen. And that includes, like I mentioned, OpenAI, but it's also companies like Glean and Harvey and Sierra.

And so, to the extent valuation is a function of how much you have de-risked, how much product-market fit you have, what the growth rate of your business is, and what the ultimate TAM potential is:

I think these AI companies are demonstrating growth rates outside of anything we've ever seen before. And then the TAM potential: if you're able to get outcomes-based pricing, you're selling into a services replacement, not a tools replacement.

It's a TAM in the trillions, right? And so, yeah, sorry to interrupt you on that point.

Something I'm curious about is: yes, if you have an AI tool that can replace services spend, you can capture some, ideally a lot, of that market. But the thing I keep coming back to, and maybe this isn't the right way to think about it, is that you're not simply competing with the humans delivering those services. A company will also be competing with other AI tools that have a similar cost structure.

So does that not, over time, just drive the dollar amount that you can capture down to something that looks more like a software market? We have this debate all the time. So I'm glad you bring it up.

I think it really depends. If what you're doing has really low switching costs and really low differentiation above what the models provide, then yeah, I think that margin is going to get competed down.

And so I think that's why we've historically debated a lot of these GPT-wrapper companies. I think that, you know, if you're building something that's really hard to build, or that's integrating into a customer base that wants to choose an AI champion and move on with life...

Which, by the way, happens. We just did a bunch of references in the healthcare transcription market. You talk to these healthcare CIOs, and they're like, I'm choosing one transcription vendor. I'm not ripping that thing out for the rest of my life.

And so I think there are nuances to the stickiness of these things. Like, I work with a company called Gong. They're a sales AI company. And the theory was always like: transcription. Love it. We love gongs on the show. You baited us. It's a huge part of the brand.

Oh my gosh, it's amazing. And they have such a quirky brand, too. There are so many gongs around their office. But the theory was always that transcription should commoditize, and I think that very much hasn't happened. Sales teams standardize on them. They standardize their processes on them.

They train all their reps on it. And it has all your data. So I think the theory of how you build moats is different from where the rubber meets the road in terms of how these companies, in practice, do build the moats.

But I think you are in a run-like-hell business, because, you know, we backed an AI DevOps company, an AI troubleshooter, and there are four other companies trying to do the same thing right now.

And so we are kind of in the run-like-hell segments of the market right now.

How do you think about private equity stepping into the AI race? We've seen a few venture-backed approaches where the idea is, instead of the Harvey approach, let's buy law firms. And about a decade ago, Justin Kan was working on Atrium, this kind of tech-powered law firm, where it actually was a law firm, but Harvey has made the choice not to be one.

What is your take on private equity dipping their toe into more venture-scale opportunities, and venture investors starting to look more at private-equity-style rollup deals? Yeah. Well, I came from private equity, so this is something I think about a lot.

I think it makes a ton of sense, and it's a continuation of the private equity play, right? A lot of the investments I did in PE only worked because you took 20% of the cost out. And so now you have a much better tool to go and do that, but it very much is the playbook, and it's what they're best-in-class at.

And so do I expect that they'll be great at adding AI to the arsenal for how they get those margins up? Absolutely. I think that when I think of the businesses I'm excited to invest in, at the end of the day, it's who's creating gross profit.

It's gross profit dollar creation. And you can choose to do that by investing in the billion-dollar-revenue company and taking their cost down 10%,

or you can choose to do that by backing that amazing devtools founder who knows how to build the AI-native devtools company that's going to create a hundred million dollars of revenue right off the bat.

And so I personally very much like us backing the founders that are creating new revenue dollars and gross profit dollars, but there are multiple ways to play.

I will say, you know, I had to really retrain my brain when I went from private equity to venture. Everything, the way you operate, is just so different. And so I do think it takes a different type of culture to operate a rollup or a cost-out strategy versus investing in startups.

And so while I agree with the strategy of PE firms doing their thing and venture firms doing their thing, I have a question mark on the blurring of the core competencies.

How do you think about the different businesses that Sequoia is running right now, from early stage to growth stage and beyond? How do all these things play together in the strategy? We've seen some venture firms dip their toes into other models; General Catalyst bought a hospital network.

Lots of people are thinking outside the box these days. There are the crossover funds. What do you think Sequoia does best, and what are you excited about in the future? We're not buying any hospital chains yet. Okay.

I would say, if you think of our strategy, it's seed to IPO and beyond for the most ambitious entrepreneurs in the world. And so sometimes we're able to catch them early at the seed, like Airbnb, like Stripe. And sometimes we catch them later on in their journey.

But the point of adding additional pools of capital to our fund strategy has been that when we find a winner in our portfolio, for example, a SpaceX, we want to be able to invest a lot of money behind that company as it goes on its journey.

And the reason for the Sequoia Capital Fund is that even after these companies go public, we think a lot of that return is still to be had.

And so we've modified our structure over the years to be able to support these companies as they grow, become later stage, and go public. But ultimately, it's invest at the earliest point of conviction, and ideally that's at the seed. Makes sense. Jordi, do you expect to see more?

Sam had some interesting quotes over the last week or so. I don't know exactly when they were happening. Talking about the cost... Sam Altman, that is. Which Sam? Sorry, sorry to the other Sam, Lessin. Yeah. Yes.

No, the quote, to summarize it, was something to the effect of: the cost of AI, or the cost of intelligence, will just converge on the cost of energy, or electricity.

I'm curious if you think that is a potential area where you expect to see more net-new early-stage startups exploring, because it probably hasn't gotten enough attention. There used to be nuclear. And we were talking about this with Shaun Maguire.

We were saying there is no Elon of energy yet, but it feels like the last massive market that no tech founder has really gone and dominated in kind of the founder-mode way. We were talking about how big oil is still a bunch of huge companies. You can't name any of the CEOs. They're not really in founder mode.

They're kind of boring and maligned, and it feels like there's an opportunity there. But yeah, sorry, that's a lot. No, I mean, that's a great question. We had Chase from Crusoe on our podcast, and I'll put in a bet that Chase might be that Elon-like figure.

He shared some stats that were amazing to me. I'd always kind of thought about AI from the "oh, I can generate cool Ghibli images" perspective, but I didn't realize the sheer extent of the industrial buildout that is happening to support all of that.

And so Chase shared that typical data centers today are like 20 to 40 megawatts, and the biggest data centers in the US are in Northern Virginia; the aggregate capacity there is four and a half gigawatts.

Chase at Crusoe himself has 20 gigawatts in the pipeline right now, more than two gigawatts built out. And so the sheer scale of the buildout right now is just like nothing we've supported in the past. And the bottlenecks are moving around. It used to be actually impossible to get chips.

A lot of that is easing, and power is the new bottleneck. And so that's why there's so much happening in West Texas right now, in Abilene, where they just have this massive overbuild of renewables, especially wind.

And so I think you'll very much see a lot of the AI buildouts following power and energy, because that ultimately is the binding constraint right now. Makes a lot of sense. We'll let you go. This has been a fantastic conversation. It went all over the place, but we'd love to have you back.

This is so amazing. Come to our AI party next year. And I heard you were asking Andrew about swag. We have these amazing scented candles. I actually have one on my desk. I'll send one your way.

It was evidence that I didn't know our audience at all, but I enjoyed the scented candles very much. I love scented candles. Mother's Day is coming up, so you know. I know your segment on the Himalayan Birkin. I was like, I love these guys. Yeah, they're going to be flying off the shelves. Yeah. Yeah.

After we do the show, everyone's going to go out and get one. Anyway, thank you so much for stopping by. We'll talk to you soon. Have a great Friday. Next up, we've got Will from Slow Ventures, the other side of Slow Ventures, Sam Lessin's business partner.

Obviously, he's been on the show many times. We had to swap him out. We're replacing Sam with Will from Slow Ventures. Very excited to have him on the show. I've been digging into a bunch of those questions. I still want to know more about the robotics timeline. I'm going to try and dig into that.

I still need to talk to more researchers about how image generation in ChatGPT works, because I feel like there's something going on there. You see it with the text models that there are very clearly certain filters running on top.

You get these weird rejections with the images, where sometimes it will just... The Studio Ghibli thing is bizarre, because Studio Ghibli is real intellectual property. Studio Ghibli is a real company, and when you say that, it doesn't say, "Hey, this violates our intellectual property rules."

But if you ask it to generate a picture of Superman, it'll say, "Hey, that's copyrighted." And so I'm wondering if OpenAI did a deal with Studio Ghibli behind the scenes or something, or maybe there's some definition of how the IP shakes out. But hopefully we're going to have a lot more AI