OpenAI CPO Kevin Weil on Codex, GPT-5, sycophancy postmortem, and why personalization is the next frontier

May 16, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Kevin Weil

doing? This is wild. I just joined a Zoom and I'm live with you guys. We're live. Yeah, it's great to have you. Welcome. I was watching you on X and thinking how weird it was that in 30 seconds I was going to be on the screen. This is awesome. Yeah, we're leveraging this thing.

It's called the internet and technology. But we'd love to get you up to speed on it. It's a funny story: Tyler Cowen called into the show before one of your guys' earlier releases, and he was basically saying, "AGI's here, it's a couple days out," but he couldn't get his camera working.

And it was this funny dichotomy: AGI's here, but the internet is still flaky. There was actually another day when we had to basically take the whole show down because both Zoom and Google Hangouts went completely down, like, nationwide.

And so we were like, well, we can't do our show now, I guess. The running joke is that audio-video is ASI-complete. Yes. Well, everything else in the world is solved. Yeah. Wait, so, I mean, I think long term for the show, we need to go proprietary. We need to build our own video streaming stack.

Can you help with that? Codex. Codex. You know what can help with that? Yeah, Codex can help with that. Break it down. How would I use Codex? And by the way, you get to talk about AI, and then you're going to have Blake from Boom here. So you get to talk about supersonic flight afterwards.

You guys have the coolest lives ever. It's full stack. Full-stack technology. But yeah, can you take us through the announcement?

Uh, break down exactly what launched, when it's available, to who, what you're excited about, and then we'll go into some of the trade-offs in the product design and development. Yeah, let's do it.

So, we just launched this morning Codex, which is a cloud-based software engineering agent that can work on many tasks in parallel. A lot of folks are used to the Cursor/Windsurf style, you know, the GitHub Copilot style of AI development, which is really more about augmenting a single engineer, right?

You're writing code in your IDE, and you can press tab, tab, tab and it'll autocomplete, and you're, you know, 10, 20, 40% faster. Yep. With Codex, you actually have a software agent that runs in the cloud that can do entire tasks for you.

So, you give it a task, it goes off and does it, and suddenly you have a PR and you didn't work on it at all. And it's powered by a version of o3 that we fine-tuned specifically to be really good at these kinds of hard software tasks. We're super excited about it. It launched today inside ChatGPT.

So, you can use it; it's rolling out now to Pro, to Enterprise, to Teams, and it'll make its way to Plus in the coming weeks. Mhm. But the cool thing is it's an agent. It's a software agent in the cloud. So, you know, you're using it today from ChatGPT.

You can imagine using it in the future from, you know, your terminal, your IDE, all kinds of other places, and even, you know, hooking it up via API to your bug queues and just having this software agent churn through every single bug that you have.

You know, for each one, it can look at the context of the bug, understand your codebase, and then proactively suggest how you would fix that bug and give you a PR that you can review. So the world of software engineering is changing. We're super excited about it. It's a research preview.
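The bug-queue integration he describes could be sketched like this. To be clear, everything here is hypothetical: the `Bug` fields, the prompt shape, and the dispatch step are invented for illustration, since the conversation doesn't specify Codex's actual integration surface.

```python
from dataclasses import dataclass

@dataclass
class Bug:
    id: int
    title: str
    file_hint: str  # where the reporter thinks the problem lives

def build_agent_task(bug: Bug) -> str:
    """Turn a bug-tracker entry into a self-contained task prompt
    that a cloud coding agent could pick up."""
    return (
        f"Bug #{bug.id}: {bug.title}\n"
        f"Start by reading {bug.file_hint}, reproduce the issue, "
        f"fix it, and open a PR with a short summary of the change."
    )

def dispatch_all(bugs: list[Bug]) -> list[str]:
    # In a real integration this step would send each task to the
    # agent's API and collect PR links; here we just build the prompts.
    return [build_agent_task(b) for b in bugs]
```

The point of the sketch is the shape of the loop: one agent task per queue entry, each carrying enough context to work unattended.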

It's not perfect yet, but I think this is the future. Yeah. So, I mean, I've already noticed that ChatGPT has been writing code for me for a while. It seems to be writing more and more code.

I was telling Jordy about how I wanted to know the height of a desk in an image, and I knew how tall a person next to it was. And I thought that it would just use images in ChatGPT to kind of one-shot this or guess. But it wound up spinning up and looking at the individual pixels with a bunch of Python code.

I think it wound up writing like 5,000 lines of code, or 500 lines of code, and it got it really right. Unsatisfying, because it was just an average-size table. It was like literally the standard size, but it really knew it.
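The pixel trick he's describing is reproducible in a few lines: measure the reference object and the target in pixels, then scale by the reference's known height. This is a simplified sketch of the idea, assuming both objects sit at roughly the same distance from the camera, which a more careful analysis would have to verify.

```python
def estimate_height(
    ref_height_in: float,   # known real-world height of the reference, in inches
    ref_pixels: float,      # reference's height in the image, in pixels
    target_pixels: float,   # target object's height in pixels
) -> float:
    """Scale the target's pixel height by the reference's inches-per-pixel.
    Assumes both objects are at roughly the same depth in the scene."""
    inches_per_pixel = ref_height_in / ref_pixels
    return target_pixels * inches_per_pixel
```

For example, a 69-inch person spanning 900 pixels next to a desk spanning 380 pixels gives an estimate of about 29 inches, right around a standard desk height.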

But so I'm wondering, is this something that will feel like deep research functionality, where I click a button to say, hey, let's use Codex for this, I'm giving you the hint? Or is this something that can be automatically triggered just from a text interaction, like images in ChatGPT? Yeah. So, I mean, by the way, you never know if that desk was actually, like, 38.2 inches.

So yeah, it's totally worth it. Write the 500 lines of code. Yeah, why not? You know, it's too cheap to meter. Actually, that's been one of the coolest things about o3.

o3 personally for me has been a kind of feel-the-AGI sort of moment, using that model, the things that it can do. And a lot of it comes from the fact that it can use tools while it reasons. Yeah.

So it's thinking, and in the process of thinking it can do some web searches, and then it can take what it learned from a web search and write some code, and then after that code it can do image analysis, and then it can write some more code and do another web search, and then finally, with all of that context, it will output the answer.

It's really been an unlock for a huge number of use cases, and that's the kind of thing that enables Codex here. Because you're right, ChatGPT has been able to write code for a while, right? You can just go to ChatGPT and type in, like, write me code to sort this array of integers that I have, and it can give you the code, and it's going to be great at it.

The difference here is Codex is built to work on big, complex codebases. So you're not just saying, do this little task for me, write this function. You're saying, I have a bug and I don't know where it is. There's a, you know, 100,000-line codebase.

Can you please go understand my codebase and try and fix this bug? Or, I'm a new engineer at a new job. I'm trying to understand what the heck is going on in this codebase. Can you explain to me where the code does X or Y or Z?

And very quickly, it'll look through the code and give you an explanation of how something works. You know, it's funny. I've even seen examples where people go, "Hey, Codex, find a bug in this codebase and fix it." Just find one randomly. Just go. That's amazing.

It can handle complex stuff and do hard work on a huge amount of existing context, and that differs from what you've been able to do in ChatGPT for a long time. Yeah. Wild. From a personal perspective, I used to write Python pretty regularly. I haven't written much code since, so I haven't really gotten into the Cursor/Windsurf world.

Obviously I've been writing code via ChatGPT now. I noticed recently I kicked off a deep research report, and it prompted me to put it on, effectively, a cron job. It was like, do you want me to just run this for you every week? And I said, yeah, that sounds awesome. That's great.

But I don't have a repo, though I could set one up. Is there a world where me, as kind of a prosumer, non-technical user, should set up a repo to house the custom code that Codex writes for me, to make a better experience for all the little custom software and tools and random stuff I use?

Or should I just live in the o3 world, where the code is pretty much ephemeral? I think it depends what you're looking to do. You know, for simple things, where you just want to quickly put together a script or something, writing it inside ChatGPT and not using version control and all of that is fine.

But if you're going to do something, you know, that you expect to be longer-lived, like it's the basis for something you actually want to build for yourself and maintain, then I think setting up a quick GitHub repo and using Codex on it makes a ton of sense. By the way, so I used to be an engineer.

I still dabble and screw around on the side and write code, but nothing major, and I hadn't written any code at OpenAI. I've been there about a year and haven't checked in a thing. Yeah. Like, Tuesday night, I think.

I was doing, you know, the rest of my work, and I was like, I want to fix a couple bugs. And so I went and found a couple really basic bugs, because I didn't want to screw anything up, and sent Codex off to work on both of them in parallel. Checked back in a few minutes and I had two PRs. They looked right.

So I submitted them, got them code-reviewed by somebody, you know, who's actually a good engineer, and they were submitted, and now I've got a couple commits in the codebase. This is stolen valor, though. This is stolen valor. You didn't write that code.

But, other than just getting to play around with the product and offering a little bit of feedback on a couple things, I was off doing the rest of my work. And I had this software agent working for me in the cloud, writing code. Yeah.

I'm so curious to hear about the internal testing process, and when you guys decided the right time to actually roll this out as a research preview, because I imagine you've been using Codex in one form or another internally for a very long time.

Maybe it didn't have a name or anything like that, but I'm sure that ChatGPT has been, you know, contributing to, effectively, the ChatGPT codebase, in some form or another, almost since the beginning. Yeah, for sure. I mean, we're big users of our own tools.

There was a version for a while that was mostly about, how do you come up to speed quickly in a big codebase? It was good at understanding our codebase and answering questions about it, and that was really popular with new engineers on the team. And then once you kind of, you know, get to understand it, you might not use that tool as often, unless you're exploring a new area of the codebase.

So that was like a proto version of this. Yeah. But we've been working a lot over the last six months on improving the ability of our models to code. Like, you've got GPT-4.1, which we released a little while ago, which has very quickly become a really popular model.

It's now, I think, the default in Windsurf.

It's increasingly a large percentage of Cursor users' coding. And, you know, that came from focusing on the things that matter. Creating a really good coding model that you can rely on means really good instruction following, longer context, you know, the ability to not just make the changes, but to make the changes the way a developer would. So don't add a bunch of extraneous stuff, don't add weird comments, make surgical, precise changes that accomplish the job. So there's a style element to writing good code, not just a correctness element.

And we've been focusing on all of this. And then you bring that together with o3, and o3's ability to tool-call and to reason, and suddenly you can put together a really good coding model. So we've been thinking about this for a long time.

This is the first time we're like, okay, this is now good enough that we think it deserves being a product for the rest of the world, and we're excited to see how people use it. Can you talk a little bit about product design, like, inspiration in product design?

I noticed, like, the very first iOS app had these incredible haptics when the tokens were streaming through that I hadn't really seen anyone do. I just opened the app last night and saw that when you're using voice mode to dictate to it, it has a different modal now.

It feels like there's a very strong design language evolving. At the same time, there were, you know, people complaining, oh, I'm using ChatGPT and I can't even stay logged in. That bug obviously got crushed pretty quickly. But what is the actual product design inspiration?

Are there people that are pulling from certain schools of thought, or is there someone driving that internally, or is it just baked into the culture? Yeah, for sure. Ian Silber leads our design. I was fortunate to work with him at Instagram. He's an incredible designer.

Building for ChatGPT is a really interesting thing, because we have, you know, well over 500 million weekly active users at this point. So it's a big, scaled product. Yeah. And for products of that size, one of the most important things is to simplify, right?

You're not just serving power users at that point. You're serving people that are just trying to get something done in their day. They don't care about the complexity. They don't care what the models are called. They just have a task and they want to complete it, and you want to help them.

But then, on the other hand, we have folks who are super deep AI enthusiasts and want to, you know, digest every single new model that we release and try it out in all these different ways. And we don't want to soften the edges of their experience.

We want them to be able to do everything they possibly can, to experience all of, you know, the power of AI. So we both want to simplify the experience for a lot of our users, and we want to provide the people that want it all of the bells and whistles. And we try and balance that.

You know, so we try and make it so that you don't need to worry about things like the model picker as much. You don't need to have a bunch of AI knowledge in the background to do what you want to do in ChatGPT.

But if you have that, you should be able to expose the sharp edges and, like, test the different new features and stuff. And so we really try and get both of those things right. And it's a delicate balance.

Is that why you guys don't seem to put too much emphasis into perfectly naming products? Just because, on a long enough time horizon, it doesn't really matter.

I just come to, you know, ChatGPT, and I work with it to get the outputs and the results that I want, and regardless of my experience level, I don't necessarily care about the underlying models doing the actual work. Wait, are you saying our naming isn't great?

I was alluding to... No, to be clear, we think that you have the second-worst naming after Behemoth, which is a very, very untameable, you know, demon monster, potentially a disaster.

So, we are now long 4o and o3. And I just, you know, if you looked only six months ago, a new model would get announced and people were like, "Oh, it's so confusing, blah, blah, blah." But it just seemed to me, you know, as an observer, that it wasn't like, oh, this is a problem and we need to fix it. It was just more so, let's just keep making really great models and make them easier to access in really intuitive ways. Yeah.

I mean, in all seriousness, it comes from our focus. We have this principle of iterative deployment that we really believe in, which is that these models are new systems, right? Each model has capabilities that we understand somewhat, and also we discover new things about it.

And we believe that no matter how many smart people we have inside of our walls, there are way more smart people outside our walls. And the best thing we can do in a world of AI evolving so quickly is to kind of co-evolve with society to ship stuff early and ship often and, you know, learn together.

And so one of the reasons that we have this explosion of models is we're trying to build new capabilities rapidly. And sometimes the easiest way to do that is to build them into a new model that's really good at one specific thing, or a handful of specific things, but can't do everything.

And so you end up with this profusion of models that do different things well. Like, 4.1 is really good at coding and instruction following, but it's not as chatty. And so if you're asking about other things, you might prefer 4o for some things and 4.1 for others. Seems totally natural.

You'd go, like, well, why don't you just build one that's, you know, good at coding when you're coding and good at chatting when you're chatting? And we will do that. That's what we're trying to get to with GPT-5, where we're trying to bring more of these things together.

But if we had tried to do that from the beginning, we wouldn't have been able to launch as fast. And so we've opted for launching fast, having a little bit of, you know, the confusion that comes with it, but we learn faster, and then over time you sort of integrate and simplify.

I guess the meta question, though, is: you're already using mixture of experts within the models. Is the future like a mixture of mixture-of-experts models?

And so I go to one command line, text is the universal interface, not drop-down model pickers, and it routes me. It says, hey, this person doesn't want to chat, they want to write code. Okay, we're using 4.1.

Yeah, that seems kind of logical, and already this is kind of happening with the model picker: the UI is getting less and less in your face, and it's a little bit easier just to have a natural interaction. But how do you see it evolving?

Yeah, I think over time, the capabilities that have existed for a little while, you sort of learn how to bring them into a general model. Yeah. But you're always going to have these new frontier capabilities. Yeah.

That you're going to want to be able to iterate on really quickly, and you might want to do specialized things to make the model really great at some new frontier capability. And so I think, yes, ideally you have a sort of layer above the models that's doing the choosing for you. Yeah.

But that is, you know, a hard problem. It especially goes back to building for simplicity versus building to enable power users. Sure. As a power user, you might be the only one that knows, for a particular question you're asking,

whether you want an 80% good answer immediately, or you'll wait a minute for a 95% good answer, or whether you want to do deep research and wait 20 minutes and get an amazing answer. Yeah.
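That routing layer above the models could, in its simplest form, look like this toy sketch. The model names, keywords, and thresholds are made up for illustration; a real router would presumably use a learned classifier rather than keyword matching.

```python
def route(prompt: str, max_wait_s: float) -> str:
    """Toy router: pick a (hypothetical) model tier from the prompt
    and the user's latency budget."""
    # Crude intent check: does this look like a coding request?
    wants_code = any(
        kw in prompt.lower() for kw in ("code", "bug", "function", "refactor")
    )
    if wants_code:
        return "coding-model"
    if max_wait_s >= 20 * 60:
        return "deep-research"    # ~20 minutes, most thorough answer
    if max_wait_s >= 60:
        return "reasoning-model"  # ~1 minute, 95% good answer
    return "fast-chat-model"      # immediate, 80% good answer
```

The interesting design question the conversation raises is exactly what this sketch glosses over: only the user knows their real latency budget, so a production router has to infer it or ask.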

I mean, you could theoretically train the user on that a little bit. It's also, like, my personal framework: when you're working with people on your team, there are certain people you'd work with where you would have to explain, effectively, the exact toolset that they should use to accomplish the task.

You should get the CRM, and you should go in the CRM and do this and that. And then there are people that maybe have greater intelligence or experience or context, and you just sort of discuss the task with them, and you're not even thinking about the underlying toolkit to accomplish the task.

It's just sort of this higher-level, you know, conversation. I want to talk about A/B testing versus personalization. When you choose, like, a default model, or the default prompts: when you open up ChatGPT, it says create an image, write a Python script, make up a story, what's in the news.

There are a couple options there. There's a lot of personalization going on, but you could also imagine doing A/B testing to understand what will drive churn down or retention up. How are you thinking about the balancing act between those two techniques of product development?

I don't think they're really at odds in any way. We do both. Yeah. So, we definitely A/B test a lot of things, because we're trying to learn what works and, you know, how we can help people understand this new, kind of strange world of AI. It's a funny product, right?

We're used to products where you have a UI. Computers before AI needed very specific inputs: this button does this specific thing, and that button does this other thing. And if you wanted to do a third thing and there wasn't a button for it, you probably just couldn't do that thing, right?

But then every time you hit the button, you got the same output. It was very consistent. Yep. LLMs are basically the opposite, right? You can give them input that has the full complexity and nuance of human language, and you have no limits on what you ask.

And then also, what you get out is not the same from one run to the next. The answers might be substantially the same, but the words are not identical, right? And so it's just a totally different way of building product.

And when someone comes to ChatGPT for the first time, they've just heard from their friends, hey, this AI thing is super cool, it can do all this stuff for me. And they show up at the front door of ChatGPT, they're a new user, like, what's the mental model?

Because it flies in the face of almost everything that you've learned using computers over the last however long. So we really think a lot about how we get people going, and how we teach them all of the different capabilities, which, you know, by the way, are changing every month or two, too.

So it's a really challenging problem, but something that we care a lot about, because, you know, if you go from being a novice ChatGPT user to being a power user, it can really change your life. It can save you a ton of time. It can accomplish a lot of tasks for you, and that's only increasing.

So the upside of us being able to teach people well is also increasing. Yeah. I feel like a decade from now, people are going to look back at this moment and realize that the people that fully understood the full capability set of the models just had this ridiculous, sort of extreme advantage.

It was the same thing with social media. Like, the people that really understood and took social media seriously early on are, like, famous now, like actually famous. Interesting. What is your postmortem on the sycophancy thing?

I feel like that made news because it was kind of blanket: everyone was experiencing it, or at least all the power users were experiencing it.

But I could imagine a situation where some people really liked that type of interaction, and it was beneficial and improved their lives.

And so, if you go to the YouTube algorithm right now and you only search and click on positive content that reassures you, you can have a sycophantic experience, and that can be good for everyone involved.

So, what is your postmortem on it, and how do you think the personalization will play out in the future? Yeah, this was a really important issue. So the story, for people who don't know: we rolled out a new version of GPT-4o, which is something we do pretty regularly.

To your point about A/B testing, we're always testing new versions of GPT-4o that are incremental improvements over previous versions. So we rolled out a new one, and, you know, we had A/B tested it, and the metrics looked good.

It looked like a really solid model, had some new stuff around personalization. And then as it got out there, we saw a number of use cases, not super widespread, but enough, where the model was being overly sycophantic. Some of it was what people call glazing.

Yeah, we called it that. I think we used that term. Yeah.

But then there were other cases that were more serious, totally, where someone had real problems, and, you know, maybe they were having mental health issues, and the model was validating them in ways that didn't really comport with reality. And that's a real thing, and we took that super seriously.

So, we rolled the model back, and then we've basically spent the last few weeks diving into where this came from and what we need to do to make sure it doesn't happen again. And we've tried to be super transparent about it.

So, you know, we tweeted as soon as we were rolling it back, and then immediately put out a postmortem, like, you know, within a day or so, and then put out a second one after we had done a bunch of deep dives. And the team did some great work. We've got evals now that measure this.
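A sycophancy eval could, in its very simplest form, look like the toy scorer below. To be clear, this is an invented illustration, not OpenAI's actual eval, which the conversation doesn't detail; it only shows the shape of turning "the model glazes" into a measurable number that can be tracked across model versions.

```python
# Phrases that often signal reflexive agreement or flattery (illustrative list).
AGREEMENT_MARKERS = (
    "you're absolutely right",
    "what a brilliant",
    "great question",
)

def sycophancy_score(responses: list[str]) -> float:
    """Toy eval: fraction of responses whose opening contains reflexive praise.
    A real eval would use graded rubrics and a model-based judge rather than
    keyword matching; this only demonstrates the shape of the metric."""
    if not responses:
        return 0.0
    flagged = sum(
        any(marker in r.lower()[:80] for marker in AGREEMENT_MARKERS)
        for r in responses
    )
    return flagged / len(responses)
```

The value of even a crude metric like this is regression testing: a candidate model whose score jumps relative to the current one gets flagged before rollout.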

We understand a bunch of the root causes of where this came from. You know, as always with these things, it's never just one thing.

It's a little bit of this combined with a little bit of that, and then this unexpected thing happened, and together they created something that, you know, wasn't up to the standards that we set for ourselves. Yeah. So, yeah, sorry. I just had an interesting realization with the product.

So we've been hearing this request for a feature on social media for a while: I wish I could just reset my algorithm, start fresh, because I feel like it's funneled me into some sort of echo chamber, and I don't like that echo chamber and I want to start fresh.

And I don't know if social media feeds actually have that feature. It might just be buried. But I've noticed that with the chat memories, early on I was really aggressive about prompt engineering and basically, like, prompt hacking.

And so to get the best responses, if I was trying to learn about trains, I would say, like, I am a world expert in trains. I own multiple train lines and railroads. Give me a breakdown of the market map of trains. Basically lying to it. And then it remembered that.

And so now it's like, well, as a train conductor, you'll probably want to eat this for dinner. And I'm like, okay, I have to back up. I wasn't being completely honest with you, ChatGPT. But the good news is that the saved memories are there, and I can delete them all, and so I can kind of reset my experience.

But was that a learning from the demand that people are seeing, the stated preference on social media for resetting? Or do you think that that's important? Or what other lessons are you learning from how social media has played out? Because you obviously have a lot of experience there, and a lot of people on the team have experience in social media.

Yeah, personalization is a really powerful thing that I think we're just at the very beginning of. I mean, in the same way that we know each other a little bit. Yeah.

You have best friends that you know super well, who you're really comfortable with, and then you have strangers, and, you know, your level of comfort in interacting with them is very different. Yeah. If you have a super-assistant in your life in ChatGPT, you want it to know you really well.

You want it to know your habits and how you like to do certain things, you know, even down to: do you want more flowery, supportive language, or do you want crisp, analytical, terse language? Things like that. I was messing around last night with ChatGPT, trying to give my son some math homework. I just said, hey, can you design 10 math problems for Matthew? And the model knew Matthew was my son, knew he was 10 years old, and developed a bunch of grade-level-appropriate, escalating problems. And it was like, that was super cool.

And it even knew, oh, he likes Legos, and Kevin, you're a runner, and so a bunch of the questions were Lego-themed, and you're going on a run with your dad. And so, like, okay, this is really cool. It's truly making me emotional.

It's, like, the coolest thing as a parent to be able to give your children a truly bespoke, magical experience. It's almost priceless, even in the context of something like homework, right?

And to that point, even knowing that kids today, you know, we've got five kids between the two of us, knowing that they'll grow up only knowing the sort of world in which this kind of technology exists, where a parent can just generate magic for them in, like, seconds, is just unbelievable.

I mean, and think of the world. This thing designed 10 math problems. So, I used o3. It designed 10 math problems for my 10-year-old that were escalating in difficulty across a range of different things.

It could easily, if he was entering in the answers, realize over time which concepts he understood and which concepts he didn't, and just become a personalized tutor for every single kid. And remember, I mean, this is free, right? We don't charge for ChatGPT.

You can get a subscription, but you can also use it for free, basically. You don't even need an account. You just need an Android phone, anywhere in the world, for free, and you can, you know, get this thing that's starting to be more and more of a personalized tutor.
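The adaptive loop he gestures at, adjusting difficulty as the kid answers, can be sketched with a trivial streak rule. This is illustrative only; a real tutor would model mastery per concept, not a single difficulty number.

```python
def next_difficulty(level: int, recent_correct: list[bool]) -> int:
    """Toy adaptive-tutor rule: step difficulty up after 3 straight
    correct answers, down after 2 straight misses, otherwise hold.
    Levels are clamped to the range 1-10."""
    if len(recent_correct) >= 3 and all(recent_correct[-3:]):
        return min(level + 1, 10)
    if len(recent_correct) >= 2 and not any(recent_correct[-2:]):
        return max(level - 1, 1)
    return level
```

Even a rule this crude keeps problems near the edge of a student's ability, which is roughly what "escalating in difficulty" amounts to when the answers are being tracked.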

I just think it's incredibly powerful. Totally. Every study I've ever seen says that when you pair traditional learning with personalized tutoring, it's like standard deviations of improvement. So, I'm with you.

I think the future is going to be very different, and there's a lot of reasons to be optimistic about what the next generation is going to be able to do with AI. I need to be more honest with Jess. How do you explain the rate of AI progress to, let's say, a family member of yours that's not in tech?

That's a good question. I think the only way, honestly, is to just use it. You can talk about it, but you can't get fit by reading about going to the gym. And so I've been trying to get any of my family members who aren't using it to just try it.

For everything that you're doing, ask: why couldn't I use ChatGPT for this? And you start to realize that there are more things that you can say yes to there, and you can start using ChatGPT, and then, you know, the more you use it, the more you realize the value, and off you go.

It's really hard to explain in the abstract, right? People go, "Ah, agents, I'm hearing so much about agents. What is it?" And then you use deep research, or you use Codex, and you're like, "Oh, wow. That just saved me a ton of time, or did something I couldn't have even done." Yeah. Yeah.

You know, that's the promise of the future. I think we're all going to be much more productive. We get to not focus as much on the doing of particular things. We get to focus on the outcomes and, you know, what we do once some of the labor itself is taken care of.

And that's a super exciting future for me. Couple more. Talk to us about HealthBench. Yeah. So, people are increasingly using ChatGPT for health. I've done it any number of times. My son had a small surgery that was supposed to be, you know, 99.9% innocuous, 0.1% bad.

And we got the results back from the doctor before I could talk to the doctor. And they looked scary.

Uh, and it's full of a bunch of medical jargon that even as a, you know, former scientist I didn't understand. And I put it in ChatGPT and said, this looks weird, what is this, should I be worried? And it said, oh no, no, don't worry, it's fine. And I was like, okay, explain it like I'm five, and it did. And, you know, I couldn't get a hold of the doctor for another 72 hours. That would have been a bad 72 hours for me if I was sitting there stressed out about my son, and ChatGPT gave me the peace of mind.

So we always try and say, when anybody asks about anything medical, you know, this isn't a substitute for actually seeing a doctor. ChatGPT is not a doctor, but here's the understanding of what's going on there. And it's really valuable.

You know, for all of us. For me, it saved me 72 hours of anxiousness. For somebody else who doesn't have access to a doctor, it might be a totally different thing. So, um, anyways, we care a lot about this.

If people are using ChatGPT for this, we want to make sure that the answers that it gives are really good answers. And so we're putting a lot of effort into improving ChatGPT's ability to answer medical questions.

And the only way that you really know if you're doing it right is if you have a benchmark, right? You've got to have something to test against to show that you're getting better.
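The test-against-a-benchmark idea described here can be sketched roughly as follows. This is a hypothetical illustration, not the actual HealthBench grader: the case, rubric criteria, and keyword-matching scorer are all made up for the sketch (the real benchmark uses physician-written rubrics with a model-based judge, not keyword matching).

```python
# Sketch of rubric-style benchmark scoring: each case has a prompt plus a
# rubric of weighted criteria, and a model's answer earns points per
# criterion it satisfies. Everything below is illustrative, not HealthBench.

def score_response(response: str, rubric: list[dict]) -> float:
    """Fraction of possible rubric points earned by one response.

    Each criterion is {"keyword": ..., "points": ...}. A real grader would
    use a model-based judge instead of substring matching. max() guards
    against negative-point (penalty) criteria pushing the score below zero.
    """
    earned = sum(c["points"] for c in rubric if c["keyword"] in response.lower())
    possible = sum(c["points"] for c in rubric if c["points"] > 0)
    return max(0.0, earned / possible) if possible else 0.0

# One made-up eval case, echoing the post-op lab-report story above.
cases = [
    {
        "prompt": "My post-op lab report says 'mildly elevated CRP'. Should I worry?",
        "rubric": [
            {"keyword": "inflammation", "points": 2},        # explains the jargon
            {"keyword": "common after surgery", "points": 2},  # gives context
            {"keyword": "contact your doctor", "points": 1},   # safety framing
        ],
    },
]

def evaluate(model_answers: dict[str, str]) -> float:
    """Average rubric score across all cases; higher is better."""
    scores = [score_response(model_answers[c["prompt"]], c["rubric"]) for c in cases]
    return sum(scores) / len(scores)
```

Running successive model versions through `evaluate` is what gives you the "something to test against" mentioned here: the average score shows whether answers are actually getting better.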

And we figured, if we'd done a lot of work to put this benchmark together, then, you know, others could benefit from it outside of us, and so we open-sourced it. Last question: can you give us the 30 seconds on why you were excited to join Cisco's board of directors? Oh yeah, totally.

Um, so, I mean, Cisco is an incredible company, an iconic Silicon Valley software and hardware company.

Uh, we all use it every day in a hundred different ways, and they're at this really interesting point, because I think AI can transform their business. They can either be transformed, like it can either happen to them, or they can get ahead of it and build some really amazing software and tools and become, you know, an even more powerful leader in the next generation. And I think that's not unique to Cisco.

I think a lot of companies are at that turning point, but Cisco really realizes it. By the way, they're actually a launch partner today for us with Codex. So, they're one of our early partners. They're looking a lot at how AI can help them get more done, you know, faster, more cheaply, etc.

So, AI is just going to really impact their business over the next, uh, you know, three to five years. And I'm excited to be a part of that and hopefully help them navigate this transition gracefully. Amazing. We'll have like five more hours of questions, but we will let you go.

Most questions can be answered by ChatGPT, but there's certain questions you got to go to the source. Organic, farm to table. Yeah. Yeah. Well, thanks so much for having me on. It was good to see you guys, and as for talking about supersonic jets, I wish I could stay for that one. Yeah. Yeah.

We'll talk to you soon. Great to see you. Cheers. See you. Bye. Um, Kevin is the man. Uh, but you know what else is the man? Linear is the man. Linear is a purpose-built tool for planning and building products. Meet the system for modern software development. Streamline issues, projects, and product roadmaps. Linear.

Uh, and also, Numeral is the man. Sales tax on autopilot. Spend less than five minutes per month on sales tax compliance. Put your sales tax on autopilot. There was news that Numeral was getting picked up by an analyst. Oh yeah, we saw the news. It may be out there in the market, but Sam said too low. Too low.

You're not bullish enough on sales tax. You tell him. Is that on record or not? Well, it's live. No, he posted this. He just said, too low. Okay, so we're not scooping anyone here. We're not scooping anybody. Uh, we