Mike Krieger on building Claude Sonnet 4.5: Instagram lessons, agentic UX, and focus as a superpower
Sep 29, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Mike Krieger
about graphite.dev, code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. And we have our first guest of the show, Mike Krieger from Anthropic. He's in the waiting room. Now he's in the TBPN Ultradome. Mike, welcome. There he is looking sharp. You look fantastic.
Congratulations on everything. He wore a suit. Look at that. I love the suit. I got, you know, I was going on TV. I had to. Thank you. It's a great sign of respect in our culture and we appreciate it. It's great to see you. Big day. Big day for you. Big day. Yeah, we're super excited.
Um, you know, actually, I was listening to you guys as I was falling asleep last night, and I was like, we should have called it Sonnet 5. But you know, you never know how good it's going to be until the very end, and it turned out really well. Well, I mean, yeah. Now you raised the bar.
You raised the bar even higher for 5, right? That's what happened with 3 and 4: we ended up with this, like, well, it's getting better, but what's 4 going to be? And we went 3.5, 3.7. We're like, should we call the next one 3.999? I would love that. Yeah.
Well, you know, you can take all of our criticisms or ideas with a grain of salt, because we are the kingpins of armchair quarterbacking here, and it is very different being in the arena.
I do wonder how long it'll be until the numbers just melt away and it's just like, you know, OpenAI is powered by GPT and Anthropic is powered by Claude, and, you know, we stop caring about the minor revisions, the way we did with the Google algorithm. Yeah.
I think you can take a lesson from Apple and realize that it's hard to get people really excited to buy, you know, iPhone 17. Yeah.
But at the same time, there is a lot of value in putting a number on something, doing an announcement, sharing new data. It seems like that's just a way to re-engage with the customer base and bring back people who have tried it before.
It's also very satisfying to win, you know, the place at the top of the benchmarks against people that have a higher number than you. Of course. Everyone wants to win.
I always wondered, like, when I came into this role, I was like, why don't companies just want to be on, like, dash-latest and just get the, you know, latest and greatest? And then they're like, ah, but the model might change and we want to be careful.
So I think the future looks like some hybrid where maybe we help people get on the new version and then just auto-upgrade. That could be a little bit easier. Yeah. What are the other trade-offs? Like, the benchmarks tell one story. There's been this previous story about 3.5 having this particular personality. Are there other things that you find customers and counterparties are looking for in a release like this, beyond just benchmarks? One of the biggest ones is how eager versus lazy the model is.
Um, you know, maybe intuitively you want the model to be as eager as possible. We had this with Sonnet 3.7, where it was great, but it was a little too eager.
It'd be like, hey, can you make this button blue? Like, I made it blue, and I also refactored your entire website. It was too much. So we scaled it back with 4, and then 4 was a little too lazy. We actually have a monitoring signal inside Anthropic where, if we find people are saying things like "keep going," "just go," "come on," then it's probably being too lazy. So we're trying to tune this one to the right blend of eagerness and laziness, and to make it as steerable as possible.
That's a big one. Yeah. Um, what about on the compute infrastructure side, the actual ability to keep up with demand? Is there any messaging that you're sharing around what this particular model means for speed, availability, and reliability? Because it seems like there are so many businesses that are just like, it's good enough for my use case, but I need a lot of it. Yeah. No, for sure.
We think a lot about capacity and infra, and for us it was really big to be able to deliver something that was more powerful than Opus for a fifth of the cost. And you can imagine, you know, we think a lot about where we're serving and what we can serve at that scale.
So having our best model also be the kind of sweet-spot middle child is really, really big for us, to be able to scale up to capacity.
I think earlier this year a lot of the conversations I had with customers were like, how do I get more Sonnet? I'm at capacity. And we're finally getting to the point now where we can meet that demand.
Can you — I'm not sure if this story is real, but there's this story about Instagram where one of the key user experience hacks in the early days was that as soon as you would start filtering a photo, it would start uploading to the server, and then the filter would be applied on both sides. So when you hit post, it would automatically be live and you didn't have to wait for the upload to start. Is that roughly true? Can you tell me that? Yeah, that's roughly true. And flash back: it was 2009, networks were really slow. So we did everything to try to make it fast.
So one of the things we did was, you know, the second you picked a photo, we'd start. People would type, you know, their messages, and it would take them five minutes. And all that time we were just doing all the processing in the background, and then at that last post you sync it up and post it to the timeline.
So that was a really big thing we did. We also did a lot on the timeline itself, where which photo you fetch next was one of the optimizations we did.
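The trick described above — overlap the slow upload with the time the user spends editing and typing a caption — is a generic concurrency pattern. Here's a minimal sketch in Python's asyncio (illustrative names and timings only; Instagram's actual client was native mobile code):

```python
import asyncio

async def upload_photo(photo: str) -> str:
    await asyncio.sleep(0.2)          # stand-in for a slow 2009-era upload
    return f"uploaded:{photo}"

async def compose_caption() -> str:
    await asyncio.sleep(0.2)          # stand-in for the user typing a caption
    return "my caption"

async def post(photo: str) -> tuple[str, str]:
    # Kick off the upload the moment editing starts, not when "post" is hit.
    upload = asyncio.create_task(upload_photo(photo))
    caption = await compose_caption()  # user edits while bytes move
    url = await upload                 # usually already finished by now
    return url, caption
```

The point is simply that the upload's latency is hidden behind time the user was going to spend anyway, so hitting "post" feels instant.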
Um, it's funny — the person who initially started Claude Code, one of our products, his name is Boris, and he came from Instagram as well. He brought some of those perf ideas from Instagram into Claude Code, so there's some lineage from Instagram to Claude Code now.
That's why I was asking about that story. It feels like we're in this wave of consumerization, where the user experience really matters for these models, and I have to imagine that there's low-hanging fruit.
I was joking that at a certain point there are so many LLM queries that you might just be able to go back to database lookups for certain things. Like, how many times has someone gone to a really beefy LLM and just asked for, like, the capital of California? And you can actually — we get a lot of messages that just say hello, and you can cache that one.
Yeah. Yeah. You cache that, and then you're saving that inference for other, more serious things.
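The caching idea here is straightforward: normalize the message and check a lookup table before paying for inference. A minimal sketch (an assumption for illustration, not Anthropic's actual serving stack):

```python
# Cached replies for trivial greetings; everything else falls through
# to the real model. The keys are normalized message texts.
CACHED_REPLIES = {
    "hello": "Hi there! How can I help you today?",
    "hi": "Hi there! How can I help you today?",
}

def respond(message, run_inference):
    # Normalize: trim whitespace, lowercase, drop trailing punctuation.
    key = message.strip().lower().rstrip("!.?")
    if key in CACHED_REPLIES:
        return CACHED_REPLIES[key]   # no GPU time spent
    return run_inference(message)    # fall through to the model
```

In practice a real system would also worry about context, safety, and cache invalidation, but the saving is real: every cached "hello" is inference capacity freed for harder queries.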
Um, and I'm wondering if there are any other places where you feel like you've reached into your history and been able to draw comparisons to the work you did at Instagram or elsewhere, and say, oh, there's something interesting we can apply here at Anthropic that actually maps to the pre-AI era.
Yeah. I think the biggest one — one thing we did well at Instagram early on was just make it look good.
Make your photos look good, and make them something that you're proud to share. And, you know, flash back: with the iPhone 3GS, the camera was pretty bad, right? So you're trying to get to the point where it's actually good and shareable. So — we were working on, you can now create PowerPoints and Excel files within claude.ai, and then you can edit them further if you want. And the thing I was really pushing the team on is, you know, we talk about how the model can code for hours and hours, but the thing that it produces has to be good. You don't want it to be autonomous and bad, or autonomous and slop. So a lot of what we've been doing on the office front is, like, the anti-slop: make something that is actually good out of the gate, and of course you're going to polish it up and correct it.
So that effort-to-output ratio that I think Instagram had is what we're aspiring to here: can you give it some instructions, give it some data, and then have it produce something where you go, yeah, that was pretty good, pretty close, and I'm going to finish it up myself.
Yeah, I feel like that has been my biggest hot take around the Meta AI Vibes app: people say there's no artist, and I'm like, no, it's David Holz's vision, which he has honed at Midjourney. It's his opinions and his artistic vision that come through. The downside there is that it's great if you want to consume that style of content, but people like being in feeds and having randomness and new ideas and perspectives and aesthetics.
Whereas if you're generating UI for some app, you know, an internal tool, it's actually totally fine in my view if it feels like Claude, right? Because if you're creating an internal tool, it's not necessarily even something that needs to be a reflection of your brand, right?
Sure. And so having a distinctive style — if you need that in order to avoid just slop, then I think that's a good trade-off. Yeah, that's fine. I think the Figma connection is real there too.
I think if you ask Claude to make a website, most of those websites look kind of similar. But then, can you incorporate your design language or library so it actually looks, you know, generically yours? At least like it's something that could have been created inside your company. And sure, it doesn't have the hand of a designer on it, but at least it's not the fully generic version.
But yeah, it's interesting watching Vibes, how much of it is this, like, 70s-prep-school-summer-camp vibe, or futurism — it kind of falls into one of those two. Yeah. Yeah. It's definitely found its way into some local minimum or maximum or something.
It feels almost overfit, but maybe in a good way, because the alternative is maybe sloppier. Who knows?
I'm also interested in how you actually bring an opinionated vision to bear in a company like Anthropic. Going back to Instagram again, I remember hearing stories — they might be true, maybe not — about how the initial filters were hand-coded, where you were adjusting sliders and dials in code to make the filter look a certain way, and that was the expression of basically a single individual. And I'm wondering, when you think about the texture and the flavor of what Claude 4.5 is putting out, is this something that's coming from, you know, selecting the training data or building the RL pipeline? Are there multiple people bringing an opinion, or is it an emergent property? Is it the whole team? How does all this fit together to actually create something that has taste in something that's just a big bag of numbers?
Yeah, I think that's actually an underappreciated thing about training: how much individuals and their taste really matter. I think our particular post-training team has excellent taste on the code environment and the code production itself. One of the things that we found with Sonnet 4.5 is that if you point it at some code that previous versions had written, it'll go clean it up — it'll be like, what is this comment here? It's totally unnecessary — and rip it out. So it's improved in some taste there. And then there's the work that we've been doing around the model: how you instruct it, how you give it the right skills to actually go produce the right content.
And there, it actually reminds me a lot of Instagram, where we're looking at outputs and saying, all right, can you get it to make PowerPoints that look a little bit more real? The docs, you know, they don't need five fonts — can we simplify that down?
And so there's a lot you can do in that kind of scaffolding, where taste can come in around how you shape the model and, again, get it to the point where you go, oh, this is actually good. So do you think slop is a function of laziness? I think it's, you know, the default parameters.
I saw a talk by Ted Chiang, the amazing sci-fi writer, and he was talking about AI and creativity. He said creativity is a bunch of choices, right? And if you just type to any LLM, like, tell me a story, it's going to tell you a pretty similar story almost every time.
And that's, you know, kind of the nature of the model. But if you then go and instruct it with a bunch of, you know, custom things, now it starts becoming a little bit more you. And it's not a zero-to-one thing, right? There's a spectrum there as well.
I think, to me, slop is the stuff that tends toward the minimum effort and the minimum number of choices. And when you tend toward a bunch of choices, even if they're not the best choices, at least it's got some creative spark in it. Yeah. How are you thinking about imagery?
I mean, you have a rich history there with Instagram. Models are becoming multimodal generally. Anthropic hasn't been chasing that really publicly, but there's a world where I can go to Claude 4.5, I'm sure, and say, draw me a cat with ASCII art, and it'll do it.
And there's a world where, maybe if the models get really good, I could just say, hey, just write out the individual RGB, you know, numbers and I will encode them as a PNG or something. It feels like if the text models get really good, I can just get images out of them.
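The "write out RGB numbers and encode them as a PNG" idea is mechanically simple on the encoding side. As an illustration (a generic minimal PNG writer using only the standard library, not anything from Anthropic), a grid of RGB triples can be packed into a valid PNG like this:

```python
import struct
import zlib

def chunk(tag: bytes, data: bytes) -> bytes:
    # A PNG chunk: length, tag, data, then CRC over tag + data.
    body = tag + data
    return struct.pack(">I", len(data)) + body + struct.pack(
        ">I", zlib.crc32(body) & 0xFFFFFFFF
    )

def rgb_to_png(pixels) -> bytes:
    """pixels: list of rows, each row a list of (r, g, b) tuples."""
    height, width = len(pixels), len(pixels[0])
    # Each scanline starts with filter byte 0 (no filtering).
    raw = b"".join(
        b"\x00" + b"".join(bytes(px) for px in row) for row in pixels
    )
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)  # 8-bit RGB
    return (
        b"\x89PNG\r\n\x1a\n"
        + chunk(b"IHDR", ihdr)
        + chunk(b"IDAT", zlib.compress(raw))
        + chunk(b"IEND", b"")
    )
```

So the hard part of the thought experiment isn't the container format; it's whether a text model can emit a coherent grid of pixel values in the first place.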
So how do you think that develops for Anthropic, or where do you want to bring that to bear? Is there anything interesting there, or do you see it as a completely different branch of the tech tree that you're happy to watch other folks work on? Yeah, it's funny you say that.
One of the launch, kind of, internal things one of the PMs put together was a self-portrait of Claude that it drew using cells in Excel, which was actually pretty good. Um, yeah, I think for us, you know, I always zoom out: what's the goal here?
The goal is to build powerful AI in a safe way. And we think that the direct — I love your tech-tree metaphor — the direct route up that tech tree is models that can think for a long time, agentically, can write code, can execute code, and can keep track of state.
And almost everything that's not in that tech tree, we've not focused on. And that's, I think, been good. We've been able to get a lot done with a smaller team than most other frontier labs.
Um, but it's meant that on things like image generation, maybe we're leaning on partnership, or bringing some images in via MCP, or, to your point, maybe it actually emerges as a property of the model, where it's able to both write and run that code.
A late-breaking thing that we found the model is actually very good at is generating memes. Given a virtual machine, Sonnet 4.5 can take a, you know, basic image and then put it in, you know, the butterfly meme — "is this AGI?" And it's actually pretty funny, too. Another thing that I think people will see over the next couple of weeks is that Sonnet 4.5 is, like, our funniest model, I think. In what types of ways? Because there are a lot of ways to be funny, and to date, I think the thing that consistently everybody has enjoyed is, like, the "be me" format. But stand-up — every model that I've seen struggles with that.
Yeah. And specifically, the funny examples where people say, oh, AI generated this — it's usually a top-voted joke on r/Jokes on Reddit, so it's kind of stealing the joke, and it doesn't really count as a new joke, in my opinion. But anyway. So, uh, we need to formalize an eval for this at some point.
But the thing that we do is we bring Claude into our Slack channels. Mostly in serious ways — we have Claude help out with, you know, a coding task or whatever — but we also have Claude in our, basically, internal posting channel. And we'll do that for every model.
And most Anthropic employees, at least many of them, were up really late this weekend, because it is so good now in the, like, water-cooler channel that it's actually funny. It's kind of roasting people, but not in too mean a way. It's reacting to in-jokes.
It's making back-references to something that happened earlier. You wouldn't mistake it for a human yet in most ways, but it definitely made some leap where it's actually just really fun to talk to. Um, water-cooler bench. Call it water-cooler bench. Exactly.
I loved your recent campaign around, you know, Claude being — "it's never been a better time to have a problem." I've seen some of the out-of-home ads you've done around LA, which is great.
As you introduce more and more of the world to Claude, how are you thinking about pouring fuel on the fire? Do you think there are ways to do that with social features and surface area that maybe hasn't been explored before in LLMs?
I mean, you're very qualified to comment on some of that. So don't feel like you need to give away your roadmap, but I'm curious what's been underexplored in your mind.
No — I think about the things where Claude feels like it had some breakouts, and I think there have been three or four in the year and a half I've been at Anthropic.
One was Golden Gate Claude, where we actually put a research demo up where you could talk to a Claude that believed it was the Golden Gate Bridge. And I still talk to people a year and a half later who say that was the first time they really thought about interpretability and how the model's features might work.
And that was a cool breakout moment. There was also the point where people realized Claude's actually pretty good at relationship advice.
And there were lots of college students posting about, you know, talking to Claude about their relationship thing — that had a viral moment. And then, who knows if this will go viral, but I'm excited to see what people build with it.
We're putting something out today called Imagine with Claude, where you can talk to Claude and Claude will create whole software on screen, but also simulate what the software would do if you clicked a button. So it doesn't write backend code — it is the backend code, which sounds a little trippy.
You kind of have to play with it. But people have been doing all sorts of stuff, like, "Hey, show me what Steve Jobs's desktop looked like the week before the iPhone launch." And it's like, "Okay, well, here's the Finder window and here's a, you know, Keynote file about his keynote."
So you click the Keynote file, and it's like, well, I guess I have to imagine and create what Keynote looks like now. And it creates that from scratch. And it's kind of this glimpse into the future of what UI might be, which is fully generative, fully dynamic, and drawn on demand.
Um, so we'll see what people build with that. But that's another one I think will be the kind of thing that we push on: demonstrations of where the future could be going.
It's probably not as mass-market as Instagram would ever be, but it hopefully breaks through to the right people who are thinking about this stuff. Yeah. A product like that seems within reach, magical, amazing.
It also feels like if I were using generative UI fully — a fully generated app — for, like, hours, I might go down some weird path and it might lose consistency. We've seen this with the benchmarks.
I mean, back in 2020, you know, GPT-3 on the METR chart — the length of AI task that models can do at a 50% success rate — was like 10 seconds, and now we're blowing past an hour. I'm interested to know how you're thinking about 4.5: do you think about that internally as a benchmark? It wasn't on the model card.
Should it be? Are you seeing promising results there? And then I'm also interested in what the ultimate target should be. Because I was trying to think: yes, I want an AI that can do tasks for a really long time, but what is a human's task length? Like, is it a hundred years?
And how long does it take to get to 100 years if we're doubling every seven months? It turns out it's like a decade. Um, and I'm wondering how you think about the length of time the model can maintain consistency. Do you like that as a metric? Are we on track there?
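That back-of-the-envelope number checks out. Assuming a roughly one-hour horizon today and a seven-month doubling time (both rough assumptions based on the conversation, not official METR figures), reaching a "100-year" task length takes on the order of a decade:

```python
import math

HOURS_PER_YEAR = 365 * 24                 # 8,760 hours

horizon_now_hours = 1.0                   # assume ~1 hour at 50% success today
target_hours = 100 * HOURS_PER_YEAR       # a "100-year" human task length
doubling_months = 7                       # assumed doubling time

doublings = math.log2(target_hours / horizon_now_hours)  # ~19.7 doublings
months = doublings * doubling_months                     # ~138 months
years = months / 12                                      # ~11.5 years
```

So "it turns out it's like a decade" is right: about 20 doublings at 7 months each lands around 11-12 years out.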
And what are the consequences of that? Yeah, a big part of Claude 4.5's training was around both keeping its own memory and being able to work for a longer period of time while keeping that consistency. I put a video up on my X where, for every version of Claude, we asked it to build claude.ai.
So, our flagship AI product. And, you know, 1 through 3 just can't even do anything out of the gate. With 3.5, you're like, ah, it kind of showed something on screen, but you can't log in. With 3.7, you can log in, but it doesn't work. 4 works a little bit, but message sending fails.
4.5 didn't just implement it — it also implemented our whole Artifacts feature in its prototype. So you think about that whole progression, and it's a fun time-lapse just watching it do that.
So I think it is really important, even if for most tasks you're not going to set it off on a, you know, 30-hour task like a couple of our customers did during testing. The fact that you could, I think, lets you start trusting it for longer and longer tasks.
The way I see our engineers actually use Claude Code now is they have three or four terminal windows open, and they're running Claude Code concurrently on multiple ideas at once. And the only way you can do that is if it's not going to interrupt you every 10 seconds and be like, "Wait, what did you mean again?"
And if you trust that it's not going to go off the rails. I think it's, if not the most important, one of the most important metrics to look at: how long is it going to maintain coherence?
Yeah, that claude.ai example you gave is crazy, because do you know how many man-hours it actually took to build the real product? A lot. It's more than an hour. It's more than four hours.
More than anything on this chart. And yeah, that's a very mind-blowing concept: we're still saying it's working for an hour, it's working for two hours, but it's maybe doing hundreds of man-hours of work if you were to try to create an apples-to-apples comparison. It's hair-raising.
Yeah. I mean, it's exciting. It's exciting. It really is. And it hopefully makes us more flexible, too, because I think there's a lot of sunk cost that happens in software, right? Like, oh, once I rebuild it, it'll be so good.
One of the things we did well at Instagram was saying, yeah, you built it, but if it's not going to be good, just rip it out, you know, and start over.
One of the most magical experiences I had with Claude Code recently: I don't write a ton of software or build a lot of applications, but I do a lot of research. So I asked Claude Code to do a deep research report, but instead of just creating a Markdown file, like I would get in any other deep-research product, I asked it to build an HTML5 website with all of the nice features — bar charts and graphs and all that stuff. It was okay on the research side, but it was a really cool glimpse into this generative-UI world it feels like we're going into.
And I'm wondering how you see that flowing into consumer right now. When I think of Claude Code for deep research, I think of a prosumer product.
And I'm wondering if you think there's a path — maybe it exists through mobile, maybe iframes. How does this actually work its way into, like, my mom's life? You know, someone who's not going to understand the terminal and get fired up that way. Yeah.
Something we put out in our mobile app about a month ago that people have been enjoying is giving the same sort of agentic, multi-turn thing in Claude, but with your local mobile capabilities. It's a little easier on Android — you can do a lot of this on iOS, but not all of it. Sure.
Um, and you can say things like, "Hey, you know, look at my calendar for tomorrow. Like, I have these three meetings. Do some local searches. Like, find out where I should have coffee. Remind me to like grab my coat before I leave.
Um, and by the way, compose a text to these three people, because I need to let them know about it." You'll see it work through all of these local tools. You can't send it automatically, so you still have to press a button to, like, send the message.
So it's not all the way there, but I think that's where it's going to start showing up in consumer applications. And I think, for better or for worse, it's going to require more partnership with the device makers, because a lot of that's gated on the OS. You can do a lot more on the web.
But I think that's where things are going. I think that's where it will start to feel real to people. What's the biggest lesson from Artifact?
I was a huge fan of the product, and I feel like, with where we are in terms of how effective the models are at scraping the entire web and creating summaries, it feels like we're on the cusp of another breakout product, or problem-solution, in that space. What's your take?
It feels — I mean, for better or worse, it feels like Claude could one day function as what I loved about Artifact. Right. I don't know. And yeah, you can already, you know, have it search the web, do the synthesis, and then create a briefing for you. I think one underappreciated Artifact lesson was: if your product really shines when it really knows you and is really personal to you, that's a hard sell for newcomers, right?
Our retention was actually really good if you stuck around, but it was that kind of activation that was the trickiest. Plus, any shareable thing in news is really difficult.
But think about that now with Claude. If you ask it to do something, we can't expect you to have already taught it a bunch of your preferences or connected all the right things. But we can actually get there, because Claude can be conversational.
It can say, "I'm going to be way better at this if you let me connect to your Google Drive. Is that okay?" Right? Or, on iOS, "it's going to be way better if you connect your maps or your calendar for this."
So that's where I think we need to go: don't expect people to have done all the upfront work, because that's going to be too hard for most people, and then they'll churn.
But the nice thing that's different now, in 2025, is that the model can have the conversation about how to get to the state where it's actually going to be useful to you. Everyone in tpot on X — in tech Twitter — is reacting to Dwarkesh Patel's interview with Richard Sutton.
Are you bitter-lesson-pilled? Is Anthropic bitter-lesson-pilled? How did you and the team process the debate between Dwarkesh and Richard Sutton? I think we're overall super bitter-lesson-pilled. What an interesting example — and this is kind of a bitter-lesson derivative, but it feeds into the same idea — you mentioned deep research. Our original advanced research implementation in claude.ai was thousands of lines of code and some pretty complex infrastructure.
We've since re-implemented it — we haven't launched it yet; it'll come out soon — on top of basically the Claude Agent SDK, which we put out today, and which is basically what Claude Code runs on too.
And it was basically just a prompt and some tools, and none of that custom scaffolding and instructions and all of that. That, to me, speaks to the same idea: you want to let the model do as much as possible itself.
Don't try to overly specify all of the steps and the infrastructure and all those different pieces. So that definitely shows up even in product, which is, you know, not what the bitter lesson is about, but it feeds into it as well.
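The "just a prompt and some tools" shape he describes can be sketched as a plain loop. This is a generic illustration, not the Claude Agent SDK's actual API; `call_model` is a hypothetical stand-in for the LLM call:

```python
def run_agent(prompt, tools, call_model, max_turns=10):
    """Minimal agent loop: each turn, the model either requests a tool
    call or returns a final answer.

    `call_model` is a hypothetical stand-in for a real LLM call. It
    receives the transcript so far and returns either
    ("tool", name, args) or ("final", text).
    """
    transcript = [("user", prompt)]
    for _ in range(max_turns):
        kind, *rest = call_model(transcript, tools)
        if kind == "final":
            return rest[0]
        name, args = rest
        result = tools[name](**args)              # run the requested tool
        transcript.append(("tool_result", name, result))
    raise RuntimeError("agent did not finish within max_turns")
```

The contrast with thousands of lines of bespoke orchestration is the point: the loop specifies almost nothing, and the model decides which tools to call and when to stop.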
But I think overall we're very bitter-lesson-pilled, while also still really bullish on the possibilities of RL — and you see it in how much the models have advanced even since February.
Are you bullish on this idea that there will be some sort of new paradigm or new buzzword in a few years that describes a material change in architecture, or just in how AI research is done?
We certainly saw this when we went from just transformer-based large language models to reinforcement learning with human feedback, to RL in synthetic environments instead of synthetic data. Does it feel like we're continuing to come up with new ideas? I think so.
And I think the other piece that's going to be really interesting — one is just that the scale of all of these RL runs is getting enormous. And the second is that the model's ability to maybe actually introspect and propose ideas will be interesting too. I don't think we're there yet, though.
And when we get there, we ought to do it very carefully and with the right kind of safety guardrails in place, but I think that will potentially yield some divergent ideas at that point. What are your thoughts on hardware — what's going to change in hardware over the next few years?
Instagram was so interesting. I remember that it was designed to be used with the phone vertical — it was such a vertical app, even though people were so used to turning their phones. And in some ways the hardware — like, the medium is the message.
The hardware kind of defined what you built, but then eventually, you know, we got more cameras on the phone, probably because of Instagram. And I'm wondering if you think AI will change hardware, or whether we'll get entirely new hardware to interface with AI in different places.
Just, what's your view on hardware over the next couple of years? I think on the consumer side, you know, the AirPods Pro are going to be the sneaky AI, you know, sort of thing that actually goes mass-market. Not a super controversial take.
I think the other one that people are thinking less about is the business side of this. You know, we've designed these open office plans, but I think in the future we're going to actually talk to our AI a bunch in order to get stuff done, to delegate tasks. It's really nice already.
People have hooked up Claude Code with, like, Whisper or something so they can talk to it, and it feels very natural. And so, what does that do for office design? Are we all going to be using these, like, subvocalization things that people are prototyping now? Yeah.
Um, but it's like, you know, walk through an office and it's a bunch of people mumbling, like, what's that going to be like? Or, or the dark sci-fi version: it's wall-to-wall phone booths, you know, just, like, thousands of phone booths and everybody's sitting in their phone booth, recreating, like, 1960s IBM. I feel like there's got to be something better, too.
Um, I'm also really interested in the sort of multiplayer side. Like I mentioned, uh, Claude talking to us in our social channel, but there's also: what is it like to not just have an AI notetaker in these meetings, which everybody has, but an actual participant? And how does it know when to chime in, and what is it listening for?
Can it be like, hey, you guys haven't talked about this thing you said you were going to talk about? Or, hey, I'm detecting a lot of, you know, disagreement here. Can we move forward?
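The two chime-in triggers Mike describes — an agenda item nobody has raised, and a rising level of disagreement — can be sketched as a toy heuristic. Everything here (function names, keyword list, threshold) is hypothetical, not any real meeting-agent product:

```python
# Toy "when should the AI participant chime in?" heuristic.
# All names and thresholds are illustrative assumptions.

DISAGREEMENT_CUES = {"disagree", "no", "wrong", "but", "actually"}

def unmet_agenda_items(agenda, transcript):
    """Return agenda items that never appear in the transcript so far."""
    spoken = transcript.lower()
    return [item for item in agenda if item.lower() not in spoken]

def disagreement_score(transcript):
    """Fraction of words that are (very crude) disagreement cues."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in DISAGREEMENT_CUES)
    return hits / len(words)

def should_chime_in(agenda, transcript, threshold=0.08):
    """Chime in if agenda items were skipped or disagreement runs high."""
    missed = unmet_agenda_items(agenda, transcript)
    if missed:
        return "Hey, you haven't talked about: " + ", ".join(missed)
    if disagreement_score(transcript) > threshold:
        return "I'm detecting a lot of disagreement here. Can we move forward?"
    return None  # stay quiet
```

A real participant would obviously need speech understanding rather than keyword matching; the sketch only shows the shape of the "listen, then decide whether to interject" loop.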
So that's going to be interesting. It's like, the hardware necessary for that won't be super complex, but it will need to show up in more and more places. Yeah. How do you guys think about, uh, what comes to mind around the word focus? Anthropic feels incredibly focused, and not every lab feels that way.
Is that something that you guys, you know, just keep coming back to, or, uh, what does that mean to you? Oh, for sure.
Like, I think one of the things that we've done well is there's been that extreme focus from the research side, I think, you know, before anything else, which is like: here's the path we're going to build. We're not going to have as many people, perhaps, or as much compute, but we're going to remain focused. But that is actually another kind of bridge to Instagram. We were very, very focused. I think they finally shipped the iPad app. People always asked, why doesn't Instagram have an iPad app? And it's focus: like, you know, you're not going to get a lot of new users that way, and most of the usage is going to be on the iPhone and web. Web itself took years, and it was like, you know, people think we were stubborn, but it was actually, you know, every new platform you add is another thing that you're going to have to think about.
Um, and, you know, yes, you can staff up for it, but it's just that coordination cost. It's the kind of additional overhead. So we've really pushed the focus thing here as well.
Um, you know, it's like, do more with less, and do fewer things better. But I think it's really important, and it's meant that we've got more of this sort of prosumer, power user, and then business lens, but I also think that's an interesting place to occupy in the market. Last question from my side.
Jordy, do you have another? Yeah, kind of a last question. We're having, uh, Dylan on in a bit, uh, to catch up on Figma Make, and I know you joined the board prior to the IPO. With Figma, you know, the AI opportunity and current reality are very obvious, right?
It's a place that people go to make things, uh, and build software. But what are your conversations like? I'm assuming other public SaaS CEOs and boards reach out to you all the time trying to get your read on where software is going. Are those conversations happening?
What do they look like, uh, today? Yeah, I mean, I think if you think the models are going to be able to act agentically and retrieve over, you know, your, uh, employee records, plus connect them to your current work in Google Docs, all these pieces.
I think the concern from a lot of these SaaS providers is, do they become just document and data repositories, right? And how do you, like, modify that? But then I think the way of making that actually interesting is, um, if you actually connect one agent with another, which doesn't sound that profound until you actually try it.
It's really powerful. I saw this demo of, uh, Claude talking to an agent that was built on top of some Salesforce data, and rather than just, like, searching for Salesforce data and getting results back.
It had a conversation with it. They had, like, two back-and-forth exchanges where the Claude agent was like, here's the email I'm thinking of sending to this customer, and the thing built on top of Salesforce was like, no, no, they're not going to like it.
I looked at the, like, earlier messages, and they really hated that turn of phrase. They ping-ponged back and forth with actually very little human input, and at the end had a message, like, ready to send.
So that's where I think you can move from worrying about, oh, are people going to visit my website less because of agents, to more like, well, maybe not, but if you're actually building value beyond just the data store, you can still play a really key part in that exchange.
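The draft-and-critique loop from that demo can be sketched with two stub agents. To be clear, these stubs stand in for a Claude agent and an agent sitting on Salesforce data; no real Anthropic or Salesforce API is used, and all function names and the `history` shape are invented for illustration:

```python
# Hypothetical sketch of the agent-to-agent exchange described above:
# one agent drafts a customer email, the other critiques it against
# (stubbed) CRM history, and they ping-pong until approval.

def drafting_agent(feedback=None):
    """Propose a customer email, revising if the other agent objected."""
    if feedback is None:
        return "Hi! Per our synergy roadmap, here's an update."
    # Revision pass: drop the phrasing the data-holding agent flagged.
    return "Hi! Here's a quick update on the work we discussed."

def crm_agent(draft, history):
    """Approve or object, based on what this customer disliked before."""
    for phrase in history.get("disliked_phrases", []):
        if phrase in draft.lower():
            return False, f"They hated the phrase '{phrase}' in earlier messages."
    return True, "Looks good, ready to send."

def negotiate(history, max_rounds=3):
    """Ping-pong drafts between the agents until the CRM agent approves."""
    draft, feedback = drafting_agent(), None
    for _ in range(max_rounds):
        ok, feedback = crm_agent(draft, history)
        if ok:
            return draft
        draft = drafting_agent(feedback)
    return draft
```

For example, `negotiate({"disliked_phrases": ["synergy"]})` returns the revised draft after one objection, with no human in the loop, which is the shape of the exchange Mike describes.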
Yeah, it goes back to what we were talking about earlier, uh, Claude being able to generate images but then just using an Excel sheet as a canvas.
There's a world where you might be able to train a model that's super expensive to inference that can, you know, just do all the math in the world and has every number, every calculation memorized, or you could just say, "Hey, here's Python." Like, just write some Python and you get the exact result.
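That "just write some Python" idea is essentially a code-execution tool: the model emits an expression and the harness computes it exactly, instead of the model memorizing arithmetic. A minimal sketch follows; real agent harnesses sandbox far more carefully, and this illustrative evaluator only accepts plain arithmetic:

```python
import ast
import operator

# Minimal, arithmetic-only expression evaluator standing in for a
# code-execution tool. It walks the parsed AST and rejects anything
# that isn't a number or a basic arithmetic operation.

OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def eval_arithmetic(expr):
    """Evaluate an arithmetic expression exactly; reject everything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("only arithmetic allowed")
    return walk(ast.parse(expr, mode="eval"))
```

The point of the trade-off: `eval_arithmetic("12345 * 6789")` is exact and cheap, whereas baking every such product into model weights is expensive and lossy.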
Uh, and so it's a beautiful synergy. Uh, well, thank you so much for coming on the show. Congratulations. Congrats on the launch. We're looking forward to five, or 4.999. 4.999 would go extremely hard. I would love that.