Greg Brockman traces OpenAI's journey from GPT-1 to GPT-5 and explains why software engineering is being revolutionized
Aug 7, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Greg Brockman
We'll bring him in. Greg, how are you doing?
Doing great. Thank you.
Congratulations. How are you feeling? How's the company feeling? It's been such a wild journey. Just take me through a little bit of the vibes at the company and how you got here to today.
Well, I'm excited. The whole company's excited, and honestly, I'm just so proud of the team. It's just been amazing to watch people come together, not just for this launch. And you know, the funny thing is that behind the scenes people are always putting in the last-minute adjustments and polish and scaling up the capacity, and there's always something that goes wrong before launch day. There are a lot of people who worked late into the night or really crunched to bring this release to the world. And it's a little bit like the duck, you know, calm on the surface but paddling under the water,
but that also describes the whole OpenAI history, right? I think that we have put in many years' worth of investment into the techniques used to produce this model, and really it's across just every function within OpenAI that has come together to make this a reality.

Yeah, I mean, you've been there for every GPT release. How do you think about summing up each iteration in kind of one line? Because GPT-1, GPT-2, GPT-3, these feel like similar architectures, or at least history's kind of compressed them into similar architectures. But how do you think about the progression of just the big numbered releases?

Yeah, it's interesting because in some ways it's a punctuated equilibrium, but on the inside it looks very smooth, right? Even before the GPT series formally began, the first result that really set the path we were heading down, and made it clear we were going to pursue it, was the unsupervised sentiment neuron, which was an LSTM, in like 2017. So a different architecture from today's transformers, and it was the first time that you could train a model to predict the next element. So we predicted the next character on Amazon reviews and we were able to get semantics out, right? Because you expect, okay, yeah, it's going to learn where the commas go, maybe what nouns and verbs are. But the idea that it would learn a state-of-the-art sentiment analysis classifier, that was mind-blowing.
And so I remember seeing that result in 2017 and thinking, we have to scale this up, we have to see where it goes. And so GPT-1 was, I think, a good sign of life: you train on sort of all the public data you can get, you use a transformer, and you're able to get state-of-the-art on various downstream benchmarks. So you have a model that clearly learned some representation, something useful about the data it was shown, and it's applicable, you can use it for various tasks. But we didn't really think very hard about the generation side. GPT-2 was the first time that we were like, all right, the samples we're getting from it, the things it actually generates, they're kind of cool. And I remember reading, in the GPT-2 blog post, we have this unicorn story where it generates some fictional story about a herd of unicorns. And it was just so cool. It was like, wow, it wrote a story that's actually kind of interesting. It doesn't totally make sense, but there's something here. There's some real spark of intelligence within this model. GPT-3 was the first time that we had a model that was just barely above the threshold of something people would want to use. And I remember working on the GPT-3 API. This was our first real product. And it was actually the hardest product, the hardest project in total, I've ever worked on, because it just felt like maybe no one wants to use this model, we don't really know what it's useful for. And it certainly was the case that GPT-3 was a great demo machine. You could make really awesome tweets and cool little apps and it would give you quick answers, but it didn't feel very reliable. And then GPT-4 was something that actually felt like it had real-world utility. It was above some threshold. It was something that was helpful for health, something that was helpful for starting to be good at coding. And GPT-5 I think just sets a whole new standard for reliability and utility. Things like coding, clearly we were already on this trajectory of transforming software engineering this year, and I think it's really on the trajectory now to be revolutionized. So it's just really exciting to see that whole arc.
When did the API opportunity really click for you? Because I do remember companies in that era that quickly unlocked the power of the API and grew tremendously. When did that opportunity click? Because you said initially you had some, I don't know, concerns, kind of doubts about how useful it was going to be. And then when did the consumer opportunity click?
Well, at the end of 2019 we had GPT-3. We knew we needed to build a product to be able to actually continue the mission, to be able to raise capital. But what did we want to build? We're really here because we believe in AGI that's going to have this powerful, positive, transformative effect on society, and we want to be part of it. And so we thought, well, maybe we could build something in health, and then you realize, okay, well, we're going to sell to hospitals, and we're going to maybe hire
Let other people do that.
Exactly. Right. It's just like, you have to go into one domain, and that means giving up on the G, the general, right? It feels like you're going to become one particular thing, but we kind of want to be supporting all industries at once. And so the idea was, let's build an API and let people figure it out. But this is totally not the way you're supposed to build a startup, right? You're supposed to have a problem. No one cares about the technology behind it. Add value to that problem. Focus on just that one thing. And so that's why that project was so hard. And in January of 2020, February of 2020, I, with the team, was going around trying to just find anyone that would be willing to try this API. And we were driving to different offices in San Francisco being like, "Hey, we have this cool model." And it was hard enough to get people to take the meeting, much less to sign up their company for it. It was actually very fortunate that we found a couple of good partners, and it was fortunate that it happened then, because come March 2020 and COVID, we weren't driving around to people's offices to try to beg them to use this budding new technology. So it was really six months' worth of grind. When we started with GPT-3, I remember the inference code was not very well optimized. It was like, I don't know, 150 or maybe 250 milliseconds per token or something. And we just optimized and optimized, got it down to like 50 milliseconds per token, which, by the way, today's models run much faster than, which is kind of amazing to me, just seeing how much faster we're able to run them with much greater intelligence. And I remember setting two goals for the team. One was to actually find one customer who's willing to pay, so literally get a dollar in for this thing. And the second was to get a use case that we use at OpenAI every day. That first one happened within the first couple of months. So at that moment I was like, all right, this thing is probably going to work. But in order to get there we had to do a bunch of scaling of the API and really do the product work. But that second one took much longer, and that wasn't really until ChatGPT. And so if you fast forward a couple of years, because this was mid-2020 when we first got the API into the world, ChatGPT we didn't release until November of 2022. So you're talking about a decent period of two years there, a little bit longer. And I remember we were building, you know, people have talked about how we were going to call it maybe Chat with GPT-3.5. We had a sort of precursor product called WebGPT, built on 3.5, that we were literally paying contractors to use. So this was all throughout 2022. We basically had the ChatGPT precursor that we had to pay people, they would not pay us, we had to pay them, to use this thing. And
That's wild.
The moment for me that really clicked was actually when we finished training GPT-4. So that was August 8th of 2022, which is like three years ago now, almost to the day. It's pretty wild to realize that.
And we did the initial post-train of GPT-4, and honestly it had a bunch of bugs in there. It was broken for a bunch of different reasons, but
the model was extremely creative. It was actually really interesting. It took us about a year and a half to get to the point where the creative writing of our models matched that initial one that was buggy for various reasons. And I remember we had an instruction-following dataset that it was post-trained on. So we had collected examples of: here's a human asking for a thing, here's what the model should do. So it's really not trained to do multi-turn. So I asked it a question, it gave a response, but then I was like, well, what if we just ask another question? And it actually was able to leverage that full context. It actually was able to have a coherent chat. And the moment we saw that, we were like, okay, this thing is capable not just of being post-trained to do this very specific thing, but it can generalize, right? It can kind of do the intelligent thing even though it wasn't directly trained for it. It was just so clear this was going to be the killer application. And so then we were planning on launching GPT-4 in early 2023, and we had this chat infrastructure we'd been working on, and it was so clear, okay, we're going to have to release the infrastructure and the model and it's going to be this amazing killer product. And so, almost as infrastructure ahead of getting the real thing out, I was excited for us to do ChatGPT, and that's why we did it then and saw it come to life in November. So I think that for me, I was really focused on GPT-4 as the model, this is going to be the chat moment that's really going to work, and I kind of missed the fact, because every time you see these new models you sort of see only the flaws in the previous ones, I missed the fact that GPT-3.5 was something that no one had really tried before, in the broad sense of society, and that it was something that was already useful and that people would respond to.
Was GPT-3 kind of the main pivot point for shifting the company towards LLMs? Because in the prehistory of OpenAI there were a lot of other, maybe expensive, training runs. I don't know how much financial risk was taken with the OpenAI Five project or the robotics projects, but it feels like at a certain point the chat became the main financial risk vector. So I guess the question is, it feels like GPT-3 was the moment when you shifted. I'm also interested in hearing about, Ben Thompson called OpenAI the accidental consumer company, and I'm wondering when that narrative set in for you. When did it become clear that this was going to be a really, really powerful consumer application?
Yeah. Going from paying people to use your product to people saying hey we want to give you money for this.
Yeah.
Yeah. A very important transition it turns out.
Yeah. So, it's a great question. I would say that if you rewind to the beginning of OpenAI, there are many people who in retrospect say that we set out to prove that scale is how you make progress in this field, but it's almost the other way around. Scale was the thing that worked; we tried a bunch of things that didn't pan out. And the first time we saw this concretely was in our Dota project. I remember my collaborators Jakub and Szymon trained the very first little agent on like 16 cores or something and left it running on their desktop over the weekend, and we came back and, in this very constrained mini environment, the model was doing something smart. It was actually able to solve this kiting environment, and that was pretty cool. And then they and the team just kept scaling up. We had all these free cores that were just sitting idle on AWS at the time, and they just kept throwing more compute at it, and every time they would do that, the model would just get better. And so when you look at something like that, you're like, well, you just have to see where this goes. You have to push it until it hits the wall, right? And our goal with Dota was actually to develop new reinforcement learning algorithms, because the common wisdom at the time was, well, the existing reinforcement learning, PPO, doesn't scale. Everyone knows that. But the question from Jakub and Szymon was, well, why do we believe that? Has anyone actually tested it? And no one had really tested it. And so I think that ethos of saying you have to push the existing techniques to the wall until they break, and then once they break you actually have a baseline to overcome, means you win either way: either it just exceeds all the humans in terms of the specific capability that you're trying to exercise, which was the case for Dota, or it hits a wall and now you have a real problem to solve. And so I think that ethos really got embedded in our DNA. And at the same time, I think we were really thinking about how do we get to AGI, right? Ilya and I spent a lot of time thinking about that question of where's this company going and how do we actually achieve it, and you start to do some math in terms of the kind of compute that it would take to get to AGI, and you just start to realize you're going to have to build really big computers, and those are extremely expensive. And so I think from these early foundational results and thinking, we kind of realized the path that we were going to have to walk.

So, it seems like there have been a few walls that we've scaled up through and then maybe hit. There's been talk of a pre-training wall. Now we're putting tons of resources and compute towards reinforcement learning. Is there a third scaling curve that we're going to be talking about in the next few years? Are we continuing to scale up those two primary vectors? Is that too high-level of an abstraction in terms of how we should be thinking about progress along the vector of scale? Give me the up-to-date thinking on the fruits of scale.
Yeah, I'd say fundamentally deep learning, you know, people talk about the bitter lesson. It's almost this exploration into how do you convert compute into intelligence, right? We have some particular techniques to do that that we're constantly fleshing out, and the thing that's really amazing is if you rewind to
I don't know, even the 1940s, to the McCulloch-Pitts neuron, which is kind of the precursor to neural nets, if you look at that paper, they have all these diagrams that actually look very similar to the kinds of diagrams we draw now of multi-layer neural nets and things like that. The basic idea of what we're trying to do has not really changed in almost 80-plus years, which is just a wild fact. It means there's something deeply fundamental about the thing that we are pursuing. And that idea itself, I think, kind of came from trying to model the information processing of the brain, and it's imperfect and not an exact analogy to biology, and there are all these reasons that it should fail or that people have said this thing is doomed, but the results are undeniable at this point. I mean, some people try, but it's really hard to close your eyes and sleep on this, in my mind. And it's very interesting, you can find quotes from the mid-1960s of people trying to pooh-pooh the whole direction, saying that these neural net people have no new ideas, they just want to build bigger computers, and you could basically say something very similar today. What we are trying to do... one moment
little water break
second. Yeah.
Exactly.
For all of us. Cheers.
Exactly. Cheers.
You know, we're all human.
A proof of humanity right there.
Exactly. So, what we're all trying to do is find novel ways of taking compute and really harnessing it.
And sometimes you hit a wall, but these walls tend to be ones that you can drill through, right? What we've found is every time you scale up, everything, all of your engineering, all of your scale invariances, all these things get stressed to the next level. It's almost that the tolerances become tighter and tighter. It's like launching a 10x bigger rocket means you need to be like 100x more precise on everything, but it doesn't mean that the fundamentals of the science are different. So pre-training: there's definitely been a lot of discussion of a data wall. That doesn't mean it's fundamental. It just means that we need to be better and more precise at what we're doing. Then there's RL, which has gone from spending a small amount of compute to much larger amounts of compute now. And then there is a third way that we're really harnessing compute, which is compute at test time. And we've published some scaling laws around this. And all three of these things multiply. That's the amazing thing.
And of course the compute and the harnessing of it is the fundamental goal, but you get these multiplicative effects out of all of it through the quality of your engineering implementation, through the quality of the datasets, through a bunch of the refining work that you do. And there are lots of different techniques and ideas. And that's what makes this field so rich and why progress is just going to continue apace.
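As a loose illustration of the multiplicative framing Brockman describes here, a toy sketch follows. The exponents and compute numbers are invented placeholders, not published scaling-law fits; the only point is that separate gains from pre-training compute, RL compute, and test-time compute compound when they multiply.

```python
# Hypothetical illustration of three compounding scaling axes.
# Exponents and baselines are made-up placeholders, not real scaling-law fits.

def capability(pretrain_flops: float, rl_flops: float, test_time_flops: float) -> float:
    """Toy model: each axis contributes a power-law factor, and the factors multiply."""
    pretrain_gain = pretrain_flops ** 0.05    # assumed exponent
    rl_gain = rl_flops ** 0.03                # assumed exponent
    test_time_gain = test_time_flops ** 0.02  # assumed exponent
    return pretrain_gain * rl_gain * test_time_gain

# Scaling any single axis 10x helps, but scaling all three compounds:
base = capability(1e24, 1e22, 1e12)
all_scaled = capability(1e25, 1e23, 1e13)
print(all_scaled / base)  # combined multiplier from improving every axis at once
```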
What about on the infrastructure side? You guys have been busy scaling up. What can you share on that front?
Well, so I run a team called Scaling at OpenAI, and we really focus on building the infrastructure for scaling. And this is in partnership with really everyone across the company. It's almost a misnomer that our team is called Scaling, because fundamentally the entire effort is about scale. But what we really try to do, on the physical infrastructure side, is deliver as much compute as humanly possible, and that is in partnership with companies like Oracle, SoftBank, and others. We've been able to deliver just increasing amounts of compute to OpenAI. But we're constantly thinking about how do we deliver more flops and do it more efficiently, earlier, cheaper, more power-efficient, all of those kinds of questions. There's the software infrastructure side as well, really thinking about how do you coordinate massive numbers of GPUs to work across one synchronous training run. How do you coordinate that for reinforcement learning? How do you deploy that into production and bring these models to life at massive scale? And I think at every single layer of the stack there is innovation required. And that's something that's very easy to miss. One way I think about research, and this is kind of the view from Jakub, who's now our chief scientist, is that there's a research stack. At the top of it are people running experiments and coming up with new ideas for how to utilize data or something like that. There's a middle of the research stack of people thinking about how you take these different ways people are running experiments, train in novel ways, and put together the pieces differently. And then there's a bottom of the research stack, which is like writing CUDA kernels to get the absolute max out of the GPUs. And at every single layer here you get a multiplicative factor through innovation. So it all comes together as one big whole.

On scaling, I'm interested to hear about, if we think about the impact of AGI or the impact of AI as some sort of quantitative GDP metric or qualitative impact and good, is there an important factor of scale in not just the flops that are going into the models, into the pre-training, into the RL, into the test-time inference, but actually just the flops that are going into the usage of AI within humanity broadly? I feel like that might be the next scaling curve that we're seeing: as more people use models, they see improvements all over. Is that something that we should be tracking, to see, instead of these S-curves, the continual exponential?
I think that's a great perspective, because at the end of the day, if you look at the shift from something like Dota, which we pursued because we wanted to do new algorithmic development, but really it almost validated how we scale up existing algorithms, and there was no illusion of delivering direct economic benefit from it, to the current models, we're starting to end the era of pushing on these academic benchmarks. You look at things like the IMO at this point:
models are able to get a gold medal on it. The hardest academic benchmarks that are available are sort of no longer the guiding light of progress for these models. Where we actually want to be is for AI to be helping everyone, to be something that uplifts humanity. And that's the final metric: how much does it actually benefit everyone, how much value does it bring to the world?
Yeah, not just HealthBench. It's actually how many people did you solve their healthcare problem for, right?
Exactly, yes. And that's the actual goal, and that's what's exciting. We're moving from the lab to reality.
Yeah.
And I remember in the early days, as we were thinking about how do we measure our progress towards AGI, we always sort of dreamed that one day we would be able to measure it this way. And you can think of revenue maybe as a proxy metric for value delivered to the world. It's not perfect, but it's at least something. You can think of the distribution of how much compute goes into it, how many people are using it. But fundamentally, what we're after is how much do we really uplift humanity through this technology?

Yeah, I mean, I might be misreading it, but I'm pretty sure that was the Kurzweilian, the Ray Kurzweil philosophy: that the total number of flops would get immense, not necessarily all in one data center for one model. It was that compute broadly would be spread so wide.
Yes. Yes. And I remember, on that chart, you can see the total compute of all human brains. Yeah. Which really suggests a particular vision of how this technology will be rolled out.
Yeah, distributed. The phones count toward the impact. The Wi-Fi router counts toward the impact of the internet just like the phone does. It's not just the big pipe, the backbone of the internet, that actually matters.
Deep research is a hit product. Almost everybody I know, at least in the
Mark says he's reading 30 pages of deep research a day. Basically he loves it.
He's making books with it. But why have agents broadly come around a little bit slower than people may have expected? Is it just that using computers is actually much harder, that computer use is just a really hard challenge? Or, you know, I think going into this year everybody said this was the year of agents.
Booking, you're talking about flight booking. But, you know, people were saying 2025 is the year of agents, and I would say that it's the year of deep research
and not a lot of these other sort of broader use cases.
Sure. Well, 2025 isn't quite over yet. So that'd be my response. And I
I'm very much of the view that progress in this field, the way it tends to work, is that if something kind of works with the current generation of models, it will be extremely reliable with the next generation of models. And I think that where we've been is that deep research, if you'd rewound a year, was the thing we kind of had working. And then this year it's been just incredible. And I think that agents, specifically computer use agents, are something we've kind of had working, and again, the year is not over. I think there's a lot of rapid progress to be made. But I think maybe part of it too is that the agents we're about to see are a little different from what we would have pictured five years ago. I remember having a debate with some friends on whether you want an agent that does the flight booking, because the problem is it's actually a very high bar to beat the flight booking UI. There are so many preferences entailed in that, and you really have to know what mood you're in, like are you okay with taking the extra layover, all these kinds of questions. And there's so much other stuff that happens in your life that is toil or drudgery, or that's something you're not an expert in. Think about health: every patient really is the doctor if you're coordinating across multiple specialists. There's no doctor that helps you with that. That's really on you, and there you actually can have AIs that are just text-only that are able to add massive value, and that frees up your time if you want to go book the flights yourself. And so I think it's really about finding the right problems that have high leverage, that really add value to people, and also thinking about the other side: how to make sure these agents are responsible with the trust that you put in them. The more you give an agent access to your email, the more you really have to trust that it's going to do right with whatever your task is, send the right email to the right people, and be able to segment your information, all these kinds of questions. And so I think there's both a practical question of how you get to adoption, but also, where are the most important leverage points in a person's life?
You also missed coding agents, because it's been the year of deep research, but I feel like it's also been the year of coding agents. How is that developing at OpenAI? I've noticed that I'll hit o3 Pro and it'll wind up writing a bunch of code for me and I didn't even ask it to. Then you have specific products for coding. How do you see the evolution of software development playing out? How are you seeing OpenAI customers use coding tools, and how good is ChatGPT or GPT-5 at coding?
Well, software engineering is definitely being revolutionized in front of our eyes. It's been happening, and GPT-5 is the best coding model in the world right now. It's the default now in Cursor, which I think is a really huge statement about the quality of the model. It's just so good across every function: writing code, understanding codebases, being able to use tons of tools, being able to do agentic work. And I'm not a front-end developer at all,
but actually now I am, right? And I think that you are too. If you just talk to the model you can produce incredible things, and so I think there's this real empowerment. If you think about what computers were supposed to be, computers are supposed to be a thing that makes you more productive. But somehow when we started out with computers, you had to contort the human to the machine, right? Assembly language and all these very abnormal things for a human to do. And as we've moved to better tools, ultimately, in the current generation now with GPT-5, suddenly the computer comes closer to you. You just express your intent and you don't think about okay, exactly which language and what version of different libraries. The model is something you can delegate to. And so we are very committed to programming and to making our models continue to be the best they possibly can be.
Must a super intelligence be able to explain how to build super intelligence?
So, it's a great question. I think that where we're going is a world, and we're already seeing it, where these models help us produce the next generation of models. They also help us supervise tasks that are too hard for humans to supervise on our own. If the model writes a 10,000-line program for you, reviewing that is probably going to be quite burdensome. But if you can have a model that you trust, that maybe isn't as capable as the one that wrote all that code, or maybe there's a team of agents that work together to write all that code but you have a team of reviewer agents, this is the kind of thing where you can actually bootstrap trust. And I think this is one of the most important things. And also interestingly, 2017 is when we had the first language results; we also had some results, or some vision, on how you can actually bootstrap supervision beyond the scale of tasks that humans are able to supervise directly. And so I think we're heading to a world where, you know, we now have these chain-of-thought models. We've been advocating very strongly to preserve the integrity of the chain of thought. That means don't directly optimize it to look good, even though there will be lots of temptation to do that for various reasons. Really make sure that there's no pressure on the model to obfuscate its thoughts within that chain of thought, because then you can really see what it's up to. And I think there are further techniques to make it even more faithful to the internal monologue of the agent. And so I think there's actually a lot of promise in terms of interpretability, in terms of supervision, in terms of being able to scale to much more sophisticated tasks.
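A rough sketch of the writer/reviewer pattern described above, for illustration only. Everything in it is hypothetical: call_model stands in for whatever language model client you use, and the prompts and function names are invented for this example rather than taken from any OpenAI product.

```python
# Hypothetical sketch of bootstrapped supervision: writer agents draft code,
# and a reviewer agent filters the drafts. All names here are illustrative.

from typing import List


def call_model(prompt: str) -> str:
    """Stand-in for a language model call; replace with a real client."""
    # Toy behavior so the sketch runs end to end: approve anything it "reviews".
    return "yes" if prompt.startswith("Review:") else f"# draft for: {prompt}"


def write_candidates(task: str, num_writers: int = 3) -> List[str]:
    # Several writer agents independently draft candidate solutions.
    return [call_model(f"{task} (writer {i})") for i in range(num_writers)]


def approve(candidate: str, task: str) -> bool:
    # A reviewer agent, possibly smaller but trusted, judges each draft.
    verdict = call_model(f"Review: does this solve '{task}'?\n{candidate}")
    return verdict.strip().lower().startswith("yes")


def supervised_solution(task: str) -> str:
    # Humans audit the reviewer's verdicts instead of every 10,000-line draft.
    approved = [c for c in write_candidates(task) if approve(c, task)]
    if not approved:
        raise RuntimeError("No candidate passed review; escalate to a human.")
    return approved[0]


print(supervised_solution("parse a CSV file and report row counts"))
```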
Yeah. I guess my question is, how much information in the world can be derived from first-principles reasoning versus true secrets that need to be discovered by interacting with the world directly? It feels like it would be very difficult. I'm just wondering how intellectual property interfaces with superintelligence, or how, if you play this out, there are all these hard-won things. Dwarkesh has talked a little bit about this with continual learning. There are all these little subtleties that maybe aren't secrets, maybe they're not true trade secrets, you don't think to lock them down, but they're just things that haven't been codified online or anywhere. They haven't been given to anything that is surfaceable by the model. And I'm wondering, is it just that we need to build up new knowledge and every fact from first principles and kind of go through the history of humanity's pursuit of knowledge, or do we just need to onboard more and more information, or maybe it's both? I don't know. It's just something I've been noodling on.
Yes, it's a great question. I would say all of the above. Select all, star. So,
I think that the answer is very similar to what it is for humans. How does a human generate new knowledge? How do we accomplish new things? First, you want to be grounded in the wisdom of the past. You really want to understand what people have tried, what worked, what didn't work. You want to go and read the biographies of various people and understand those. But you also want to try things out. You want to make some mistakes in a contained environment, in a way where you actually can see the effect of your hypothesis. And then you want to be able to learn from those. And I think that being able to really start to scale up these systems and integrate them with the world is a very big process and milestone that we're currently embarking on: to move from a world of totally hermetically sealed reinforcement learning environments to thinking about how you actually put real-world interaction in there. And you think about things like robotics; you're going to need that at some point. You're going to need some sort of interaction with the real world, and to have models that are able to produce new materials, to be able to actually solve various diseases, for them to be able to really help people. We already have models that are great at use cases like therapy. But to really get to the next level, of something that can really help every person accomplish more and accomplish whatever their goal is, it would be very helpful for that model to actually have some real-world experience with doing that very thing. And so I think figuring out how to bring all this together is ultimately what our mission is about. And we do this not in isolation, but really as part of a much broader community.
It seems like it's advantageous to have the most dominant consumer app in that environment. So, congratulations. Jordy, do you have a last question?
Last question. What do you hope to see out of Washington, DC in the next year or two, not thinking super long term, in terms of basically promoting innovation within the United States? Obviously, the administration cares a lot about AI and has been making moves, but what else would you like to see, or where would you like them to double down?

Yeah, I've been very impressed with how much the administration has engaged with the technology and really tried to figure out how can we help and ensure that American AI continues to lead and really sets the standard for the world. And I think that is the lens that I would really encourage thinking through: this technology is changing very fast, and fast plus government is not usually an ideal combination, but this is the reality that we have. It's the opportunity we have, and I think the question in my mind is less about any specific regulation or strategy; it's really about being calibrated. It's really having a very tight OODA loop, right? Being able to react to: okay, we have a new model, these are the capabilities we see on the horizon, how do we make sure that we get the most uplift and benefit from it, and thinking strategically about not just how we do this for Americans, but how we actually do this for the world and promote democratic values. And so to me, the most important thing is that motivation: the question that is asked and the ultimate motivation behind what gets implemented.
Yeah, that makes a ton of sense. Thank you so much for joining us. Jordy, are you gonna hit the gong
for GPT-5 and the whole
Congratulations on the massive, historic day, and thank you so much for stopping by. We'll talk to you soon.
Thanks for joining.
Have a great day.
Thank you for having me.
Bye. Cheers.
Really quickly, let me tell you about figma.com. Think bigger, build faster. Figma helps design and development teams build great products together. And we are joined by Sarah Frier, the CFO of OpenAI next. And we are going to bring her in in just a minute. still swinging.
The gong's still swinging. And I'm going to tell you about vanta.com. Automate compliance, manage risk, improve trust continuously. Vanta's trust management platform takes the manual work out of your security and compliance process and replaces it with continuous automation. Whether you're pursuing your first framework or managing a complex program... we need one more second. Tyler, any other questions that we should be asking the OpenAI folks? Anything top of mind? What's on the timeline? Is the timeline still in turmoil or has it settled?
So, I think the general vibe is that this model was not benchmaxed,
but if you actually get to use it, it's pretty solid.
Cool.
One thing, it failed the TBPN bench.
Oh, it did.
It did not get the horse breed correct.
Get the horse breed. Wait, so you have it? You have access to
Yes, I have access. But I've seen other