Pat Gelsinger on AI's energy crisis, the 'siliconomy,' and benchmarking AI for human flourishing
Oct 2, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Pat Gelsinger
I think we should bring in our next guest, a man that should need no introduction, Pat Gelsinger. He's in the stream waiting room. We will bring Pat into the TBPN Ultradome. Welcome to the show, Pat. Great to meet you. Oh, so good to be able to join. Thank you. Thank you so much. So excited.
Uh, I would love a little bit to catch everyone up on what you're doing today. Take us through the current project, the current business, and what you've been focusing on over the last few years. Yeah, thanks John. Thanks, Jordi. So, uh, since, uh, Intel, I've been focusing, you know, I've been wearing two hats.
Uh, one is the deep tech investing hat, uh, with Playground Global. So the hard stuff, you know, quantum computing, superconducting, next-generation light sources, new materials, you know, those types of things. And then my other hat is, uh, being the head of technology for Gloo, uh, a faith tech company.
I've lived my life at the intersection of faith and technology. And, uh, Gloo is building the platform for the faith and flourishing ecosystem, you know, from running it all the way up through the latest and greatest, uh, trained models for AI services, you know, for the church and the faith ecosystem.
Yeah, I'm fascinated by Gloo. Um, we've talked to the founder of Hallow on the show, and it feels like an industry that is, uh, super ready to adopt technology, and yet there just aren't that many founders that are pursuing it.
uh would love to talk more about some of the findings that you put out when you investigated the foundation models. Uh how did you process that information? What was the uh methodology for evaluating these models? And then I'm sure we can dig into more about uh the consequences of those findings. Yeah.
You know, and maybe the first point is, you know, I've been driving benchmarks. Yeah. For 40 years. Yeah. Right. You know, my name, you know, right? In fact, some of the benchmarks that are still used today have my code in them. Yeah, it's sort of, like, sort of embarrassing, like, you could have moved on. It's amazing.
Yeah, you should have moved on by now. But anyway, some of my code is still there. So, I've been deeply associated with okay, if you can't measure it, you can't manage it. So, you know, this idea of measuring AI models and you know, people are working on this today.
Uh, but, you know, we're also seeing that, uh, they're just measuring performance, or cost: you know, time to first token, cost per token, right, you know, total token throughput. Those are great metrics, but it doesn't measure: is it good? Right? And, you know, the absence of bad does not demonstrate the presence of good. So, what we did, right, so the second point is, you know, there's this body of work that was fairly recently released, uh, by Harvard, Baylor, and Gallup, measuring human flourishing: a five-year, Lilly-funded, uh, study, uh, across 22 nations, across 50-plus cultural groups. What is human flourishing?
So now we can answer fairly definitively across multiple dimensions across you know relationships, character development, you know finances, spirituality and faith is it good?
So now we have you know a solid view for that and then the final piece is okay let's create benchmarks for AI systems using that foundational research and now we have you know a corpus of qualitative and quantitative results to measure good.
So if you say which is the better model for human flourishing, OpenAI, you know, Gemini, Anthropic, you know, Claude, I can give you an answer based on a solid corpus of data, with the most rigorous of, you know, benchmarking experiences and numerical analysis sitting behind it.
And we believe, and our objective is, that the result of that is we will make models better, because they can start measuring not just time to first token, but: did I give a good answer that's aligned with human values? How do you think about that benchmark of human flourishing?
Like, when you go through your life, are you thinking about your finances, character, happiness, relationships, meaning, faith, health?
Are you consciously checking in with yourself, or is this something that's more reflective on your life, where it all kind of rolls up into happiness or well-being, and then when you investigate your life, you realize that those are the things that you've done? How intentional have you been, I guess. Yeah.
You know, I'd say it's sort of in two different uh you know, dimensions there. I mean, hey, I don't go benchmarking myself all the time. But, you know, that said, you know, every time a major new model comes out, I want to ask the question, is it better? Yeah.
And on these dimensions, so you know, we'll be producing these benchmarks on a periodic basis every couple of weeks.
you know, we'll be, uh, updating those, uh, benchmarks to say, okay, are we better? You know, which model is better? Is DeepSeek better than OpenAI, than OSS? And so I want that rigor to be in place, that everybody is, you know, okay, you know, are we improving in this area? Because you don't fix these things overnight, you know; it's a long trajectory of, you know, evolution and development of the technologies. Secondly, oh yeah, you know, and then the second side is, okay, you know, you'll hear disastrous situations, you know, where we have kids committing suicide, you know, their best friend has become the chatbot.
Wow. When those things come up, would our benchmarks be detecting those?
Can we, you know, and then do we have to add more, and make them more rigorous, so that those kinds of things can now be avoided, you know, in a constructed way, you know, that we really are happy that we are making technology better? Or the language I like to use, you know: technology as a force for good. Technology itself is neutral. Are we going to bend that arc toward good? How receptive have the labs been to the benchmark? You know, I'll say it's early, uh, at this point, and, you know, we're interacting with all the names that you would like. Um, and I'll say, you know, they're getting better. You know, their scores have improved somewhat, you know, so I do think we're having some influence, uh, with them. But it's still early, and, you know, as I would like to, you know, also say, I don't think the benchmarks are rigorous enough yet. You know, we have sort of, you know, an early version of them out there. These are good, you know; 1,230 questions or so are part of them as well.
But I expect that becomes a bigger corpus, right, of questions. You know, also we've got to go deeper. You know, for the most part, these are single-shot, you know, questions. We've got to get into multi-turn conversations.
We have to, you know, have more of a mixture of experts, you know, that you're testing different experts. Also, how do we test the safeguards, right? You know, we're now at the sixth turn of a conversation. You know, this individual was clearly suicidal. you know, how do we go handle that? Yeah.
I think a lot of the safety concerns that at least we have is when people are, you know, many many many many prompts deep, right, in in a in a specific uh conversation. It could be thousands, right? Depths to which that personally I've never gone, right?
I don't know how crazy it gets that far, but certainly the labs need to be thinking about all those extreme edge cases. Yeah. Totally. Yeah.
So I think we have more work to do, you know, on, I'll say, multi-turn conversations, and how we build more of that robustness, uh, into the benchmarks. But, you know, what we have today, and you can go to the Gloo website, right, the Flourishing AI benchmark, you know, and see what we have today, you know, this is a good start. But, you know, I do think, at the speed of evolution of the AI technologies, we have a lot more work to do, and I'm sort of excited now, as we get some of that momentum underway, you know, to really start crawling into, just like you said, more of the multi-turn conversations. And we can be pretty aggressive in the questioning sets for multi-turn, to drive some of those extreme behaviors, you know, because, you know, the first few questions are mostly driven by prompts, right?
You know, as you get deeper into it's much more reflecting what's the underlying model weights, you know, themselves. So, all of that we have more work to do, but I'm excited to have this first version out there and we're starting to get some impact and good results.
Yeah, there also seems to be some sort of unresolved question around alignment by default. How much do the models just reflect you? If you show up with a lot of negative energy, they can kind of become negative and it feels like it's this self-reinforcing spiral.
If you come to it just asking interesting questions about the world and wanting to understand how math works, you're going to get a tutor; if you come to it with something bad... So, a lot to sort out there. I have a question.
Um, uh, one of my favorite quotes from you is from this talk you gave where you sort of lightly accuse Silicon Valley of being a group of miserly pagans. Uh, it's a great quote. I think it's a very apt analysis. Uh, and when I look at the performance scorecard on Gloo's flourishing research page, I notice that the models seem to be doing very well at finances and very poorly at faith. And I'm wondering if that's a reflection of the humans that train the model, or the data set that's going into the model, or both, or how intractable is that problem? Yeah.
You know, and I think that's great because, you know, I mean, where's a lot of the training data coming from? Yeah. Right. Associated with that, you know, and I view it as, hey, you put bad stuff in, you're going to get bad stuff out. You know, you put good stuff in, you're going to get good stuff out. Yep.
Uh, you know, for it. And, you know, I mean, we're doing some work. You know, we, uh, we, Gloo, uh, you know, we'll have our own values-aligned models, right? And part of the reason we're going to do our own foundational training, you know, for those models, is specifically: I'm not going to put the bad stuff in.
Yeah, that's good. I don't have to worry about it ever coming out, because it was never in, you know, so that we can never probabilistically produce those things.
So, you know, but you know, getting more specifically to your question, I think a lot of it does have to do with uh the uh implications, right, of those training sets in the first place. Mhm.
You know, and every, you know, you've seen lots of uh conversations uh about the uh challenges of we don't have enough training data, you know, we don't have good training data, you know, the quality of synthesized data, you know, versus uh raw data, you know, associated with it.
So I do think there's a valid problem, you know. And, you know, I'm always a bit optimistic on the, uh, individuals, uh, that their motives are good, uh, for it. You know, but we've clearly seen, in the social networks, you know, the algorithms were bad, right? You know, I mean, it leads to more deranged behavior, right? You know, it's only about dwell time on sites. You know, and I think those same motivations would, you know, naturally drive the AI models if there wasn't further accountability.
Yeah. And that's what leads us to saying we have to have better benchmarks that are rigorous and values-based, that get the answers we want. Well, yeah. Even with human-driven social media, and how it can take people down these extreme rabbit holes, they're not as worrisome as LLMs, because LLMs can be totally private. They can go to these kinds of dark places, and it's just one human interacting with the model. Whereas with social media, at least if somebody's on some crazy forum, there is a point at which they can say something that goes too far.
People can say, "Hey, I think you should, you know, go out and touch grass, right? " And uh I don't I haven't seen a model tell somebody to touch grass yet. Maybe that's built in. Maybe that'll come in the next version somewhere.
Did you always predict, you know, it feels like over the last few years Silicon Valley has a, you know, massively renewed interest in faith and religion. How much did you anticipate that? Clearly, you know, you never cared what the industry thought; you had your own faith. But is that something that you predicted?
Did it come faster or sooner than you expected?
Yeah, maybe three different dimensions, you know, there. You know, I mean, one is, there is, I'll call it, a post-COVID and a post-success renewal, right? Where, you know, this is very endemic of Silicon Valley, but you see it in lots of other, you know, communities, New York, Austin, as well. You know, people presume success, and now they're asking deeper questions, like: is it significant? So, you know, there is this national, right, you know, shift in what's important. Some of that might have been driven, and some suggest it was driven, by COVID isolation. Uh, you know, some of it is driven by the success phenomenon, you know, that we're in. You know, but it isn't just Silicon Valley. You know, there's a number of statistics, you know, those that, you know, believe in a faith view of Jesus Christ, the church, uh, engagement or spiritual engagement, you know, we're seeing very much, you know, among 20- to 35-year-olds, you know, it's going up in very meaningful ways.
You know, four data points in a row of rising, uh, engagement, or spiritual value engagement. So, it's a national phenomenon, uh, at this point. Clearly in Silicon Valley, we've seen a greater willingness to talk about faith. You know, I started an organization about 12 years ago called Transforming the Bay with Christ.
uh, one of my other hobbies, uh, and that organization now is, uh, over 800 churches strong, right? You know, when I talk to people, they didn't believe we had eight churches in the Bay Area; I have over 800, right, you know, participating in Transforming the Bay. And I'll just say, there is a renewal, a connectivity, you know, associated with that. You've seen other things, uh, recently, you know, some news and, uh, commentary about, uh, ACTS 17.
Yeah. Trae and Michelle Stephens and, you know, Peter Thiel and Garry Tan, you know, friends of mine, uh, that, uh, huh, all of a sudden, you know, I mean, we're hosting these events that are sold-out events, right? You know, where hundreds of people are showing up, you know, asking life questions.
You know, some of it's, hey, I want to come and hear Pat speak, I want to come and hear Peter Thiel speak. But some of it is they want to ask life questions of someone they respect.
So I'll just say, you know, there is this renewal uh going on, you know, and personally, I've been at it so long people often say, well, do you feel more comfortable? Well, you know, I I felt comfortably uncomfortable 30 years ago to speak about my faith in the Bay Area.
you know, 30 years later, well, I've been doing it so long, everybody expects me to do it, so it's not as uncomfortable, you know, but in many regards, you know, it's like it's who I am, right? And if you have a different faith or a different world view, you know, everybody should be comfortable talking about that.
And, you know, I'd say, from my faith perspective, it's, uh, you know, like, I spoke to a Bahá'í woman recently. I don't know anything about Bahá'í. Tell me about Bahá'í. Right?
And the more I engage you where your worldview is, the more freedom I have to talk about my Christian worldview and why I think that's good, powerful, and meaningful as well. What do you make of, uh, it? It feels like an easy prediction to make that there will be religions built around AI.
Feels like they're already emerging. Uh, if you go into the Reddit, uh, forum on ChatGPT, you have people that, you know, worship 4o. In technology circles, people kind of joke, oh, we're building AI god, or god in a box. It's going to be so powerful, it's superintelligence. It'll be above humans in some way.
And it's a little bit tongue-in-cheek with some people, but some people seem to believe it. And I think there's a long history of Silicon Valley, and even parts of the Bay Area, trying to recreate religion from first principles.
I have, you know, my, uh, family has been in, you know, Palo Alto in some way or another since the early 1900s. And I've heard stories of, like, cults emerging around Stanford in the '60s, where people were basically creating a club that looked like a church.
They would meet like a church but ultimately were just basically trying to create new religion.
Um, and I think these, yeah, these conversations are uncomfortable, but they're important, because I think it's inevitable that they're going to pop up, and, uh, it feels like it potentially can go to a very dark place. Yeah. And I'd say, you know, uh, maybe three different comments there.
One is, you know, technology is neutral. You can use it for good and bad, right?
AI is the most powerful technology, you know, that we've created yet, in no small part because it's built on the shoulders of all the other technologies, you know, connectivity, cloud, etc., right? You know, so it can grow and expand more, uh, rapidly. You know, but our job is to make it good, right? Can it get used for cults and other things? Of course, right? Just like everything. You know, what was the early driving use case of the internet, right? I think the number one websites early on were pornography.
Yeah. Not what we intended, but right at that, you know, gambling and pornography is like, oh man, you know, man, right? So, I have no doubt, no hesitation that there will be, you know, these cultish experiences that emerge, right? But again, our job is make technology a force for good. Yeah.
And where and how do we build safeguards and other into it? Second general comment is, you know, people talk about super intelligence and those, you know, and some of it is, you know, it's like, man, you know, the calculator is a super intelligence. Yeah. It does math and better than I do, right?
And, you know, my Excel spreadsheet, man, I ain't giving that up, right? You know, and I don't want my accountant to give it up. Yeah. Right. You know, and so on. And, you know, we think about LLMs and the futures, man, they're going to become indispensable. Yeah. Right.
you know, because it is like the real time contextually relevant encyclopedia of my life, right? You know, it's got a better memory, better knowledge, better connectivity. I mean, we're going to use AI to conquer language for the first time in humanity's history.
You know, I'm going to be able to teach every kid in their native language contextually relevant using the power of AI. And because of that, you know, almost all of the people that live in poverty live in the fringe languages that are not conquered. Wow.
You know, how exciting, you know, I mean, the extraordinary opportunities, you know, this is going to give us to truly improve the life of the planet. But, you know, ultimately, it's been interesting.
I I've been debating with John uh how you know how bad would it be if I suddenly didn't have access to LLMs and we've gone back and forth on this. I personally would much rather live without LLMs than a mobile device or the internet. Right?
I think some of these enabling technologies that we have, that make LLMs valuable and useful and widely available, um, feel today more foundationally important. But that is coming from a viewpoint where most of the internet is English, and most of the media that I want to watch is English. And so the point you made about, um, you know, being from somewhere in the world where English isn't your native language, being able to educate yourself, you know, and watch any content, you know, things like that are pretty exciting.
Um, yeah. Where do you think... Continuing, and going back to, you know, the core of your question, though: will there be AI gods, right? God designed us as humans to be in relationship, and any technology that's driving us apart, you know, three teenagers sitting on the couch texting each other, right? How terrible.
Talk to each other, right? God designed us to be relational, right?
uh, individuals. And, you know, a foundation of, uh, religion, spirituality, you know, isn't just the self; it's the community, right, that you're part of. So, you know, I fundamentally, uh, you know, things that take us apart, you know, I think, okay, we have to build safeguards to have them be tools bringing us together. You know, I do not want my AI doing suicide prevention counseling; I want a real human doing those things. You know, I want a real human interacting with kids and teaching and learning. You know, I want AI agents that are helping the teacher do better education, right?
You know, because, you know, of the modalities, the training, and so on. But ultimately these are very, you know, powerful, right, human experiences that we need to keep as human experiences. How have you been processing the, uh, dialogue around the antichrist? You mentioned ACTS 17 and Peter Thiel.
It's something that's really uh sparked a huge discourse in Silicon Valley that I think a lot of people weren't even thinking about. Uh Christianity is somewhat unique in the the idea of the antichrist. But how have you processed it through your life and then how are you thinking about that concept today? Yeah.
You know, I do think there's, uh, you know, when I was in my, you know, whatever, 20s or 30s, there was a series called Left Behind.
Mm, yeah, right. It was like, oh, you know. And if you go back and read that, you know, and, uh, The Late Great Planet Earth, you know, there's sort of been this episodic flow of end-times discussions, and discussions on the antichrist and what they would be. So, you know, this topic has, uh, you know, only persisted for about 2,000 years. You know, it just sort of goes away... It's Lindy. It's Lindy. Yes, it's been around forever. Yeah. Yeah.
So, you know, you know, hey, having another cycle of that, great because it causes people to ask deep questions. Yeah. Right. You know, to me that is a good side effect to it. Now, from a biblical scholarship perspective, you know, Antichrist is mentioned one time in the Bible and not contextually in Revelation, right?
It's like, okay, this is a thin discussion, right? If you look at it theologically, you know, that way, you know, most of the Revelation and, you know, uh, apocalyptic, uh, conversations in the Bible are actually much simpler, right?
You know, people of those days were looking at, they were seeing the antichrist, and his name was Caesar, right? You know, in that sense. And, you know, the Bible was written to be, you know, I'll say, eternal but current, yeah, at the same time.
So I think a lot of this, it's this episodic flow of excitement, uh, on the topic. But I think a theological, uh, you know, study of it would say: calm down a little bit, right? You know, we are living in the end days, meaning that, you know, right, you know, there isn't another expression of Jesus Christ coming until the end of time, you know, seen through, uh, scripture. You know, because of that, you know, hey, if this causes people to be more interested in religion, yes. But at the end of the day, let's make sure we have good theology behind the conversation as well.
What about, uh, millenarian thinking generally, apocalyptic thinking generally?
It feels like, uh, maybe I'm out of touch with what the vibe was in the semiconductor industry when you were active in it, but it feels like, uh, the semiconductor companies have not historically said, if we don't build this, the world will end.
But we've seen a rise of entrepreneurs who have been uh talking about apocalyptic consequences to either building or not building what they're working on. And it feels like it's more in the zeitgeist. It's kind of the final conversation you can have about the impact of a technology.
And so far, I mean, it's not just the AI folks. Even, uh, Elon Musk talks about the consequences of not becoming multiplanetary being apocalyptic, potentially. And so we have to explore the stars, which I completely agree with.
I love space exploration, and, uh, so I'm wondering about that as a rhetorical tool in entrepreneurship. Is that something that you've noticed is on the uptick, or has it always been this way?
Well, again, I think there's an episodic nature to it, you know. And let's, you know, go read some of the press from when Sputnik, oh yeah, flew through the sky. Yeah. The Russians hear everything, right? Yeah. They can read my mind. I just saw it in the sky last night, right?
You know, you know, so my father, you know, talked to me about when he saw Sputnik in the night sky, right?
you know, so, right, you know, and so I do think there's somewhat an episodic nature, and when you hit these huge inflection points, right, like AI, that is just exploding in capacity and capability, I think, okay, you know, we're in the next episodic discussion, right?
uh, as a result. You know, but I think if you look over history, we've seen that, you know, go through, uh, over time. You know, in the first days of the internet, uh, you know, my cell phone is going to, you know, right, turn my brain to jello, and, you know, all of these other kinds of things. You know, I just don't get too ex... oh, we might have a technical issue. Let's give it just a second. Hopefully we can get Pat... The deep state took down the... It was getting too real.
That was a fantastic interview. We were enjoying that. Hopefully, we can bring back Pat Gelsinger. I didn't get to ask him about AI capex. Hopefully, we'll work on getting him back. In the meantime, let me tell you about Turbopuffer.
Search every byte: serverless vector and full-text search built from first principles on object storage. Fast, 10x cheaper, and extremely scalable. Uh, head over to Turbopuffer. And we have Pat Gelsinger back. Sorry about that. AGI is here, but the internet still doesn't work sometimes.
You know, I always get a kick out of this, when either the cell network, uh, here, let me expose this, yeah, yeah, we are, or the internet, you know, uh, drops in Silicon Valley. It's like, man, what's going on? What's going on? This is a place we should get this right.
I don't know exactly where we got cut off, but I wanted to ask you about, uh, there's been so many insane headlines this year around AI, AI capex, AI talent wars. You know, rewinding in your career to the 90s, early 2000s, what parallels are you personally drawing?
Yeah, you know, I do think, uh, let me give two different answers to that. First, because, you know, I mean, right, you know, people are talking about these one, five, 10 gigawatt data centers, right? Every time somebody says gigawatt, the image I want in your mind is a nuclear reactor.
Yeah, because a new nuclear reactor is about a gigawatt of power capacity. Yeah. How many nuclear reactors has the US built in the last 25 years? I think it's one or two, at Vogtle, right? And, yeah, remember China, I think China has 60, right, you know, under construction today, right?
You know, and fundamentally, energy capacity, you know, we sort of lost a decade, right, you know, as we were so focused on, right, uh, you know, climate, that we weren't building our energy capacity. And renewables, hey, I'm the biggest believer, but for the most part, they're not economic, right?
And there's a lot of work going into making them economic. You know, some of the Playground companies are working on that. But for the most part, they weren't economic, and they didn't add that much to the grid, right? You know, so there's this craziness going on, right?
you know, of that capex expectation and so on; you don't have the energy to do it.
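A quick back-of-envelope sketch of the reactor comparison above. The one-gigawatt-per-reactor figure is the round number used in the conversation (a new unit like Vogtle 3 or 4 is roughly that scale), so treat this as illustration, not an engineering estimate:

```python
# Back-of-envelope: reactor-equivalents of power implied by an AI
# data center buildout, assuming ~1 GW of capacity per new reactor.
GW_PER_REACTOR = 1.0  # round figure used in the conversation

def reactors_needed(datacenter_gw: float) -> float:
    """Continuous-power reactor-equivalents for a given buildout size."""
    return datacenter_gw / GW_PER_REACTOR

# The "one, five, 10 gigawatt" data centers mentioned above:
for gw in (1, 5, 10):
    print(f"{gw} GW data center ~ {reactors_needed(gw):.0f} reactor(s)")
```

On these assumptions, a single 10 GW campus implies roughly as much new continuous power as the entire US new-build reactor fleet of the last 25 years, several times over.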
Now, you know, obviously, we're working on dramatic improvements to make, uh, inferencing 10 or 100 times more power-effective, right? Because ultimately, you know, I think the demand for inferencing capacity is, you know, infinite, right? Per Jevons' law, the gas law, you know, it really should be permeating every aspect of, uh, what we do, uh, going forward. You know, but some of the crazy predictions associated with this, you know: it's not going to be chip-limited, it's not going to be land-limited, it's not going to be capital-limited. It'll be energy-limited, uh, going forward.
So, and I do think it's inducing a lot of investment into the energy systems, which I view as very positive because our long-term economic growth as a nation will be energy limited. So, even if we're quote overbuilding capacity, I view it as a super good thing. Yep. Right.
Because hey, whether I need that for my electrification, whether I need it, you know, for my quantum computers, whether I need it, you know, for just make my power bill cheaper. If you have more energy to go around, the price is going to fall for average Americans. That's a win. Yeah.
You know, and I mean, the oil crisis, yeah, great, you know, could become the electrification crisis in the future, right, if I don't... So, I just view that as such a good thing. Yeah. Because we have so mismanaged our nation's energy capacity, uh, going forward.
Now, you know, some of these forecasts are just nutty. So, like, that 100 nuclear power plants next year? Who knows. Yeah, I mean, we did have an IPO today, Fermi, that is planning to make a nuclear reactor by 2038 or something. Like, they raised money to try to bring it online.
So, I was poking a little bit of fun at that, because the company's, uh, nine months old. They're valued at $17 billion. But maybe it will be net good if we can bring, you know, a new nuclear plant or two online. Yeah.
And, you know, by the way, I just became the, uh, general partner lead on an investment that we're doing in the nuclear space at Playground. So, I'm learning a lot about this. Great. Right.
And a lot of, you know, the small nuclear reactors, they're not economic, right? You can't connect to the grid, you can't scale them, you know, the regulatory things. So, you know, I do think it's a very fundamental problem, you know, for the nation, and one, hey, I'm now diving into that problem as well. So, you know, we need to bring on many gigawatts of additional capacity for, you know, the future of our nation. Now, you know, some of the other, you know, projections there, you know, that people, uh, you know, point to, you know, I think they're way over their skis, right?
You know, not everybody is going to add 10 gigawatts of additional capacity. And I am excited that, uh, you know, like, one of our companies, Snowcap, you know, we're out to make, uh, AI 100 times more power-efficient than it is today. Not 10%, not 10x: 100x. It's great, right? You know, as we move to superconducting.
So instead of that gigawatt, you know, I'm going to have 100 megawatts of cryogenic capacity that produces 10 times more inferencing. Yeah. Okay.
That's industry reshaping, you know, and those are the kind of things that are truly going to, you know, I'll say do a better job of the energy capacity we have and right, you know, creating more ultimate capacity for the future.
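The "100 times more power efficient" claim can be sanity-checked with the two figures given above: a 100 MW cryogenic facility delivering 10x the inference of a conventional ~1 GW site. These numbers are from the conversation, used here only to show the arithmetic:

```python
# Sanity check of the claimed efficiency gain: 10x the inference
# throughput at one-tenth the power is a 100x improvement per watt.
conventional_mw = 1000.0    # ~1 GW conventional AI data center
cryogenic_mw = 100.0        # 100 MW superconducting facility
throughput_multiple = 10.0  # 10x the inferencing of the conventional site

efficiency_gain = (conventional_mw / cryogenic_mw) * throughput_multiple
print(f"Inference per watt improves by {efficiency_gain:.0f}x")
```

That is, the 100x figure is the product of a 10x power reduction and a 10x throughput gain, not a single breakthrough on one axis.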
And what about, you know, some of the vendor financing going on, some of the demand guarantees that people have been making? Is that immediately worrisome to you, because of some of the negative events that came after things like that in the dot-com era? Or do you think it's, yeah, we've kind of heard both sides, part of the cycle?
Yeah, we've seen this before. You know, look at all the optical companies, you know, in the early days of the internet, right? Well, you know, everything is up and to the right, you know, for, uh, you know, exponential growth for unlimited periods of time, you know. Yeah. Okay.
You know, I mean, it was a stunning 10 years, but it didn't extrapolate to 30 years. Yeah. Right. And I see most of the AI, I think, you know, I see, you know, fundamentally no change for the next two, three, four years. But by the end of the decade, some of these transformational technologies will be at scale.
So I do see a real shift right uh in them before the end of the decade particularly on uh inferencing cost inferencing performance and power consumption for inferencing which is sort of like the big one uh for broad deployment of AI.
How do you think about... I mean, there are a lot of founders in our audience and, you know, it's raining right now and they're happy and everything's good. But you've been through so many market cycles.
Is there a story or advice that you give to entrepreneurs or founders who are going through a reset, going through a bear market? How do you think about getting through those hard times?
Well, they're going to come, right? And as a CEO, your job is: the high is never as high and the low is never as low, right? Your job is to sort of cut the tops off of the crazy highs and manage through the crazy lows, right? That's your job.
And if you're not ready to do that, if you're the super-enthusiastic champion for whom everything's up and to the right for the next two, three decades, you shouldn't be in the CEO's job, right?
That's called a cheerleader, not a CEO, and we have way too many companies that are led by cheerleaders as opposed to CEOs, right? And we can go back over history and identify a number of those. Yeah. Also, like I say, I think the next couple of years are going to be just fine. Yeah.
But then, as we get to the end of the decade, okay, things are not going to be just fine. There are going to be fundamental shifts in the economics and the capacity. And a CEO's job is to get the balance sheet in place so they can navigate through those times, right?
You get cash when you don't need it, because when you need it, you can't get it, right? So you have to be managing thoughtfully about when those disruptions occur.
So I do think, particularly a couple of years out from now, okay, that's your job as the CEO, as the leader, as the entrepreneur: do I have enough capacity to take a meaningfully negative time, right, and be able to navigate through it?
What about team building and leadership? How should you be communicating with potentially thousands of employees who are looking to the CEO for leadership, not just cheerleading?
Well, the first job of the leader is to represent reality, right? Where it is. That is the first job: here's where we are. Life is great, but it's not as good as that. Life is bad, and here's the situation, and it's not as bad as that. The first job of the leader is being that point of truth, that point of reality. As I say, the leader's job is to communicate the vision over and over and over again, to the point that you are absolutely sick and tired of it, right?
You think there can't be a single person that hasn't heard you communicate the vision, and just about the time that you are absolutely, dreadfully bored by it is the first time it's really getting through. Right? It's just that hard.
And about the time your organization knows it, usually one to two years in, it takes another year or two for the customers to understand it. Yeah. Right. And if you change it in less than three or four years, nobody understands it. Yeah.
So that constancy of vision, purpose, mission is a super important thing for the leader. And preparing for the bad days, like we already touched on.
There will be bad days that you have to navigate through. And then when you have success, you give it all to the organization, and when you have failure, you claim it all, in a transparent, forthright way, right? That's how you build loyalty and commitment from your teams. And being a CEO is not cut out for everybody. Period, stop. If you're not ready to take accountability...
There are a lot of legacy enterprise software companies that need to react and respond and evolve as LLMs and generative AI come online. And he said something I wanted to get your thoughts on, and maybe get some examples: he believes it's easier to change your technology stack than your business model.
And he was saying, in the context of: it's great if you're a software company that started out charging based on value, but if you have seat-based pricing today and your competitors are now selling, you know, not trying to get you to scale headcount, but just trying to deliver more and more value.
That can be tough. I wanted to ask if there were any notable moments in your career where you witnessed a company effectively evolve their business model, and/or on the technology side, and how those two things play together in your view? Yeah. Yeah.
And first, you know, Brett is just a world-class guy. Yeah. So, if you have the chance to bring Brett on or Pat on, take Brett. Well, we've both... Why not both? But Brett is great.
We took VMware through the transition from enterprise licensing to a SaaS business model. It is butt-ugly hard, right? A business model is like a deep rut, right?
One of my favorite posters on this showed a deeply rutted Indian road, and the caption read: pick your rut carefully, you'll be in it for 100 kilometers, right?
And there's sort of this truth to a business model, because your competitors get used to it, your customers get used to it, your supply chain gets used to it; you just get funneled into it. And when you try to change your business model, man, it is dreadful.
The contractual, the legal, the renegotiations, etc. Team compensation. Yeah. Everything is hard about it, because you build every aspect of the business around the characteristics of the business model you've chosen.
So Brett is right: changing the technology is quite a bit easier than changing the business model. Now, the master stroke is if you can keep that interface to the customer and the business model the same, and do a major retooling of the technology underneath it.
Okay, now you win, right? And by the way, the first major success in the enterprise-to-SaaS conversion was my friends at Adobe; they navigated through it.
I mean, that was their master stroke, and many others followed in their footsteps after it, but that transition was hard. Now, as we go to AI, most are going to be in a SaaS-like business model going forward.
That said, the user experience in an AI-native application is dramatically different. If we think about it in very broad terms: I have been adapting to the computer for the last 50 years; in an AI world, the computer adapts to me. Think about the fundamental shift of that, where now I am speaking to it on my terms.
It's hearing me in my way, understanding my phrasing, my expressions, my thoughts, right? Contextualizing them in my language and understanding.
So I do think the idea of just flipping to a SaaS AI business model, versus an AI-native experience that truly harnesses the full breadth of that power...
I think many people, as they talk about the AI conversion of their applications, wholly underestimate what will be involved in truly creating an AI-native experience for the future. Everything changes. Yeah.
Not just the business model, going maybe from seat-based and fixed contracts to value-based or selling the work, but even the entire user interface. What does the application look like? What does it feel like? Does it feel like hiring and managing a team? Does it feel like hitting buttons yourself?
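The pricing tension the hosts describe can be made concrete with a toy revenue model. All numbers and function names here are hypothetical, purely to illustrate the mechanics: when AI lets a customer do the same work with fewer people, seat-based revenue shrinks even as the value delivered grows.

```python
# Toy model (hypothetical numbers) of why AI pressures seat-based pricing.

def seat_based_revenue(seats: int, price_per_seat: float) -> float:
    """Revenue when you charge per user seat."""
    return seats * price_per_seat

def value_based_revenue(tasks_completed: int, price_per_task: float) -> float:
    """Revenue when you charge for the work delivered."""
    return tasks_completed * price_per_task

# Before AI: 100 seats, each user completing 50 tasks per month.
before_seats, before_tasks = 100, 100 * 50
# After AI: half the seats, but each user completes 4x the tasks.
after_seats, after_tasks = 50, 50 * 200

print(seat_based_revenue(before_seats, 300.0))  # 30000.0
print(seat_based_revenue(after_seats, 300.0))   # 15000.0 -- revenue halves
print(value_based_revenue(before_tasks, 5.0))   # 25000.0
print(value_based_revenue(after_tasks, 5.0))    # 50000.0 -- revenue doubles
```

Under seat pricing the vendor is punished for making each user more productive; under value pricing the same productivity gain grows revenue, which is the asymmetry behind the "sell the work" argument.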
Yeah. So, doing both of those things at the same time. Okay. Actual last question: is it true, or do you still have a VMware tattoo on your arm? So, no, I don't. Okay. But you did at one point. I did, right? But it was like one of those 30-day tattoos.
Okay. Okay. That's great. So I came home from that trip, and somebody sends the video to my wife, right? The tattoo artist and everything like that. And she says, "You may not have that on your arm by the time I go on vacation with you." That's amazing. Yeah.
People think tech people are hardcore today when they launch a video or put up a billboard, but living with your brand for 30 days is a new level, and we thank you for your service to the technology industry, Pat. Thank you so much for being on the show.
This was fantastic. Thank you for the work that you're doing. It's very important. I'm very excited. We'd love to have you back to dive into all the different portfolio companies, and also what's happening on the Glue side in terms of actual productization.
I know we spent a whole bunch of time in the AI world talking about the other labs, but there's so much more we could go into. We'll give you back the rest of your day. Have a great day. Thank you. This was a lot of fun. Thank you so much. Thanks so much. Talk to you soon. Cheers. Bye.
Let's pull up this video of the Meta Ray-Ban Display. Be right back. I will react to this. The Meta Ray-Ban Display has a new video that we are going to pull up. In the meantime, let me tell you about Profound. You've heard that you can now buy products on ChatGPT.
You're going to need to get your brand mentioned in ChatGPT. You can