Tyler Cowen: AI adoption is being wildly underestimated, and America's AI lead is a form of soft power over China

Apr 14, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Tyler Cowen

We'll do audio for now and maybe... Okay. I haven't used the video lately. It may be on your side. Maybe. Well, we'll troubleshoot it. Let's just talk like it's a phone call. Okay, great. I wanted to start with artificial intelligence.

There's a new release from OpenAI today, but it's mostly about programming and a new code model. But can you give us an update on how you're using any AI tools, really, and what your experience has been? Well, I use AI tools for everything. And that's almost an understatement. Yeah.

So I just took a trip to an event in southern Utah, and I used strong AI to plan the whole thing: the whole route, where I would visit, where I would stop, where we would eat. It worked wonderfully. There were no hallucinations.

But more typically, if I'm reading a book, uh most of the books I read are history books. So, I want background on things that I don't know the full story for. And uh then what I'll do is just keep on asking the AI.

So instead of reading say seven books on a topic, I'll read maybe two and spend the rest of the time uh asking the AIs. So that's a very different approach. Yeah. But you get very good customized answers and you can keep on asking. Uh it's right in front of you. You don't have to wait for the book to come.

At the margin, it's cheaper than ordering more books. So it's changed my whole life. Do you think the future is not ordering the book at all and just talking to the AI?

Because a lot of these books are so baked into the LLMs, you could just hear about a book and then say, "Hey, tell me about the first chapter," and it'll kind of spit out the basics. Or using memory functionality, you can say, "Tell me the things that you think I'll actually care about from this book."

Oh, yeah. Yeah. How do you... Well, you're saying the future. I would say it's the present. Yeah, I guess you're right. Now, you might want one or two books to get you started on your questions.

I'm not even sure you will need that at some point. My status quo practice isn't to have no books, but to have radically fewer books. What about the post-education piece of reading these books?

Are you doing any interactions with AI to kind of take notes and memorize things, or collect your takeaways while you're reading and learning new topics? No, I've never really taken notes on things, with or without AI. So I don't think I'll start now.

Now, as you mentioned before, there is now AI memory. Yeah, I've only had that a very small number of days. Uh I'm not sure what I'll use it for. I'm not sure it will matter for me, but uh we'll see. It might send things my way I wouldn't have known of otherwise. So, I do think about what queries I feed into the AI.

I'm not worried about privacy, but I just want to seem smart to it, so it thinks I'm smart and treats me accordingly. Yeah. What about deep research? Has that worked its way into your workflow, or has kind of the 5-to-10-minute delay been a barrier to adoption? The delay is no problem for me.

I've been using o1 pro more than deep research, because often I just want short answers. Yeah. I've used deep research quite a bit for my class. So if I want them to read a paper on something, well, I can look on JSTOR, or I can just have deep research create the paper. Mhm. And it does that very well.

I think it's still a bit wordy, but it's very impressive and really quite accurate. So it is double- or triple-checking things. It hallucinates less, say, than Google or Wikipedia might. So I've been using that. It's gone quite well. Can you talk a little bit about the workflow with teaching?

You mentioned last time you talked something about kind of a flip to the way you teach, where you are judging the students on how much they teach you. Can you reiterate and expand on that idea? Well, one thing I do in this class, a PhD-level class in history of economic thought:

I tell my students they need to write their papers using advanced AI, and I grade them just on how good the paper is. They then should explain to me how they used it, and I will indeed learn things from that. But we're already in a world where so many people are using this that how well you can write a paper without AI is not any kind of predictor or useful indicator of how well you're going to do. So we just need to start teaching them that skill. So I teach them what I know. I encourage them to provide tips and advice to each other. You know, we're all new at this, right? It's a new thing.

Uh, and we're all learning as we go along. So far, it's been great. How are you thinking about um AI and the future of work, job displacement, um, all of that? Well, you say future, but I would stress the present. Yeah.

I think a lot of companies should hold off on hiring people with particular skills, because those people will not be as good as the AI is likely to be, you know, a week or two from now. That would be the blunt way to put it.

Are there different sets of skills that companies should be hiring for? Like a certain, I don't know if it's, like, neuroplasticity?

We like to joke about a concept of, like, golden retriever mode: being friendly, and focusing more on friendliness than intelligence, because intelligence is becoming too cheap to meter.

But being a great co-worker, and being a great interface between different parts of the organization and, ultimately, the AI models who provide the intelligence, is potentially growing in importance. What's your read on that? I'd use the word charisma for your friendliness. Yeah.

They also have to inspire the other people, and simply being friendly doesn't do that. You might need a sharper edge. So a lot of people in VC, I wouldn't say they're unfriendly, but friendly isn't exactly the way you would describe them. They're inspiring first and foremost.

I think also having good taste is very, very important. So someone still has to interpret the products of the AI, or, you know, decide what's a good answer or which model to ask a particular question, and that's a question of taste. So taste and aesthetics have become more important.

What about... I mean, is it useful to look at the initial rollout of the internet, where it felt like wisdom or knowledge became too cheap to meter, and maybe it became less important to have someone on your team who was just an encyclopedia of facts, because it became so quick to look up everything?

We've been through this before, right? Growing up, yeah, you know, being born in the '90s and then having access to the internet almost as soon as I was conscious. Yeah.

I was intuitively aware that memorizing facts didn't feel like such a valuable skill set in the classroom, because you would go home to do your homework and you'd obviously be sitting in front of a computer with access to that information.

Yet, it was still baked into the curriculum pretty intensely.

But you're basically saying, what does a curriculum even look like if intelligence is... Like, well, you don't need to focus on wisdom, and maybe you don't need to focus on intelligence, and maybe it's just purely charisma from now on? I don't know, Tyler.

Well, I'm not sure of all the net effects here. But keep in mind, the people who know a lot and understand it (that's an important qualifier), they're now, you know, a thousand times more productive than before, because they're managing these armies of AIs. So you may not need to hire more of them, but the ones you hire who are good will be much more important, by a lot, not just by a little.

So I don't think it's simply substituting away from intelligence. You want beings who can manage other intelligent entities, humans or AIs, and a lot of those people will be very smart. Yeah. Do you think the "bicycle for the mind" metaphor is still apt?

I mean, bicycles are great, but it's better if you're Lance Armstrong, right? Sure. I'm not sure I know the metaphor, though. This was Steve Jobs: he said a computer is a bicycle for the mind.

In the sense that a human on a bicycle is the most energy-efficient device per mile traveled, even more energy-efficient than a cheetah.

And so a human by themselves is underperforming relative to a cheetah, but you give the human a bicycle and it's the most energy-efficient mode of transportation, I think. Oh, sure. Yes, that makes sense to me.

So, I think some mix of knowing facts but understanding them, having great taste, and having the initiative to manage an army of AIs and the willingness to do the juggling involved. It's a very complicated set of traits. Mhm. But my sense is the people who have those will do just very, very well.

Can you talk about... you know, you wrote recently about American soft power and AI. If you had to summarize your takeaway from the article, what would you share with our audience? Well, it makes me more optimistic about America that we're the AI leader.

You know, the Chinese models, DeepSeek and Manus, as you probably know, they're based on American AI.

So, as the Chinese government uses AI more and more, it will be more dependent on Western modes of thought. And they can censor the AI on Taiwan, on Tiananmen Square, but they can't really change how it thinks without making it much stupider. So we're taking them over, is one way to put it.

Not in the sense of conquering them, but the Francis Fukuyama vision, I think, will be realized through AIs. Can you expand on that? Well, the smartest entities in China, you know, already, but more and more as the future arrives, will be AIs, and those are American, again, even if it's DeepSeek or Manus.

So their smartest entities, all of a sudden, are American. How would we feel, you know, if all of our smartest entities were Chinese? We'd be like, whoa. Well, that's the position they're in now. That's a great take. I like that. Um, you have something, Jordy?

How do you think of tariffs and the trade war in the context of the AI revolution? We were joking, not joking, it's a serious topic, but as these tariffs, you know, were rolling out, I was saying this almost feels, in some way, like picking up pennies in front of a steamroller.

And you've said before that you don't believe in this almost instant, massive GDP growth. But it still feels like AI has the potential to transform our, you know, economy by 10,000%, and tariffs can have a very significant impact on a bunch of different factors, but maybe they're not even as impactful, and maybe the wrong thing to be arguing about as a country.

But I'm curious to get your take on it. The AI race is much more important, but to win that, you want free trade in the inputs for AI, which is quite a few different things. Now, you might want export controls on China, which we have to some extent.

I'm not sure they're effective, but I don't see any downside to trying them. But in the meantime, you want to just take in everything, as much as you can, as cheaply as you can, as quickly as possible. So I would say it's an extra reason to be skeptical about the tariffs.

I mean, you said there's maybe no downside to trying the export controls.

Are you familiar with Ben Thompson's new argument that maybe the downside is that it makes an invasion of Taiwan more likely, and in fact keeping the Chinese dependent on Taiwan increases global stability, if you take away the trade restrictions?

I don't know that I'm fully in support of that, but that's the argument that he's been making. I've discussed that with Ben. I don't think it's impossible that he's right. It's just very hard to predict that kind of thing. I'm not sure the expected value calculation falls his way.

Again, I would gladly admit export controls may not work out well. It just seems odd not to try the first-order policy to slow China down. Mhm.

I suspect, whether or not they invade Taiwan on whatever date, they'll just develop quality, you know, chips and lithography themselves, and it probably won't matter that much. But trying to forecast their Taiwan decisions, it's just very hard to have a good theory of that, one way or the other. Yeah.

How excited are you about American semiconductor production? You know, the TSMC facility in Arizona seems to be having good results. Nvidia is talking about, you know, partnering with Foxconn to produce $500 billion of their new Blackwell chips.

Is all of that, you know... how real is that in your mind? Is that something that you think the United States can lean on when it comes to the broader AI race? I read and hear a lot of propaganda on that saying it's going very well.

I don't feel I have trustworthy sources of my own. As an economist, my view tends to be that supply is elastic, and if you pay for something, you'll get it. So I suppose I'm inclined to believe the propaganda. But I'm still not sure yet. Makes sense.

What has been your reaction to Ezra Klein's new abundance agenda? Well, it would be much better for the Democrats and the Democratic Party, and indeed all of us, if the party became about that, and that's Ezra's main goal. So, in that sense, I'm fully on board. Yeah.

But that said, I think one has to go a lot further. And I had a podcast with Ezra on my own podcast. And I'm like, well, Ezra, are you willing to fire a lot of these people? You know, they're in the way. There's kludgeocracy, plus AGI is coming. And he and Jennifer Pahlka, they both seemed very reticent to me.

When you push them on it: okay, I agree, but let's go a bit further here. Like, you can't just be right 10%. If you're right, and I think you are right, you're right 75% or more. So let's take this as a good step one and see it through consistently.

Yeah, it seems like one of those platforms that's almost directly out of The West Wing, where it's inspirational. It can win, but will it actually work? And those are two different questions. And I think this might be something that is a message that can win but maybe not change things. That's my fear.

But I don't know if you feel the same. Well, I don't know if it can win. So, I suppose my view of the Democratic Party is that it will splinter the way the Republican Party has splintered.

So, the Republicans right now to some extent they're unified around the figure of Donald Trump, but intellectually they're all over the map. That may have upsides, downsides, but I think it's clearly true. And I suspect the same will happen to the Democrats. So, the abundance thing will be one faction of 17.

It'll be the one I'm rooting for. It'll win some partial victories. That'll be nice. But I don't think it will ever be in charge. I don't think any of them really will be in charge. Yeah. Do you think it's more driven by the personality of the particular candidate?

Because with the Trump election, you would see a diehard libertarian voting for Trump, and, you know, in the voting booth next to them is someone who's highly protectionist, in favor of tariffs. And they both kind of wound up rallying around the same person for very different ideological reasons.

My guess is that will happen to the Democrats also. I mean, Biden was like the anti-personality candidate. He literally had no personality. You couldn't go meet with him, you couldn't see him on TV, there wouldn't be a press conference. I mean, he didn't exist, in fact, in a way.

And I think they'll rebel against that and way overshoot. Yeah. Jordy, do you think we need more weird ideas around AI? You shared something earlier out of Google.

They're using, I think it's their DeepMind team, Google AI to help decode dolphin communication. Which just feels like... right now there are a lot of attempts, which I think are important, at unlocking the power of AI in private equity, unlocking the power of AI for lawyers. Yet there's this whole other spectrum of ways that you can apply this technology, and personally, I would like to see more. Maybe there aren't immediate commercial opportunities, but at the same time, you know, I can imagine US consumers would probably pay $100 a month to be able to communicate with their dog. So maybe there are exciting commercial opportunities in sort of human-to-animal communication. But how do you think about, you know, needing newer, more creative ideas as this technology gets broadly adopted?

Strong agree. I want to be the first human to do a podcast with a dolphin. I feel I already communicate with my dog.

He has little more to say than what I get already, which is: I want to eat, I need to, you know, go in the backyard, and so on. But yes, I think the non-elites will come up with a lot of these ideas, and they'll be hugely successful. Why non-elites specifically?

I mean, that example was from Google, which feels like the most elite people in the world, or the most elite team in AI, at least. I don't know who there had those ideas. Sure. Talking to the animals is a Dr. Dolittle thing, right? Is he an elite? I don't know. Yeah.

But I think elites feel threatened, and indeed will be threatened, by AI, because it's smarter than they are. And a lot of non-elites will just be like, how can I make this useful for me? Yeah. Because they're not expecting to be, you know, the so-called smartest person in the room. Yeah. That's why.

Speaking of that kind of weird, useful AI, how did you process the Studio Ghibli moment a few weeks ago? Well, no elite would have predicted it, but obviously people loved it, and just something about it worked.

I hope it didn't, you know, fry the servers on the cloud computing, but, you know, the stuff is still up and running. And there was something online, I forget the numbers, but some radical upsurge in use of ChatGPT in the last few weeks. Sam gave the numbers.

Just stunning. And we're going to have a bunch more of these moments in the next year or two. Do you think there will be a kind of cohesive renegotiation around intellectual property that comes out of the AI era? I hope not.

But, you know, media is in big trouble, because you can read a smarter, clearer version of the story on the AI, tailored for you. There is a free-rider problem here. Mhm. I don't know what kind of arrangement we'll come up with. I don't want government-subsidized or government-controlled media.

Maybe it'll be some weird barbell equilibrium, where there's, like, the New York Times, and then there are bloggers and tweeters, and not that much in between. I don't know. I do think it's a real problem. But to make the AI companies pay for everything the model reads, that strikes me as a bad idea.

And I would rather win the AI race with China than do that. Yeah.

I mean, Nat Friedman was just uh kind of running the numbers and saying that uh OpenAI has more than enough money to hire every single journalist in America on a full-time salary just to create content that eventually goes into the training and into the models.

Which is a very, very odd outcome, but financially it could work. I don't know if that's actually how it'll play out, though. Yeah, Google has a lot of money. I'd rather see corporations do it. Sure. Bloomberg, where I used to work, both has money and does hire a lot of journalists, and that's very high quality. Yeah.

So, there are a number of models. I just prefer to keep the government out of it, and not to slow down the AI companies either. Do you think that AI is broadly priced in yet? It feels like Nvidia, in many ways, is priced to perfection.

But it feels like many other industries, and companies, haven't had the corrections you might think they would have if they were on the verge of massive disruption. People are asleep. While the disruption may come slowly, it will come.

And I think there are a lot of places, companies with mid-tier-quality software, that will end up devastated, you know, within five years. Not as quickly as some of the crazy people think, but, you know, within a time horizon relevant for share prices. Yeah.

Like, I look at people today... the hot thing has been buying accounting firms, like buying a firm that does accounting for small businesses, and you're buying that firm for, call it, eight times earnings. And it's like, do you believe that in five years... Like, the idea is that small businesses will be quick to adopt technology that saves them a meaningful amount of money.

So the idea is, like, can you keep prices high? Yeah. Grow earnings, and introduce technology that basically undercuts, you know, the existing service offering that you just paid up to get? I don't know if that's a good strategy. That's a great example.

You know, if you could short nonprofits, I'd short most of them too. Savage. Well, speaking of some of the crazy people, did you have a chance to read, or read about, AI 2027? And what was your reaction? Well, I always ask those people, are you short the market? And I never get a straight answer.

Those are private conversations, but there's one prominent doomster. His response to me was, well, I don't know how to short the market. I just giggled. I'm like, ask the AI. If you don't think the AI can arrange that for you, it's probably not very threatening. Yeah. So, look, any new technology has a lot of dangers.

You shouldn't rule out any scenario. But what I tell those people is, do what the climate change people did. Take this to peer review, refereed journals, make a serious case. Don't just write, you know, the blog post with 17 different vertically arranged separate points. Yeah. And see where you get with it.

The other kind of response I have, it's jokey but also serious: I say I would rather be an American paperclip than a Chinese paperclip. So it's coming anyway. You know, you want to have it on your terms. There is no pause option. We've got to try to win this thing. Yeah. How do you think about AI adoption?

And I know it sounds like a simple question, but I feel like this is an interesting technology where every single person that I know... and I am in a bubble, right? I live on the coast. I work in, you know, tech, broadly. And everybody I know, and even... I was talking to the guy who details my car.

I was telling him that he should expand into a sort of adjacent category, because I wanted it myself. And he told me that he spent, you know, the entire ride home last week just talking with ChatGPT, right? Like using it as a sort of personal tutor on a business opportunity.

So, we're in this interesting scenario where AI, in certain segments, seems to have been, like, you know, fully adopted, like it's being used, and now it's just more about the sort of capability unlock, right, in all these different use cases.

But I'm curious how you think about, sort of, okay, everybody's using AI now... But they're not. I mean, it's totally brutal. You've got to get out of your bubble. Like, a lot of people use ChatGPT for something trivial. Yeah. They don't take it seriously.

It's like another app. Like, they used it to name the dog's puppies or something, and they have no idea it's going to change their lives. That's the default mode of elites. Yeah, that makes sense. And I was at a prestigious New York event just two or three months ago, with five people, all of whom are well known.

And I used the three-letter phrase AGI. Not one of them knew what I meant. I don't mean they didn't understand it in a deep sense; maybe none of us do. They didn't even know what I was referring to. That's where we're at. Wow. Yeah, that's shocking. How do you think about economic growth?

It feels unimaginable that AI would not be the thing to disrupt stagnation and get us at least to three, four, five percent real GDP growth annually. And yet it doesn't feel like we've seen it yet.

Even in energy production. I think China's adding 20% energy production a year, and it's still essentially stagnant in the United States. Are you expecting things to change, or will it just be a reshuffling of the deck, even though things will be disrupted?

There's not really a net new experience of our economy or our energy infrastructure. I think it will improve slowly. But if you look at our economy, and I'm going to sound like Peter here, just add up what percent of it is totally non-functional, non-adaptive, bureaucratic: much of health care, most of government, higher ed, K through 12, the nonprofit sector. Like, that's more than half the economy, right? Yeah. And at what speed will that respond? The stuff that responds quickly, which is great, of course, it becomes so cheap it's an ever-smaller share of GDP.

So it's very hard to get the growth rate up by a lot. I think it will happen. But one way to put it is, the better the AI is, the more the human imperfections matter. We're already at the point where they're going to put out new models, and most people aren't smart enough to see that they're better. Yeah.

So, I'm all for the new models. They will ultimately matter, but you need to restructure, rebuild almost every institution for the growth rate to truly accelerate, and that will happen. But it's a generational project. Is that a warning sign?

I mean, the counter to every AI doomer has always been, in my mind, like, the cars aren't even fully driving themselves yet. Like, surely the cars will drive themselves for a couple months before the cars are Terminators that are killing us all.

And is that kind of, like, an early warning sign for taking AI more seriously, or should we already be really discussing AGI at every fancy dinner party that you get invited to? Well, I think we're going to have AGI this coming week, so it depends how you define it.

I think driverless cars, as they spread, will have a big impact. And they're coming to Washington, DC this year. Oh yeah. That's an important city, for obvious reasons, and I think that will change many people's perceptions.

Yeah, because they do work, and they are much safer, and there are no associated problems, except maybe for the two days a year it snows here. So yeah, I think that will really matter. But right now it's, what, San Francisco and northern Arizona and one other place? Yeah.

And the Bay Area people, they are pilled, white-pilled, black-pilled, whatever, about AI, and no one else is. New Yorkers are the worst. I think in DC, the national security people are pretty awake. That's very good. Even the staffers can be pretty good.

We're way ahead of New York in DC, but that's not saying much.

Do you feel like humanoid robots are, like, you know, a Henry Ford alternative, somebody saying, like, I'm going to build a mechanical horse? Or do you think that humanoid robots can be the right form factor for, you know, embodied AI labor? Robots still seem far away to me.

I'm not dogmatically convinced they're far away, but I would say they're far enough away that it's difficult to forecast when they'll get here. So, my thinking is mostly about the smarts in a box more than the robots. The robots seem to need pretty controlled environments.

San Francisco streets are that, factories are that, and that's already significant. But just that there are, like, more robots than people in the world, that seems distant. On robots that could have a potentially, you know, faster and greater impact, or at least greater in the short term:

How do you think about uh companies like Zipline that are doing instant delivery via drone?

We had the founder of Zipline on last Friday, and it was pretty exciting to think about just reducing congestion on roads by eliminating all these big, heavy cars driving around, you know, a hamburger for 20 miles, you know, all day long. I'm all for that. It's pure gain, but I think those gains are pretty small.

And we could already do congestion pricing, as a few places have done, and solve that problem as it is. And most of traffic is not, like, your Amazon delivery person. It's a modest share of it. So again, I'm all for it, but I don't think it will noticeably change life. Yeah. On the AGI question:

I feel like these definitions uh get wrapped up in benchmarks or or you know IQ tests or human performance.

But I've always wondered if it's almost better to think about the impact of AI through an economic lens and say something like, AGI is here when AI is producing GDP greater than humans, or something along those lines. Is that a useful framework, or am I just thinking about it all wrong?

You know, Satya Nadella said some version of that, but with different numbers. You know, my AGI definition, just to be totally self-centered: when it's smarter than I am, I call it AGI. And I think that's coming within the next few days.

So, I get that's not the only definition, but surely it's a meaningful benchmark of some kind. Yeah, yeah, that makes sense. It's just, it's the difference between, like, fast takeoff, slow takeoff, almost like fast rollout versus slow rollout.

We could have, you know, genius intelligence that's too cheap to meter, but I think people won't care as much until it's actually everywhere, having an impact all over the world.

And so I think that, like, once it arrives, it could drop just like the passing of the Turing test dropped, where we just kind of move on with our lives. I have one very short last question I think is relevant.

Tyler, how much smarter do you think you've become due to AI, using it day-to-day, if at all? I'm not sure what you mean by smarter. So, I've been using it lately to learn English medieval history, and I know much more about that than I would otherwise.

I mean, more like maybe quality of thinking. No, I don't feel that way. You know, in chess that's been a factor, but I think maybe it will help some people who are less rational. Maybe I'm already there, and so irrational I don't know it, but I don't feel it's improved the quality of my thinking.

Is that dependent on, like, discoveries, or almost like frameworks? Like, if AI discovers the next, you know, law of supply and demand, that would make every economist smarter, right? I don't think it will. I think there are a lot of areas where we're never going to learn much more.

Not because the AI is weak, just there's only so much to learn. Sure. But I think there are a lot of people, you see this already, where, like, the AI tells them to calm down, or please rethink that email before you send it. Sure, that's quite significant. I just don't think I'm in that category at age 63.

Yeah, but it's helping a lot of people think better, just not me. That makes sense. Well, thank you so much for hopping on. We really enjoyed talking to you. Yeah, very insightful. We'd love to have you back in the future. This was a really enjoyable conversation. My pleasure, as always. See you around. Take care.

Talk to you soon. Bye. See you. Next up: AGI this week, folks. AGI this week. You heard it here from Tyler. Yeah. I wonder what he's referring to, exactly. Is this a new OpenAI release? I don't think it's 4.1. Okay. Because that was really oriented around... 4.1, because we're at 4.5. Anyway,