Writer Fredrik deBoer bets against AI economic apocalypse, challenges Dario Amodei's '50% job loss' claim
Feb 18, 2026 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Fredrik deBoer
We have Freddie deBoer from Substack. He's an independent writer. We're bringing Freddie into the TBPN Ultradome from the Restream waiting room.
Freddie, how are you doing?
Pretty good. Although, you know, "from Substack" still never sounds as cool as [laughter]
"from the Wall Street Journal" or whatever. Whatever anybody says about independent media, it's just never going to catch up. But that's okay.
Yeah, I don't know. I mean, do you think Substacks should actually do a bundle? This is a big debate.
It's the same question as: why can't I just pay a dollar for the one New York Times article I want? The reason is that the finances just don't work out. I would love to be able to do that from the standpoint of getting people to read my stuff, but my financial life would collapse. [laughter] It's getting that regular income that makes this possible.
Yeah. I think my view on it is that a bundle that is just systematized via a platform
is going to struggle. But a bundle where you get four or five writers who basically say, "We're going to build a media company together," that can work. Yeah. Because there are so many other small things you need to work out around compensation, what's fair, who's doing what, when you have a bunch of personalities coming together. It's hard for it to be entirely in code.
Anyway, let's move on.
The problem is that eventually, if you bundle enough people, you just have a newspaper. And I probably wouldn't get into the newspaper business in the 21st century, but
I love newspapers, though.
Oh, I love them. But I'm not investing in one, though.
No, no. I think the days are numbered. I might be the last person subscribing. Anyway, give us a little bit on your background, since this is your first time on the show. How'd you get to Substack? And then I want to talk about your wager and the future of AI, the impact on the economy, all of that.
Yeah. So I'm a writer. I was an academic; I was in academia for years. I used to work for the City University of New York. But now, frankly, I find not having a boss or a schedule, or ever having to get up on any particular day, to be very attractive. Now I just write books and I write my Substack, and it's enabled me to buy a house, so it's a pretty good living.
That's kind of a dream. So tell me about your wager. How did this happen? How did you come to define the actual bet, and what's at stake?
Yeah. So I'm [clears throat] frustrated by the AI conversation.
I don't know if you guys are familiar with the concept of a motte-and-bailey argument.
Yeah. For those at home, it's just like
You make a very extravagant argument, and when challenged, you retreat to a simpler, easier-to-defend argument. [snorts] So you might say the Christian God is real and built the universe and rules over everything, and then when you're challenged you say, "Oh, well, God's just a feeling, God's in the wind," whatever. Right? That's a motte and bailey. Mhm.
I just think that's all over AI, where the CEO of Google is saying that this is bigger than fire and electricity,
and people are saying it's going to end death, etc. But then when challenged, it's like, "Hey, you know, these LLMs might make going through legal documents a much more efficient process." There's this constant back and forth.
Sure. As far as the wager goes, the people in the AI world kind of come from this rationalist, Silicon Valley culture, and they say you should be very objective [snorts] and specific in your predictions, and you should put money on them. And Scott Alexander is a guy I've known for a long time, the blogger of Slate Star Codex and now Astral Codex Ten.
He is an AI enthusiast. He was a signatory on the AI 2027 document. Yep.
So I challenged Scott and said I believe that three years from now we'll be in a more or less normal economy.
That was chosen because, you know, AI 2027's scenario runs out to about 2029, so I felt like it was giving him enough wiggle room.
And I just defined a bunch of economic indicators and said that if any one of these indicators is violated, he'll win the bet and I'll lose. Wow.
And the reason to do that is that I'm looking for someone to put their money where their mouth is about whether this is actually going to cause a white-collar apocalypse and all these economic effects. He said no and would prefer to do a 10-year version, so we're kind of looking at that right now.
Okay. So I have some of these here: unemployment must stay under 18%. What are we at now, four, five percent? That feels like
See, but this is the point.
Yes. This is the motte and bailey, right? Dario Amodei, the CEO of Anthropic, last week in an interview with the New York Times, said that within the next couple of years, 50% of all jobs are going to be destroyed, right?
He said 50% of all jobs? I thought it was early-stage white-collar labor. No, look it up: he said 50% of all the jobs in the economy are going to be eliminated, right? And this is the thing that bothers me. This is why I made the bet.
Because I actually ran the numbers on the new argument, the bailey, which is entry-level white-collar work. The US economy is only about 60% white collar, and entry level is a couple percent of that, so you're at a percentage of a percentage. And if you quickly lose 50% of that, the unemployment rate goes from 5% to 8 or 9%. So that new statement can be true and simultaneously not that disastrous.
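The back-of-envelope math deBoer is describing can be sketched as a percentage-of-a-percentage calculation. Everything below is an illustrative assumption, not an official BLS figure; in particular, the labor-force size and the entry-level share are round numbers chosen only to show the shape of the argument.

```python
# Sketch of the "bailey" arithmetic: what happens to the headline
# unemployment rate if half of entry-level white-collar jobs vanish?
# All inputs are illustrative assumptions, not official statistics.

labor_force = 170_000_000    # rough US civilian labor force
unemployment_rate = 0.045    # starting point, ~4.5%

white_collar_share = 0.60    # share of employment that is white collar
entry_level_share = 0.10     # assumed entry-level share of white-collar jobs
loss_fraction = 0.50         # the "50% of entry-level white-collar work" claim

employed = labor_force * (1 - unemployment_rate)
jobs_lost = employed * white_collar_share * entry_level_share * loss_fraction

new_unemployed = labor_force * unemployment_rate + jobs_lost
new_rate = new_unemployed / labor_force

print(f"jobs lost: {jobs_lost / 1e6:.1f} million")
print(f"unemployment rate: {unemployment_rate:.1%} -> {new_rate:.1%}")
```

Even with generous assumptions, the result lands in the high single digits, the same ballpark as the 8-9% figure mentioned above: painful, but nothing like a 50%-of-all-jobs scenario.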
Right. So not the CEO, but the head of AI science at Microsoft just made a very similar sort of announcement.
Sure. And this is what I find endlessly frustrating about this conversation: it cannot simultaneously be true that we are imminently facing the replacement of an immense number of jobs in the economy thanks to AI, and also that 18% is an extravagant figure for me to set for this bet, if you really believe these things. So I'm just never sure how seriously these AI people take it, in part because the CEO of Anthropic, the head of AI at Microsoft, and almost everyone else who gets quoted in this domain has a direct financial incentive to exaggerate the impact of AI. Mhm. Yeah. I mean, Scott Alexander did take the bet, though, so he's putting his money where his mouth is and is certainly on the other side of this. Correct.
Well, we are looking at the conditions. He wants to do 10 years instead of three.
Oh, interesting.
And so this has actually turned into kind of an interesting econometric debate, right? So
people are asking, where did you get 18% unemployment from? And then there are something like 40 conditions listed here.
Yeah, there's a ton here. And I pointed out that we had 15% unemployment five and a half years ago.
Yeah.
Right. Because of [clears throat] COVID. Yeah.
And what I'm trying to do is set up the bet in such a way that a non-AI source is not going to screw me, right? You know, and
oh sure
in the Great Depression, which obviously was not AI-driven, we had about 25% unemployment at some point, right? So it's actually led to a kind of interesting debate about how you define a normal economy without letting the natural swings that are common to a capitalist economy decide the bet.
Yeah, I was listening to a conversation about AI apocalypses, and the person being interviewed was like, "Well, my p(doom) from AI is extremely low, but I think the chance of nuclear war is like 5%." [laughter] So they had a high p(doom), but not because of AI, which is a very hard thing to wrestle with. And that's what you're getting at.
I've talked on the show a bunch about these different groups using fear
as a kind of motivator to bend the world, right? If you want people to adopt AI, you tell them it's going to create such insane changes in the economy that any company that doesn't adopt as much AI as possible today is going to be destroyed. If you're trying to get as many young people as possible to adopt a product, you tell them all jobs are going to be wiped out. Or if you're pitching investors on Wall Street, you say all these jobs are going to go away. So the incentive to use fear is very obvious in all of this. And I think it's now coming back to bite a lot of these people, because the broader populace is saying, "I don't want AI. I'm good. I don't need it," even though they're using the products and they love them. Almost everyone can tell you an incredible story about AI in their personal life, even if it's as simple as "I made this cool illustration for my grandma for her birthday and she loved it," or "I used it to learn about this thing." So I think what's interesting is using this intense fear-based marketing to catalyze adoption and fundraising success, and then having it come back to bite you, in the sense that everyone's saying, "No, I don't want a data center in my backyard. I don't want my company even investing in this."
I mean, speaking of fear, I mean, you just mentioned nuclear war, right?
[snorts]
And I just think you can believe, [clears throat] as I do, that AI is going to be a very meaningful technology, and still find it strange that people are more scared of a robot apocalypse than nuclear war. Look, right now Russia has multiple Borei-class nuclear submarines off the east coast of America with the capability of raining thermonuclear fire up and down the eastern seaboard, right? A single modern thermonuclear bomb detonated above Central Park would destroy 80-plus percent of the buildings in Manhattan and hit parts of New Jersey and Connecticut, etc. Right? And again, this is the motte-and-bailey thing, right? You might say, well, that's a very extreme scenario, but every day I'm opening up my web browser and reading about how AI is going to exterminate the human race, or how AI is going to put us into this utopia where no one is ever going to die again, right? Part of what I'm trying to do is just claw out a normal space in this, right? To say there is a very obvious future where these tools are meaningful, eliminate some jobs, and have a lot of cultural importance, but where we're not suddenly faced with a fundamentally different version of human life.
So if it's not nuclear war, and it's not fire and it's not electricity, but it's also not the fax machine, are we talking mobile, cloud, the internet? How big is this thing? What does your world model look like for how AI progresses and diffuses through society?
Sure. We have to understand there are different kinds of importance and different kinds of influence. So you mentioned the internet and the mobile phone. Okay. Obviously the internet, and specifically the smartphone, the iPhone, have had massive cultural and social impacts on the United States.
It would have shocked people in the mid-1990s to learn that we have about the same productivity growth and about the same GDP growth in this country now as we did back then, right? Many, many people were invested in the idea that the internet was the thing that was going to restore this missing GDP growth; we're at half of what we were in the mid-1960s.
The internet is very meaningful and very influential, right? And yet economically it hasn't had the effects that were expected. And that's just how history works, you know? You always have to bake in the degree to which there's regression to the mean. We always seem to find our way back to this sort of mundane reality. And I look at things like when everybody got so depressed and disappointed after GPT-5 was released, because they thought it was going to be AGI, and you had all these lonely guys who were like, "Oh, this is just going to change life forever, and now everything's going to change." It's like, no: things are going to change, but slowly and in a distributed fashion, and you have to keep planning for normal life.
Mhm. Counterpoint. Maybe this time is different. [laughter]
Maybe this time is different, absolutely. But show me. I mean, here's the beauty of all this:
if the real stuff happens, yeah,
you're not going to have to convince me, right? If we really get AGI the way people think we will, no one's going to disagree, because the effects are going to be so profound there will be nothing to disagree about.
Okay. How did you interpret this latest piece in the Financial Times from Erik Brynjolfsson? I can't pronounce his last name. It says: while initial reports suggested a year of steady labor expansion in the United States, the new figures reveal that total payroll growth was revised downward by approximately 400,000 jobs. Crucially, this downward revision occurred while real GDP remained robust, including a 3.7% growth rate in the fourth quarter. This decoupling, maintaining high output with significantly lower labor input, is the hallmark of productivity growth. His own updated analysis suggests a US productivity increase of 2.7% for 2025, nearly double the sluggish 1.4% annual average that characterized the past decade. It feels like we're seeing glimmers of something changing. Is that not a sign?
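The inference being quoted is essentially simple growth accounting: labor productivity growth is approximately output growth minus labor-input growth, so revising payrolls down while GDP holds steady mechanically revises measured productivity up. A minimal sketch using the figures quoted above; the payroll base is an assumed round number for illustration.

```python
# Growth-accounting sketch: productivity growth ~= output growth
# minus labor-input growth. Figures are those quoted above; the
# payroll base is an illustrative assumption.

payroll_base = 158_000_000   # assumed total nonfarm payroll level
payroll_revision = -400_000  # jobs revised away over the year
gdp_growth = 0.037           # Q4 real GDP growth rate, as quoted

# Revising employment down shifts measured labor-input growth down...
labor_input_shift = payroll_revision / payroll_base

# ...which, with output unchanged, shifts measured productivity up.
productivity_shift = -labor_input_shift

print(f"real GDP grew {gdp_growth:.1%} in Q4")
print(f"labor input revised by {labor_input_shift:+.2%}")
print(f"implied productivity revision: {productivity_shift:+.2%}")

# The claimed acceleration: 2.7% in 2025 vs. a 1.4% decade average.
productivity_2025 = 0.027
decade_average = 0.014
print(f"2025 productivity growth is {productivity_2025 / decade_average:.1f}x the decade average")
```

The sign of the revision is the whole point: the same output produced by fewer measured workers is, by definition, higher measured productivity.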
I mean, we'll see, right? We have to actually look at the numbers as they come down the pike. We also have to be aware that there's a lot of built-in incentive for people to ascribe these changes to AI. So for example, cutting a lot of jobs is very unpopular, and firms tend to be sensitive to that unpopularity, right?
And saying, "Well, hey, AI came, we didn't make the decision, we just had to," that's a very easy thing to
Who would ever do that? That doesn't make any sense.
But to say that AI [laughter] was the reason to do it. Yeah,
we'll use anything as air cover.
Yeah. And so in general I caution people. Look, I've said this before: when I was in high school, a very distinguished scientist came to my science class. He was on some sort of board of the National Science Foundation, a geneticist, and he said that he envied us and also felt bad for us, because the Human Genome Project was going to so radically change human life that we were going to see things he couldn't imagine, but also the job of doctor wouldn't exist in 10 years. This probably happened in 1998, when I was in high school. So, you know,
studying medicine [laughter]
right right
and you quit
Right, right. And this is an exercise that people can do at home, which is to go to Google and look at the predictions about what people thought the Human Genome Project would do. Yeah.
Obviously genetic research in general is very important. But there was a real belief among very intelligent and highly credentialed people that we were on the verge of something absolutely humanity-changing. And life's more complicated than that. Yeah.
And again, I just want AI boosters to do more showing and less predicting, right? Show me the change instead of predicting the change.
Yeah. Well, thank you so much for taking the time to come chat with us. This is really fun.
Yeah, we have these kinds of debates and conversations all the time. Specifically, there's a popular influencer on Instagram who, every time a tech company does a bunch of layoffs and says "we did this because of AI,"
takes that and makes up this crazy story around how AI is immediately causing all this job loss. And I'm just looking at it like, I know the company; they had a lot of bloat. They're getting some
efficiency
efficiency increase because of AI, but certainly it wasn't like the 2,000 people or whatever
were sitting there being like, "Oh yeah, I just onboarded this new agent and now it does everything that I did, including going to all the meetings and,
you know, sitting around all day." So
Yeah, there's an incentive on both sides. Both the AI boosters and the AI bears have an incentive to say it's moving fast. If you love AI, you want to say it's going really fast. If you don't like AI, you want to say it's going too fast. They're both sort of aligned, so you get this super cycle. Very fascinating. Well, have a great rest of your day, and thank you so much for taking the time to come chat with us.
Cheers.
Let me tell you about Graphite code