Dean Ball: government threatening Anthropic's existence over a contract dispute is 'not how America should work'

Mar 4, 2026 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Dean Ball

them fix it fast. That's why 150,000 organizations use it to keep their apps working. And without further ado, let's bring in Dean Ball from the Restream waiting room. Dean, how are you doing?

Good to see you.

Good. How are you all?

We're fantastic. I've been enjoying your coverage. We had Ben Thompson on the show Monday. It's very funny that this debate is phrased as the Ball versus Thompson debate, because it feels like you agree on way more than you disagree on. But I would love for you to set the table for us and give us your rundown of what actually happened, because there are a lot of things that have been said but not enacted, and there's been a bunch of back and forth. So what are the rules of the road? What is the table of facts that you think are the most reliable here?

First of all, there are very few people I respect more than Ben Thompson, and who have inspired me more than Ben Thompson. So yeah, I think we agree on the vast majority of tech policy issues and other tech stuff. I think basically what's going on here is that there's a dispute about a contract, right? And that's fine. Grown-ups can have disputes about contracts, and they can fail to come to terms on business, and then they don't sign the contract or they cancel the contract, and that's fine. The problem here is the punishment. The problem here is that the government is saying, if you don't do business with us, we'll destroy your company, or we'll threaten to, and that just can't be the way that America works.

Yeah. And, you know, there are people who say, including, I think, an argument that Ben made: well, what should Anthropic have expected? The way they talk about AI being so powerful, how could they not expect the government to bully them and quasi-nationalize them in these ways?

And my view is, well, no, what we should do is have modest technocratic regulation so that we can avoid these sorts of extreme outcomes, of either total decentralization, total anarchy, or Bolshevism, right? Like, we can avoid that. It's called capitalism. It's called the system that we have now. And I've spent the last two years being confused as to why we don't try to do that.

Mhm. So in terms of

Okay. Okay. So I guess so far Anthropic has not been labeled a supply chain risk.

Yeah, officially. Is your belief that the Department of War is just really busy right now for obvious reasons and they haven't gotten around to it yet, or do you think that it's coming? Where do we actually go from here?

I truly don't know. I operate under the assumption, so, Secretary of War Pete Hegseth said last Friday that they're going to do it

and they're going to do an extremely expansive version of it. So I operate under the assumption that he's not lying about that. I basically take him at his word. It'll happen eventually, but I don't know.

Do you think he, I mean, I know that there was some debate over the shape of supply chain risk, whether that means: you have a partnership with Amazon, Amazon works with the government, so you can't work with Amazon at all, or Amazon just can't use your software, your model specifically, on government contracts. Do you have any idea of where the clarity might be found there?

Well, so in terms of what the supply chain risk designation actually allows, my reading of the law is that it would be only in the fulfillment of the government contract and not more broadly. But

there are a lot of things. First of all, you could just write it more broadly and try to fight it in court and see what happens, right? And who knows, right? You could also jawbone, right? You could have the president call Amazon and say, "Hey, we don't think you should be working with these guys." Very hard to sue about that, because that's not a policy action.

Um, and there's also

a very, very large range of other things that the federal government could do if it wanted to, if it wanted to really harass a company. And the irony of this is that we all know this because of what happened to Elon's companies during the Biden administration.

I objected to that then, and I object to this now.

Yeah, unpack a little bit of that history. Take us through it.

Yeah, so, you know, basically Elon being kind of a political enemy of the Democrats, the Biden administration brought on a very large number of investigations and regulatory actions, FCC, DOJ, a lot of different other things. I think there's a chart somewhere, and it's like 12 to 20 things, a huge number of different things that they brought, and each one of those is going to entail millions of dollars in compliance costs and legal costs for these companies. So it's a non-trivial amount of harassment that's going on, and again, I think we shouldn't do that kind of thing.

Sure. Is this broadly under the umbrella of lawfare?

Yeah.

Okay. Walk me through this idea that the fissure in AI politics right now is not liberal versus conservative, Republican versus Democrat, e/acc versus EA, safety versus anti-safety, but instead takes advanced AI seriously as a concept versus does not take advanced AI seriously as a concept. It feels like you and Ben Thompson are both taking AI seriously as a concept. Does that map with your reading of his arguments and your arguments? It feels like there's just a different downstream understanding of how this plays out in a world where advanced AI is a serious concept to grapple with, different from making pencils.

Yeah, right. I mean, I think the other way to put this would be: do you think that Dario Amodei and Anthropic, and, it's worth noting, the other frontier labs, are directionally right about where this is going, or do you think they're not? And also, not just do you think they are, but do you really take that seriously? Do you own the consequences of believing that

these guys could really be correct? And I think a consistent theme of my writing over the last year especially has been being kind of confused that there are people who just do not take that seriously. They might say that, oh, AI is progressing very quickly, but their downstream assumptions and conclusions about what we should do don't seem to reflect that. It's a very hard thing to do, but yeah, that would be my broad take.

So this is where I start to get confused, because if I take advanced AI seriously, and I do, then it feels like nuclear-weapon-level technology in a few years. Maybe it's five, maybe it's two. But that leads me more into the Ben Thompson camp, where I say, well, of course there's going to be a dramatic showdown between the US government and the private company that has invented something akin to nuclear weapons. And so walk me through how you're thinking about this: if we're developing powerful new technology, how should the government actually interact with that, aside from the lawfare in the short term? What's the long-term solution?

Yeah, I mean, the long-term solution, I think, is probably going to be a hybrid system where there just is regulation of these frontier labs, and there is government oversight of them. But it's like, you know, JP Morgan is a private entity that lobbies against government policy all the time, right? They're a totally independent actor from the government, and yet they're also deeply intertwined with the government. I'm not saying that we should regulate AI companies like banks, I hope not, but I'm saying that technocratic regulation exists in capitalist republics, and it's not communism, and we didn't nationalize the banks, right? So my proposal would be to take step-by-step approaches to solving actual problems, not just doing regulation for its own sake, but really trying to shape this so that it isn't some sort of violent, or not literally violent, but some sort of conflict, and instead is a gradual development of, yeah, this is how the government and the public ultimately achieve oversight of these labs. Because I would agree that if we are going to this place of near-term superintelligence or something,

that isn't going to just exist in an unregulated fashion. That's kind of been my point for a year and a half now.

Yeah. So what's the gap between regulation and nationalization? Because at a certain point, if we're in the nuke analogy, it feels like the government takes that and then they get to decide where they point those nukes. And as I was reflecting on the creation of the atomic bomb, I was sort of left optimistic about the American project and optimistic about the way that played out. I don't know if you have a different view, but I feel like the government did get control of nuclear weapons. There hasn't been a nuclear war. And the whole concept of, the person that you're voting for will have the nuclear football, they'll have their finger on the button, do you trust them? That is an animating force in our electoral process. Whether you like the current administration or the previous administration, whether you trust them or not, Americans vote on that question every four years. And I think that that's a good system. I like that.

Mhm, 100%. I think that the nuclear bomb analogy starts to break down. Okay. But one thing I'd say about nuclear weapons is that it's true we got nuclear weapons, and that went, I think, basically pretty okay as far as these things go. But what we didn't really get is nuclear energy, nearly to the extent that we could have.

Completely agree with you on this. Yes. Continue.

Right. And so that created this single regulatory sort of point of failure, and single points of failure are often problematic in complex systems engineering. So, you know, what do you do? Well, maybe we shouldn't have done it that way. The equivalent on the consumer side, the economic benefits, we didn't really get those as much as we could have. So that would be one thing. But the other thing is that even nuclear energy, you know,

it's not directly useful to you. You are not personally going to express your liberty,

exercise your liberty, through nuclear energy or nuclear bombs, but you will with artificial superintelligence, and you certainly will with today's coding agents, right?

And so the question is, should the government be able to own that? And my view is that without really substantial guardrails, that is just almost certain to devolve into a profound act of tyranny. And so I think that having the companies separate and apart from the government, but still overseen by the public through regulation, that is kind of my view as to what is ideal. But the problem with that view is, (a) it's really hard to get it right. Regulation is tough to get right. You want to get it exactly right. And it's possible that all my ideas are terrible and they're all going to go to [ __ ], right? That's totally possible. But what you want to do is at least try to get that right. The problem is that that view has been characterized very often as sort of supporting regulatory capture.

Mhm.

So it's like, you know, what did Anthropic expect, the way they talked about these models? Well, maybe they expected California's SB 53 to pass and for there to be modest, technocratic, light-touch regulation that only affects the frontier labs. And I don't think that's lobbying for regulatory capture; I think that's actually just the prudent thing to do.

Even if I don't agree with every regulatory impulse that Anthropic has, I think that in broad strokes that's the play.

Yeah. How do you Oh, sorry, Jordy, you go.

Yeah. I wanted to ask how much you think information asymmetry played into the dynamic of the negotiation and the results. I mean, you have the Department of War, where presumably everyone on that side has some idea that we're about to enter a conflict, and Friday comes around. The Department of War knows they're hours away from a strike. Dario, according to Emil, isn't really engaging with him, and out of that maybe the admin says, hey, this is not a partner that we can rely on, and we don't want other groups in the government to be relying on it as well, and it just kind of blows up.

Yeah. I mean, I think some of this seems like it's about personality clashes. Some of it's about politics, undoubtedly. I think some of it is also about principle, though, right? And I think the DoD is not crazy at all. My problem here has never been with the DoD's policy principles. They're saying Anthropic can't put in usage restrictions that look like public policy, because we set public policy. Dario Amodei doesn't decide when autonomous lethal weapons are ready for prime time. We do. And I think they're 100% right about that. But the solution to that problem is, you don't try to destroy the business of Anthropic. You tell Dario no thanks to the business.

And you move on and you find somebody else who will do business,

which is what Dario said in the CBS interview. He said there are other providers; we can agree not to work with each other.

Sure.

Yes, right. That's fine. That's fine. I mean, frankly, the tokens are probably higher margin for Anthropic sold to private entities, not to the government, because Anthropic is subsidizing them for the government. So it's a bad business, as we're seeing.

What's a good outcome here?

I think we cancel the contract. And look, the other thing is, the DoD says, well, we have all these other contractors, there are the Palantirs of the world, and Palantir might rely on Anthropic, and so our contractors can't be relying upon Anthropic, because if that's true then we're still relying upon Anthropic. So it's fine for them to say no one who has usage restrictions like this can do business with any of our contractors in the fulfillment of DoD contracts. I think that is perfectly fine.

Isn't that basically supply chain risk by any other name?

Well, no, because supply chain

sorry

supply chain risk would be done at the company level as opposed to at the

at the contract level.

Yeah, yeah, I agree with you. Also, can you explain a little bit more about the logic of supply chain risk? Because I was shocked that DJI is not on there, DeepSeek doesn't seem to be on there, Qwen doesn't seem to be on there. This feels very bad if I'm just trying to think of my short list of things that I might want to supply chain risk. Yeah, it's good Huawei is on there, but I've got 12 others.

Unitree.

Yeah, yeah. Unitree, I don't think, is on there. Like, when should the government apply this? Does it matter if it's international? Is the national versus international designation important?

Well, one thing I'll say is we don't know everyone on the supply chain risk list, because sometimes those designations can occur in classified contexts, but

it is the case, I think, that the supply chain risk designation, if you go and look at the statutory history,

it's really intended for foreign adversary companies. This is really about China, frankly. And so it is kind of wild, whatever you think of the chip export policy of this administration, of wanting to sell more Nvidia GPUs to Chinese companies, it is kind of wild to have that policy and then also to say that we're going to treat American companies that we don't like as if they were Chinese companies, as enemies of the state. I don't think we should be doing that.

Can you talk to me about what your 2025 was like, what your goals were, which of those goals have been achieved? Give me sort of a scorecard on AI policy in DC.

Well, you know, what I wanted to do, I mean, first of all, obviously getting the action plan done, that was an 18-to-20-hour-a-day kind of thing for four months of my life. So that was an intense sprint, and getting it done is obviously a career highlight for me. I didn't get it done unilaterally, of course; a lot of people worked on it. But that was a big part of it. Sort of course-correcting from what I thought were some of the excesses of the Biden policy, a lot of which was, by the way, this national-security-inflected control over the labs. So it does disappoint me to see us sort of regressing back into a Biden-era mentality, which is what this is.

So that is a little bit concerning to me. And then more broadly, you know, I just want to AGI-pill people. I want people to feel the importance of what is going on

and hopefully, you know, try to

think more strategically about this issue, because the thing is, it can't just be one or two people doing this. It has to be a community of people.

So walk me through what AGI-pilling means in March of 2026.

more Mac minis than people on Capitol Hill.

Yeah, I mean, there are terms like software-only singularity, the idea that robotics will be delayed. Dario talks about the end of the exponential. He doesn't seem necessarily superintelligence-pilled. He seems like: we will get someone who's 150 IQ and you will be able to run it in a data center, what does he say, a country of geniuses in a data center. It's not one god that's like

time traveling and teleportation. It's not entirely all this crazy sci-fi; it's a little more narrow. So in terms of timelines and expectations, what are you telling elected officials these days?

Well, I think the very challenging part is that there's the trajectory of the technology and then there's the effect of the technology in the near term, right? So the technology that we have right now is already legitimately science fiction in my mind.

But the world is not obviously more science-fiction-esque than it was three years ago. We don't have flying cars. We don't live on the moon, right? And we don't have Dyson spheres assembled around the sun or whatever. And the reason for that is that it takes a very long time to transform institutions and to transform the way that organizations are structured. And so, again, the important thing to me is that the actual winner of the AI competition is going to be the country, the civilization, that is most imaginative at this. We went from artisanal types of manufacturing to factories to eventually assembly lines.

Right, and we're going to do that, but for the next generation of things, and inventing that. What is all that? What does it look like? That's where the real economic benefits come from, and the real things that change the world come from. But those take time, and they're really conceptually difficult.

How do you think about competition with China with the backdrop of nationalization? I heard this quote that was like, if you nationalized SpaceX a decade ago, you just get NASA. You don't get SpaceX if it's owned by the government. It needs to be completely independent. And so, with that backdrop, what does a healthy American AI lab ecosystem look like that can actually go and win on a geopolitical scale?

Well, I think, first of all, yes, it's got to be private companies, because they can move faster and be more innovative. Also, very importantly, it's about trust, right? If we're going to sell these services internationally, we need other governments and foreign companies to trust our companies, to trust that they're not just ultimately linked to the US military, right? Because that's the problem with doing business with China: you sort of know, well, at the end of the day, every company in China is ultimately a military asset. And that diminishes trust in Chinese companies. That's something really profound that we're eroding with this action right here.

And that worries me quite a bit.

So I've been thinking about that in the context of sovereign AI throughout Europe, and I was just like, look, France didn't need its own Google. They could just use Google. It was fine. But when you actually take AI seriously, then you start to wonder, okay, well, does their strategy, as behind as they might be, does it actually make sense?

Yeah. One of the things I hope we invent: open-source models I think are useful and good, but I actually think what's more important right now is open-source infrastructure, such that more people can really put their own imprint on models

and more people can feel that sovereign experience without having to build multi-gigawatt data centers and whatnot. I hope we can do that. That's a really

like a national compute reserve or something like that

that academic institutions could access.

It could be like that, but also even more than just the compute itself, all the infrastructure that connects the compute. I think there's something really interesting there, and I'm excited to see more people do things like that.

Yeah. What do you think about the concept of an FDA for new models? You're releasing a new model, and it has to pass a battery of tests or benchmarks that are maintained by a federal institution, in the same way that when you release a new cancer drug, you have to test it on mice first, something like that.

Yeah. I think the problem with that is that there's too much to test for, because the models can do too much. So we don't know what to test for.

We have very clear endpoints when we're testing a drug in a person, right? We have very clear things that we want to test for. It's much harder to do that in the context of a generalist AI model. So that's one of the things that worries me about licensing, in addition to the fact that licensing can slow things down, and regulatory capture is very real. It's a very real thing. I've always thought of AI as being a little bit more like financial services, where the regulation is not actually targeted at the products per se. The regulation is more generally targeted at the entities themselves. So we regulate banks. We do regulate loans, but we do that by way of regulating banks. There's not a government agency that approves every single loan that a bank makes. What we do is we look at the entity of the bank and we say, this is a sound institution, this is a soundly governed institution. That's the kind of thing, and again, I don't think we should regulate AI like banks per se, but I think that is a useful structural intuition for thinking about AI regulatory policy.

Yeah.

Yeah.

Jordy, please.

How much attention is DC paying to the news out of Square, or Block? The pushback from Silicon Valley was, hey, you're blaming this on AI, but it doesn't feel like that; maybe that's a small part of the reason to do this RIF, but certainly not the entire story. But then a bunch of groups are going to use this as example number one of changes in the labor force due to AI.

Well, it's a bad combination, because it's all these tech companies that overhired in the immediate post-COVID period, and now they're also the most exposed to AI, right? They're the ones that are adopting AI the most aggressively. They do the things that AI is best at; there's lots of software engineering. And so it's both things happening at once, I would say, and that creates an exaggerated, sort of funhouse-like effect of the actual impact of AI on the labor market. I'm very skeptical that we're going to see things like that at a wide, firm level anytime soon in the broader economy. I think we could see it more in the software engineering discipline, because there are a lot of big software engineering enterprises that hired too many people, right? So I think that's a very live possibility. But DC is DC. People are going to interpret the news in whatever way is convenient for them, you know. And when you play the long game, you have to try to just build credibility over the long term with people. But yeah, I mean, everyone's going to say this is evidence for whatever I'm on about.

Yeah. Hypothetically, how do you think something would play out if there was a battle on the moon and NASA says, "Hey, we want to buy a bunch of rockets from Blue Origin or SpaceX, and we want to go to the moon to fight this war on the moon,"

and the private company CEOs say, "No, we're not interested in that. We want to just launch more satellites." It feels like another analogy for the nationalization conversation. And maybe it's just a weird time, because

I feel like in previous eras, you go back to the dollar-a-year men, a lot of American industrialists sort of stepped up to the charge and were fans of the government, and now it feels like we're more divided than ever. But what is the correct way to marshal a cornered resource as a government?

Well, I would think in practice that the companies, the SpaceXes and Blue Origins of the world, would actually eagerly help the government there. But let's take your hypothetical and say that they didn't.

There's an authority called the Defense Production Act, Title I.

Yeah. There's something called priorities authority, and the government can quite literally just put itself at the front of the line for any launch of any rocket, and that's what I would suggest they do in that situation, yes, rather than commandeer the rockets themselves. That's kind of what I would suggest.

That makes sense. So the Defense Production Act, but it requires extraordinary circumstances?

The president can do it basically whenever he wants; the president can make a unilateral finding. The other thing I'll say that's funny about all this is that one of the things I was enthusiastic about putting in the action plan was a little-noticed provision basically saying that in the event of a national emergency, DoD needs to work out with the hyperscalers how it's going to commandeer the data centers

if it needs tons of compute in some sort of national emergency. That's in the action plan. So I'm not here saying total libertarian, you know, whatever, no government involvement. That's not what I'm saying. I'm just saying that Anthropic has a right to exist.

If they don't do business with the government, we shouldn't kill them. That's all I'm saying.

Yeah, yeah. I think the Wall Street Journal called it a fight about vibes. And I think most people are not at the point where they say, yes, AI is actually nukes, and yes, one company should be destroyed if they don't give it all up. That's a very middle-of-the-road position that I think a lot of people agree on.

Yeah, 100%.

Yeah, that makes sense. Jordy, anything else?

Not for now.

Well, thanks so much for coming on.

Yeah, good to see you.

Hey guys, thanks so much. Good to see you.

Great. We'll talk to you soon. Cheers.

Goodbye.

Let me tell you about vibe.co, where DTC brands, B2B startups, and AI companies advertise on streaming TV, pick channels, target audiences, and measure sales just like on Meta. Another thing:

Tyler, Dean said that the reason we don't have Dyson spheres is because institutions aren't quick to adapt. So you have no more excuses not to build a Dyson sphere.

Yes. Make it happen.

This is a high-agency organization. We have very little institutional overhead.

Uh

No, no, in the limit, I believe it. I mean, it's a long chain of events that require all of these different diffusions to happen. But

we've got to ask Jared his Dyson sphere timeline.

Yes, yes. I mean, obviously the big question is moon versus Mars, but I like getting up to speed on the plan for the Dyson sphere. That should be firmly within the hundred-year plan for NASA. I believe it. Let me tell you about the New York Stock Exchange. Want to change the world? Raise capital at the New York Stock Exchange. Just do it. And let me tell you about Graphite, code review for the age of AI. Graphite