RAND researcher on DeepSeek's real lesson, export control failures, and why techies should move to DC

Jun 13, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Lennart Heim

geopolitics. How are you doing? Hey, good to see you. I'm doing great. How are you guys? Uh we're great. Uh would you mind uh kicking us off with a little introduction on yourself and what you do day-to-day? Sure. Yeah. Leonard Time calling in from Washington DC. Um I'm a what do I call myself?

Researcher working on AI policy governance background electric engineering. I think part of my mission is to to bridge the technical world and the policy world and that's what I've been doing before before AI was cool the last four or five years actually. Yeah. And now AI is cool and since then I've been ever busy.

It is extremely cool. It's driving the news cycle every single day. Yeah. Pretty much the coolest technology. Uh uh do a little uh level setting for us. What is uh the dominant story? How is the race between the US and China shaping up uh on the AI front? Um, what are the interesting threads to pull on?

What are the most important companies to be tracking right now? Yeah, well that's a tough question where to start. I think it's like still the deep sea trauma is like still still there, still dominant, right? I think that was like this initial story. The US is like so far ahead.

there is no Chinese model being closed and then deep sea came out and definitely had a big impact in DC but I guess also in the broader world as we saw in the stock market and I think it's pretty dominant here still in DC you know like in DC always talk about things which Silicon Valley did half a year ago even a year ago so that's still part of the dominant discussion and then export controls I guess is also a big topic right now right you got the US China trade talks going on right now where they're negotiating beyond export controls which impact AI but export controls famously had an impact big impact on AI in China.

Yeah. So, let's talk about deepseek. Uh, one of the narratives that came out of the Deep Seek story was, uh, all the American labs are completely cooked. Uh, and then we got Jevans Paradox and Nvidia popped back up and and and all the labs seem to be making tons of money. OpenAI crossed$10 billion in revenue run rate.

Uh, and things seem to be going well. But what what are the wrong lessons from Deepseek? What are people getting wrong about the Deep Seek story? How relevant is Deep Seek? We're still hearing about it as being important geopolitically in some of these jump ball nations that aren't quite allies, aren't quite uh rivals.

Um, and they might be interested in building a an open source stack on top of Huawei Ascend, uh, DeepSeek, Manis, that type of uh stack. Uh, what what are the the wrong lessons that people are learning from the Deepseek story? being deepseek made such a big noise because we're just lacking a good reference class.

They just came out pretty impressive model. It was openly available and we had a paper attached to it, right? Then it came out also saying it cost us 6 million. Does anyone of us know how much it cost OpenAI or like Enthropic or any of these other companies what to do? We don't, right?

And I think this just really made headline news. We like sometimes get like Sam Oldman going on stage saying I think at some point he said it costed him 100 million to do GB4. Y but 100 million what? Just a training compute? Is it is it just like buying the the hardware? Probably not. Is it paying the staff engineers?

So like the reference class is like pretty important here. But then it costed them 100 million what like this not two years ago even more and now deep6 million. So I think people just like we're lacking.

It's like wow this sounds impressive but like they had just no reference class what's actually going on within the companies. And I think a couple of weeks later Andropic CEO Dario put out a blog post saying like actually we can train to the same efficiency. And I think this was like for people monitoring the market.

Not surprising. We just like know the trend lines and we know it gets like vastly cheaper every single goddamn day to train such a system. So to that somewhat surprising, but like you look at this, you don't have a reference class and then it just looks really impressing, right? Yeah.

I I I think a lot of the a lot of the more technical analysts in the AI world uh were excited to see the open source results, excited to hear about FP8 training. For example, we talked to Dylan Patel. He was telling us, "Oh, well, OpenAI was doing FBA training years ago.

They just didn't tell anyone because it was kind of like their edge, I guess. " But all the labs had kind of figured that stuff out. They just hadn't told everyone uh exactly how they did. Deep Sea came out there and told everyone and sounded like they really had jumped ahead.

Instead, it seemed like they just jumped to the frontier, had not fully advanced beyond the frontier necessarily, um, but then done it in an open source way. What was the read on the strategy to open source?

It felt like there was a very deliberate uh like that launch if it had been closed source, $20 a month, you know, very difficult to access, rate limited, lots of safety harness around it. Um that we would have been telling a very different story about DeepSync if they had said, "Oh, this is this cost a trillion dollars.

It took us forever. " Instead, it seemed like there were like a number of superlative and viral hooks that really captured the American news media and drove a big uh cycle. How much of that was intentional versus how much of that was just uh like a like a random byproduct?

Yeah, I think as you're saying there were just like so many check boxes that ticked. It was didn't during the first week of Trump. It was an open release model weight, right? It was surprisingly so cheap. It's coming out of China. Y. Oh, and it was like the first public reasoning model. Yeah. Right.

So like all of the things happening at the same time. So like wherever you're coming from, DC excited because it's out of China, but we thought we fixed it. We did export controls. Right. So they can barely oh it's an open source model, right?

And everybody looking at like actually how does this reason work because OpenI did it before but they didn't tell us, right? So again just ticking all of these boxes. How much of this was deliberate?

Um I don't think this is like a state coordinated effort that they said like this is how we're going to do I think DeepSync had like a pretty open commitment to like releasing models publicly before, right? Um, so they were just been following that.

Um, it just it was just like to some degree if there's a marketing strategy behind it, they did it right. I just expect most of the things are just like how the tech works, right? And they were early on reasoning. They were like one of the first companies doing it publicly and then it just made a ton of noise.

And again, and even though chatbt had a reasoning model available, it was only available at a premium tier. And so for a lot of AI consumers, I think their first interaction with a reasoning model was Deepseek or the Deepseek app uh which was just a very interesting kind of uh like go to market strategy essentially.

Um and also seeing the chain of thought, right? Yeah. Like the reasoning which we didn't see before, right? Like this changed a ton. Yeah. Even a cost thing a lot of a lot of financial uh folks you I mean you mentioned that Silicon Valley was excited about open source.

uh Washington was you know interested in the story because of the China angle but also the financeers the Nvidia investors were interested in the financial impact of this uh so yeah it was fascinating I got it wrong right yeah yeah yeah for sure my take uh how much of the existence of deepseek do you ascribe to China's positioning around high frequency trading oh that's an interesting one I mean like deepseek sits in the the hedge fund right yes and they were initially buying 10,000 A100s or 5,000 A100s in 2022 I think before the export controls.

Yes. And tons of GPUs always still go to hedge funds because they well we don't exactly know what they're doing but they definitely like accelerated computing and partially machine yeah machine learning bunch of other stuff. Are they all training large language models? Probably not.

And I think then one theory I've heard I'm not sure how true it is that was like the like the crackdown in China on the tech sector right including high frequency trading hedge funds and more.

this made them then pivot to AI is like well if we can't trade on the market anymore what should we do with all of these GPUs or it turns out AI is this new hot cool thing um and then I think the other thing which is important here when you have these hedge funds they do like really lowlevel optimizations like writing their own kernels they just they don't do like just high level pietorrch they go down there and try to get like every single point of utilization out of the models uh out of the hardware and this helped them right we know like deep has pretty crack engineers I think this was like pretty obvious once we saw like the code Um, and this then helped them to then train a pretty pretty decent good model.

Let's talk about Llama. The Llama 4 launch was a little rocky. Obviously, there was allegations of some benchmark o just overoptimization. It seems like a lot of the the the big frontier labs or the independent labs kind of just don't even care about benchmarks anymore.

It's all about big model smell and and just uh the actual performance of the products.

Um, at the same time, I've been really interested in this narrative of llama as a very important American export, as a counter to the Huawei SN, DeepSeek, Manis stack that other uh other countries might be comping because yes, they might be interested in doing a deal with Stargate and building some and buying a whole bunch of Nvidia chips, but what are they going to put on top of it?

Are they going to maybe they don't want to go all the way with OpenAI or Anthropic? llama seems like an interesting uh wedge there. How important is that? What what is your reaction to the the metal llama strategy and how they're being perceived in Washington right now? I would be curious on your guys takes on this.

I'm like still unsure how how I feel about this. I mean, let's just start with Deep Seek. I think it's an interesting moment in time when all of the leading US models are just behind closed doors and the best open model is coming out of China.

I'm worried about it because this model is partially biased and it lies about the CCP. It has some Chinese propaganda in it, right? Why is this bad? Imagine you're like some developer in another country. You want to build a new education app.

You're using this model and then you know you teach your kids Chinese propaganda. I think that's great, right? I think it's even worse if you think about sleep agents. People know about this paper from a propic, right? You can basically have like sleep agents sitting in this model.

When you talk about a certain topic, it starts giving you code with vulnerabilities. but this can apply to anything else and it's like really hard to look for it because you don't know what to look for, right? Um so that's worrying. I don't think that's the case with Deep Seek right now.

But going forward in the future, the case just like if you use an open model, you don't really know what's in there, what's hiding in there, right? This goes for all companies, but I think we got more reason to mistrust um certain other companies than for example Meta.

Yeah, it kind of makes me think that uh it kind of makes me think that Anthropic might have a real business on their hands with uh AI safety.

Obviously, they they branded it all around, you know, oh, like, you know, we want to prevent AGI doom and fast takeoffs, but if you just think about like, hey, we're going to launch this model in our organization and we want to make sure that the code that it writes doesn't have random vulnerabilities, that feels like a a very important like AI safety product almost to to to evaluate these models, test them, and essentially root out sleeper agents.

And so I I've all of a sudden flipped to like very excited about the work that Anthropic is doing on that front. Uh Jordan, do you have any questions? I think this could be like a real benefit to them, right? And and that's like that's kind of how it started.

Reinforcement learning from human feedback gave chise was like some people trying to make the models more useful but also less harmful.

And turns out they turn out to be a bunch more useful and I guess like we will see more about this going forward and particularly for code or even if government wants to adopt these models. Yep. Right. You just better be goddamn sure. Um my question was around uh news from the last 24 hours basically.

Um, and specifically people talk about the application of of AI in warfare, but it's always at this sort of like very generalized high level of like, oh, we're using AI and and and I and I can easily imagine, you know, um, simulation scenarios, data collection, processing, all that stuff like kind of going up to the point of a conflict and then autonomous, you know, drones and things like that.

But what what are the ways in which um you've understood AI to be used in in an actual like conflict scenario? Sure. I mean I mean look I don't do narrow AI or drones to some extent. I expect there's like a bunch of just like computer vision applications and more.

But if you look at like frontier like if you look at large language models I think one big application just intelligence. Mhm. Right. Literally my job on a day-to-day basis is like open source intelligence. I want to know what's going on around the world.

Nobody's telling me well except Deepseek they sometimes publish a paper but like I don't know what the US companies are doing I don't know what's going on in China and the semiconductor industry so just a bunch of data I feed into my LLMs and ask them what's going on right and for intelligence operations it's the same we saw in the last 24 hours pretty targeted attacks Israel is famously known for pretty good intelligence operations and I think they exactly knew who they were hitting um based on these types of operations right and if you just have like a nice backdo smartphone and you just like feed all of the chat transcript in an LLM can make sense out of this, right?

You can do it at a larger way bigger scale to identify the targets and also identify patterns which again might be harder for humans to do. At least you can like auto like well no like augment humans. Yeah, I mean they used to do that with like keyword search. Okay, like let's we have a ton of data.

Who's talking about bombs? Who's talking about attacks? And uh people would use code words and LLMs are really good at deciphering that type of stuff. Uh fascinating. give us a state of the union on export controls for AI chips. Um what is uh what's expected on the horizon?

How has the landscape changed over the last you know couple months at least? Ray um I mean when Biden left the office there was like two two big rules coming out. The one rule was I think many people are tracking the foundry due to diligence rule.

It was a big hiccup where Huawei produced a bunch of chips over TSMC in Taiwan which they were not supposed to.

Y they did it via shell company 3 million chips that's quite a lot that's way more than they produce that's what they got their hands on and government reacted with a new rule basically telling TSMC like godamn you got to be do better due to diligence and like check who your customer is and make sure they don't end up with Huawei because they broke two rules no chips for Huawei at all and please no big AI chips for any entity in China right so this was big move that's still the case I applaud that I think this was a pretty sensible move um to just make sure they can't again get their access to that many chips.

And the other one was the diffusion framework which I guess you probably discussed at some point before right like broader controls to some all of the world deciding to some degree dealing with the Gulf States dealing dealing with a bunch of in between countries um where people were worried about chip smuggling but also data centers being built there.

Malaysia is being such a country where a bunch of data centers being built there. really interesting reporting today where I think the Wall Street Journal reported people were like going back and forth on an airplane with hard drives to train models over Malaysia. I saw that. We were gonna read that.

Wild which again speaks to the efforts they're going to to get access to AI chips and how creative they get. And also from my point of view, I'm just getting surprised every single time of um how creative they get to get access to AI chips. Anyway, that's what they did, but this one is not being enforced right now.

It's not officially revoked yet. We're still waiting for it to officially be gone, but at least commerce Latnik and Kesler put out a statement saying they're not enforcing it. They will come up with a replacement rule.

And this replacement rule then needs to deal with well, yeah, basically all the countries except the clearly competitors, China, Russia, North Korea, they're all controlled. They will continue being controlled.

But what about all the other countries like the Gulf States, like countries with a bunch of smuggling going on? And yeah, me as somebody from outside the government is waiting to see more moves there. Yep. Uh I have one last question. Good Jordy.

Um uh today a bunch of technologists join the Army Reserve in the newly formed Attachment 2011, Executive Innovation Corps. Can you tell me a little bit about how you're seeing technologists and the tech community plug in in DC? Um what is needed there? What do you want to see on the horizon?

This seems like a step in the right direction, but it feels like a job's not finished moment. Yeah, I think that's the right audience. Like a call to all the techies to come to DC. Y um the pay is not better here. It's also pretty damn hard. We're living on a swamp here. But I think there's a strong case for impact.

So as a techie, I think like when I talk to like young undergrads like oh in DC like I just did like a bachelor's in machine learning. What do I know?

was like it's plausible you're the smartest person in the room in many times in DC which is way harder than in San Francisco people don't know a policy but sometimes can be a benefit to be like bit like a bit ignorant about like how the game is being played just like look here just here's just the math works out right yeah so like I love techies coming to government just like crunching numbers I think there should sometimes be less opinionated just like look here are the numbers we crunch them could suggest this could suggest that but just like grounding it like an analysis like an objective things when I think about China and AI chips I can crunch the numbers on how fast the chips are how many they're producing and in contrast how fast American chips are and how many we're producing right and this is just a good analysis to be run how this then impacts the broader scheme of the AI ecosystem and who's winning the AI race and what it's a different question but we can then take it from there right so if I see stuff like this be it in again in the military but also just generally in government particularly anything around export controls anything around just understanding of what the hell is going on.

Just like having somebody explain like hey deep sea ging model what is everybody talked about distillation like really just explaining the basics is like yeah things people should do and I think people with a CS like an undergrad degree in computer science they can do it and I would love to have them there.

I'm hiring by the way so if anybody's listening please join me apply. Amazing. Yeah, in general, I think it's good that we don't just have, you know, these sort of high-profile technologists only joining Doge.

Like, there's other things that are important in the government besides just, you know, taking a sledgehammer or scalpel to different different parts of the government. So, this government getting stuff wrong. I think the most famous example is the initial Xbox controls 2022 were trying to control AI chips. Yeah.

And they use two parameters, technical parameters, which probably 99% of TC won't understand.

total processing performance, how many flops it got, and then also the interconnect bandwidth, how fast it can talk to other chips, and they messed something up there, then Nvidia could basically design a new chip sitting right at the threshold, but still having high flops, which is basically as good as the other chips.

And that's what Deep Seek used. And then it took them a damn year to fix it. And I can confidently say they knew a month later they should do better there. And that's just the thing where like techies, and you didn't need to be the smartest person. I've never trained a really big model.

You can just tell by the specs like, "Guys, something something is off here.

" and or even just ask difference even just asking like you know one of the foundation model labs like open AI I'm sure they could tell you like hey like do not restrict these three parameters um and and it just seems like there was not enough uh back to answer Nvidia doesn't answer these questions but I think yeah Nvidia wouldn't but I mean you could imagine that plenty of the labs that will compete with Chinese labs would love to answer that question right because they have a huge incentive not to be out competed by deepseek right y um and so you got to but maybe they talk their own book too much you know there's always dynamic ICS, but at least having a technologist in the room would be beneficial.

Anyway, this was fantastic. Thank you so much for hopping on. We'd love to have you back. More news. We will talk to you soon. Have a great weekend. Kiss you, too. Talk to you soon. Um, closing out the news for the week. We have Let's take it over to Tyler. How are the horse electrolytes?

Okay, so I haven't tried it yet. Scoop, I think. Uh, so it says a scoop. One scoop is for light exercise. Okay. Uh, so for I think probably a mediumsized horse, I assume. Okay. So, uh, it's I'm looking at the ingredients. It's there's an incredible amount. It's like all salt. It's all of course. Of course.

You have to at least chat GPT that you're not like taking a deadly dose of salt right now. I think it's probably fine. Probably fine. We'll go with probably fine. That's the horse mentality. Smells smells good. Does it have like a lemon lime taste to it? Is it apple? Oh. strong. It's like I'm drinking salt water. Yeah.

Amazing. I think you might have to water. The lemon. There's lemon in there. There's some There's definitely some flavor. I mean, it says apple flavor. I'm not really getting any apple notes. Yes. Okay. Well, congratulations. You did it in under an hour and a half. You did it well.

You have a whole summer to finish the whole bucket. Yeah. Wait, what was the final time? An hour and 15 minutes. I think hour 19. Hour 19. That's still way faster than Chad GBT clocked it. I I took a picture of that of that Anderal Lego set build time.

I said, "How long do you think it will take to put together this pretty simple 619 piece Lego set a beginner would take three to four hours. An intermediate builder 2 to three hours and an experienced LEGO fan 1. 5 to two hours. You did it better than they did.

" So, I think you were only wrong in your own you overestimated yourself, but you should always the average Lego builder. So, congrats for that. Enjoy the horse. I'm sweating a lot. I think this will help me, you know, bring me back to Yeah. Yeah. You you're you're sweating out the electrolytes. You need to refresh.

This makes a lot of sense. This is great. Everybody in tech is always talking about going to the horse doctor, but never talking about uh getting horse electrolytes over the counter. Yes. Yes. The horse electrolytes. That's where that's where you want to be taking.

Uh well, the scale the scale deal has officially closed. Met is paying 14. 3 billion to buy 49% of scale, the industry's leading data dealer. Alex Wang wrote us wrote a note to scale employees. He said, "When I founded scale in 2016, it was amidst some of the early AI breakthroughs. Deepmind had just released.

Alph Go and Google had just released TensorFlow. It was incredibly early. It was clear even then that data was the lifeblood of AI systems and that was the inspiration behind starting scale. Since then, the journey has been extra extraordinary.

We've grown to over 1500 people, become a trusted partner for model builders, enterprises, and governments building and deploying the smartest AI tools and applications. Scale is now one of the most impactful companies in the world.

He he closes by saying, "Today's investment also allows us to give back in recognition of your hard work and dedication to scale over the past several years.

The proceeds from Meta's investment will be distributed to those of you who are shareholders, invested equity holders while maintaining the opportunity to continue participating in our future growth as ongoing equity holders. The exceptional team here has been key to our success.

So, I'm thrilled to be able to return the favor with this meaningful liquidity distribution. So, congratulations to everyone. Let's give it up. Hit the gong for liquidity events. John Lee Marie, Eric Torrenberg, lots of winners. Lots of winners, lots of lots of friends of the show.

Um, uh, Ben Thompson broke it down in a piece uh, talking about Meta and Scale AI. The most obvious explanation for the structure is that Meta wants to avoid the sort of antitrust scrutiny that would attend to an acquisition. This explanation applies broadly.

Big tech generally and Meta specifically are under massive scrutiny in terms of acquisitions including Meta going to trial for acquisitions made over a decade ago and narrowly scale AAI is a supplier for not just Meta but also Meta's competitors.

The problem is that it's is is that just Meta isn't acquiring Scale AAI doesn't mean the deal can't be scrutinized. Indeed, section seven of the Clayton Act is explicit about covering only partial acquisitions. So the FTC can still review this, but it'll probably be a little bit easier.

And so that's what they're gunning for. Uh these guidelines were reaffirmed by the Trump administration earlier this year. So a lot of people were expecting it to be complete game on anyone can acquire anything. That hasn't been the case. Uh we heard about this on the campaign trail.

A lot of folks in the Trump administration were signaling that Lena Khan made some good points and they were not going to completely reverse course even if they were going to replace her. Uh even beyond that says Ben Thompson. However, and and perhaps this makes the regulators point.

It seems likely that this will kill scale scale AI's business with the big labs in particular who would be concerned about their data ending up in the hands of Meta, which is to say that this is a deal that would destroy the enterprise value of that Meta is theoretically investing in.

And so that will be something that will be debated in these merger guidelines when the FTC ultimately reviews it. But it's looking pretty good because they close pretty quickly.

uh as it is many of the labs particularly open AAI have been bringing in more of their data work inhouse which helped contribute to scale AI missing revenue targets last year from the information they reported this uh I mean they still put up amazing numbers and so obviously a lot of stuff was working but you could imagine that some of the labs were starting to bring these if they if they really are believers and hey we got to own our data collection and data processing forever let's build that function inhouse and I think they started to do that or at least partner with other competitors in the space.

And so Ben Thompson continues to write about Meta's reset. Uh in fact, the more pertinent angle to discuss this deal is probably Llama. Llama 4 was widely viewed as a disappointing model, and a big portion of the original Llama team has since left Meta.

Um and uh and Ben Thompson thought that Mark Zuckerberg's media tour about AI seemed a bit forced and unfocused and had heard through the grapevine that Zuckerberg was considering a wholesale reset of the company's AI efforts.

uh with the biggest priority being the search for a new AI chief to take firmer control over the company effort. In that uh that in the end may be the AAM's razor explanation for the deal.

This is a very expensive aqua hire Alexander Wang, Scale AI's co-founder and CEO with the price softened a bit by virtue of paying Scale AI for work that Meta was going to have the company do anyway. Wang isn't a researcher, but he's an executive and leader who is familiar with the space.

And Meta needs leadership in addition to talent. He's a deals guy. He's a deals guy. go to Meta and continue to be dealm and so uh Ben Thompson closes with a little bit on sustaining versus disruptive innovation.

The other reason to believe meta versus Google comes down to the difference between disruptive and sustaining innovations. The late professor Clayton Christensen described the difference uh and we're familiar with that.

So the question of whether generative AI is sustaining or disruptive innovation for Google remains uncertain.

two years after uh Ben Thompson raised the question obviously Google has tremendous AI capabilities both in terms of infrastructure and research and generative AI is a sustaining innovation for its display advertising business and its cloud business at the same time the long-term questions around search monetization remain as pertinent as ever and this is a question I want to debate with Ben when he comes on the show um there is this there there's this idea so it came out earlier that Cinder Pai had not read the innovator's dilemma and people were saying oh like he should have because this is the this is the classic example of Google being disrupted by a new technology, generative AI.

Uh Ben Thompson said it doesn't matter. The whole point of disruptive innovation is that is structural and therefore uh it doesn't matter if you've read the book, there's nothing you can do about it. That's kind of the point of disruptive innovation. You're being disrupted.

But my question is, what if Google had chat bots before Chat GBT? What if they had launched the Gemini app first and been the first to market and gained all of that like mimetic attention? Yeah.

Well, the disruptive innovation framework would say that Google would be in a tough spot because they wouldn't be able to monetize Gemini as quickly as they were losing volume in search. And so, search revenues would decline faster than they could make it back up on the new model.

And so, what we're seeing right now is JT GPT is growing. They're 10 billion in revenue. But I don't think we're seeing a fall-off in search. It seems additive.

And so the weirdest thing is that is that if you ran back the simulation and you did put Google in a position like let's just say Google owned 100% of chatbt like the the combined enterprise value would not be the stock would not be in the trash as they disrupted themselves.

And so there's this interesting question of like there's this fear that your earnings will drop as you pivot away from searchbased display advertising search advertising to a subscription chatbot model.

But I don't know if that's actually playing out because the overall market the combined revenues of Google and open AI seem to be not declining like we're not in this nadier. We're not in this in in this trough of like disruption necessarily.

Meta Meta and Facebook had the luxury of you know acquiring Instagram starting to monetize Instagram still being able to grow Facebook at that time. I mean maybe there's a little bit in deceleration.

And I remember when when when Meta was was like in the process of somewhat being disrupted by Tik Tok, they launched reels and there was questions about reels monetization because if people spend more time on reels than than the normal Instagram feed and there aren't as many ads in reels as there are in the feed, well then even though the user time might be going up, your revenue might be going down.

And so that was a fear that was something that was weighing on the stock. Ultimately, it was not borne out. But uh I don't I I'm just not seeing the data, but I want to dig into it more.

I want to talk to Ben about it because uh there's clearly I I I I I think I'm wrong but for diff for reasons I'm unaware of and so I'm I've I've been digging into that.

So um uh Ben goes on to write meta however does not have a search business to potentially disrupt and a whole host of ways to leverage generative AI across his business.

This is what I was talking about with Joe Weisenthal talking about even if Meta doesn't get like a banger AI first consumer product out into the market, they just have so many AI generative AI workloads, whether that's uh sorting ads, generating ads, internal profanity filtering, you know, all all their different content moderation.

Also, I think it's easy to see how meta will benefit from more in in 10 years from now, more content will be produced, which means better the more good content will be produced even if it looks like AI slop right now.

And when you think about the business holistically and meta as effectively, you know, an entertainment company, will people can want more and more entertainment? You know, this is what Ben Thompson was writing about. He said uh meta strategy is to commoditize your compliments.

You you you you a generative AI is a complement to meta's platforms. And so yes, if people are using generative AI to generate slob, that's still fine as long as people are enjoying it.

But uh ultimately people will use it as a different tool for video editing and sound generation and drop out the background noise and replace my background with a with a cinematic, you know, vista or something like that.

So um uh so for Zuckerberg and company Ben Thompson thinks AI is absolutely a sustaining technology which is why it ultimately makes sense to spend whatever is necessary to get the company moving in the right direction.

It's also worth noting that Meta doesn't really have any alternatives other than continuing to invest. Google is a competitor for advertising the most financially compelling use case today. OpenAI is a competitor for consumer attention. Meta's most important scarce resource.

Uh, I suppose Anthropic would be a potential partner, but per the point above, that seems like the ultimate culture culture clash.

If anything, the most compatible partners for Meta would be Microsoft and Apple, but the former is obviously tied up with OpenAI, and the latter, well, never say never forever, but definitely never for now. Great ending to Ben Thompson's update. That's right.

Well, we have a video that we can pull up uh that was that hit the timeline earlier this morning. Let's review it. Um, I was able to confront uh PT. Oh, yeah. And I think we should play. Saw him on the street. Let's play it. Saw him on the street. Is it true that competition is for losers, sir?

Is it true that you encourage young people to drop out of university? The background. Mr. Is it true that your funds have doubledigit GP commits? Sir, how can you say you're contrarian when everyone is contrarian? Mr. Teal, is it possible for there to be too much venture capital? Never. Mr.

Teal, have you been a victim of other VCs gaming the Midas list? Absolutely. Sir, why do you refuse to pick a favorite Rene Gerard book? Mr. Teal, what do you have to say to people that think they're a lottery ticket? Mr. Teal, must all venture returns be power law distributed?

Sir, why do you think why questions are overdetermined? Oh man. Well, really asking the pressing important questions, the questions that, you know, he clearly doesn't want to answer.

Um, but uh I'm sure over time, you know, we'll we'll uh we'll have him on the show and we'll try to we'll try to pull some of those those answers out. Lots of fun. Uh, I mean the last story we should cover before we drop off, uh, Chinese AI companies dodge US chip curbs by flying suitcases of hard drives abroad.

Leonard on the show mentioned this. Engineers are carrying data to countries where Nvidia chips are available. Frustrating Washington's aims. No one really anticipated this, but uh, apparently Chinese engineers are transporting hard drives with AI training data to Malaysia.

So they they they they take all their training data, they load it onto a bunch of hard drives, throw those in suitcases, fly to Malaysia, and then do the training run there, and then take the weights back. And so that's like the chips aren't even moving.

Like the whole idea of of exporting chips is is now irrelevant if you can do the run locally in a different country. So this this AI diffusion story is going to become so much more complex.

It's like it's just going to be so complicated forever because there's going to be shell companies and if people are willing to do this, anything's on the table. The next Deep Seek Run is going to be uh it's going to be a movie. It's going to be a movie. Anyway, we will be following it here.

Uh we hope you have a fantastic weekend and we will see you on Monday. I cannot wait, John. Cannot wait for Monday. Be back at this table. Two more sleeps. Two more sleeps. Two more sleeps or three. We're not going live on on Sunday. We might we might we might if there's if there's emergency we might go live.

You never know. Uh but until then thank you for tuning in folks.