Will Brown breaks down Meta's AI talent raid and what Zuckerberg's Superintelligence team actually means
Jun 30, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Will Brown
content. Um, Killers of the Flower Moon, Oscars Best Picture winner CODA, literary adaptation Pachinko, and the spy drama Slow Horses. Yeah, I tried to watch Slow Horses. It was pretty good.
Um, its few mainstream successes include sports comedy Ted Lasso, sci-fi workplace drama Severance, and straight-to-streaming film The Gorge. Uh, "We're into entertainment to tell great stories and we want it to be a great business as well," says Tim Cook.
He told Variety. An Apple TV Plus spokeswoman declined to comment. Analysts have questioned why Apple spends billions annually on a business so far afield from its core consumer electronics operation, even if it's barely a blip for a company worth more than three trillion.
"The strategic value it brings is sufficiently mysterious for people not to talk about it very much," says Craig Moffett at MoffettNathanson, a good buddy of Ben Thompson's. Uh, Apple TV Plus costs $10 a month.
And yeah, so a lot of people think that the Apple TV Plus production effort is there to take pressure off of the 30% Apple tax discussion. Anyway, we have our first guest of the show. Will is in the studio. Welcome back to TBPN. How are you doing? How's it going? Congratulations on the trade deal.
Thanks for having me back. Yeah. What supercar did you buy? Did you go Bugatti Chiron or did you go F80? I mean, we hear that the numbers are huge right now for folks like you. I mean, I'm taking a plane, so... that was a lot of luck. Yeah.
People say, you know, we wanted flying cars, we got... The fact that you would imply that Will drives himself is offensive. Offensive. Um, I mean, yeah, a lot of Waymo. I took a Waymo for the first time recently.
Like, I'd been to SF, but I hadn't been around SF proper that much since the Waymo craze took off. And so that's been the car experience, really. Just self-driving cars. Damn. Yeah. So, are you in SF full-time now? Uh, still back and forth with New York.
I'm kind of slowly starting to spend more time in SF. It's on brand. You're decentralized. You're decentralizing AI. It's a perfect fit. Perfect fit. Uh, anyway, did you touch grass at all this weekend or were you locked into the timeline? A lot of stuff was happening on the timeline.
I mean, I'm in New York right now. I did some New York stuff uh on Saturday, but yeah, it's been good. Lots of uh drama, intrigue as usual. Lots of drama. Um, the trade deals. I've been tuning in; I know you guys have gone over all the major roster updates.
Um, but yeah, it's exciting times. OpenAI, open model soon hopefully, but not this week. Um, so yeah, the first place I want to start is the actual trade deals. Do any of these people jump out to you as particularly interesting? Have they been on your radar? Have you met any of these folks?
Can you give us more context on how academic they are? How product-oriented are they? It feels like this is purely research, you know, a beefing up of the research team, and Meta kind of has products solved. Is that the right model to think about this?
How can you characterize the shape of the team? Even, um, there's thousands of really great software engineers that work in AI. Sure. But, you know, even getting a sense of how many people was Zuck really going after. Was it 250? Was it a hundred?
The list that came out today is, you know, somewhere, I don't know, there's like 20 people on it, something like that. Right. Yeah. Yeah. I mean, it's a great list.
It's a lot of very senior researchers who have been through some real product cycles. I think the goal is people who can bridge the gap from research to product.
Um, because one thing Meta has a strength in, but that I think doesn't translate in this era, is they have a lot of really good academic researchers. Like, Yann LeCun's whole org has a lot of very talented, very capable researchers who write important papers, but it's the kind of stuff that doesn't really turn into exciting products.
It's more betting five or 10 years out on how we want to do things eventually, versus going through a pre-training cycle for a Gemini or a Claude or an OpenAI model, which has a different set of considerations.
And so this to me looks like they want 10 to 15 great people who have been through those research-to-product cycles, in terms of the model being the product.
Um, and it seems like they got uh Alex, um, and then supposedly Nat and/or Daniel uh to handle a little more of the, not the business side, but the product-y business stuff. Yeah. It's a management role. Yeah. Yeah. What is your take on Yann LeCun right now?
How should we think about his position in the AI world? You know, I see these memes of, like, he's a non-believer. He doesn't believe in superintelligence and, like, AI god. He's a little more practical.
It seems like he's been right about some things, in the sense that we're not feeling a fast takeoff right now. So I think you've got to give him some credit for that.
At the same time, Ben Thompson's been criticizing LeCun a little bit for not driving the product side of the business.
It's one thing to say, hey, we're not going to go straight shot to AGI god. But the downstream implication of that is, if you believe we're going to be living in a B2B SaaS world of AI implementation and productization, then go do that, and make sure the models work really well for business. So, uh, I don't know, what is your take? How would you describe Yann LeCun and his history, and kind of where he sits in the organization, to a layman? Sure.
Yeah. So, I think people overindex on thinking of Yann as the Meta guy too much. Sure. Um, I think it's a shame that they didn't necessarily have someone in the Llama org who was as visible as him, because he was not involved with Llama at all.
Like, FAIR fundamentally is a whole other research group; they mostly write papers and do very academic work. Yann LeCun's work is very academic. Um, and I think on one hand he is right, in the sense that we don't have the full picture yet.
Like, we don't have full-on AGI that can do a human's job forever. And I think that's kind of the drum he's beating: there's more work to do. There's more science to do. It's not just productize the current set of techniques and call it the end of AI forever.
Um, but his goal is not to do the productization. The org he's in at Meta is really isolated from product. Um, he's there to kind of boost the reputation of the org academically, as well as to potentially advise on other stuff that's more product-focused.
But his job is not to go train a model that is going to be used by businesses.
And I think the people at FAIR, the friends I have who are there, are there because they essentially wanted to be professors, but they want tech company resources and to not have to teach. And that is essentially what that org is. Yeah. Does that lead to better recruiting on campus, essentially? Like, you can go get researchers because Yann LeCun can come do a really powerful lecture? Maybe, but I think of it more as the same way that Zuck did metaverse stuff.
Zuck is very willing to make 10-year bets. Yeah. Um, and so that org for Meta is not about what product they're releasing next quarter. It's: what are our 10-year bets? Yeah. Um, okay. Uh, I want... Sorry. Yeah, you can. So Meta shares just hit an all-time high. Really? So the stock's uh performing.
I'm curious, um, from your time at Morgan Stanley... Thank you for that, John. That is incredible. But generally, up 5% over the last week. Um, how did Wall Street generally sort of process the Meta AI story? Because now, obviously, at least the market seems generally excited.
Uh, it may get to the point where we did with VR, where people are like, "Hey, whoa, whoa, whoa. Cool it on the metaverse," you know.
Well, it's funny, because you add all this up, and even if the hundred-million-dollar offer number is real, it's like, okay, so now we're at like $1 billion in spend. That's 5% of the metaverse money hole they were burning through.
And from a Wall Street perspective, you're like, I don't care about $1 billion. I care about $20 billion, because that was what was weighing on the stock during the metaverse buildout. But AI is so much more productizable.
You can make so much more money from it upfront. And even though a $100 million offer is a big number, it feels like the spend is maybe less. But I don't know. What do you think, Will? Right. Yeah. I mean, I don't know the context fully of the hundred million.
I know people have all these theories, but the Scale AI numbers we do know. Yeah. And that's a big chunk; I don't know, they don't do those every day. Yeah. Um, but they've done a handful in the past at that scale. Like, WhatsApp was 20 billion or something like that.
Um, and it's hard to read into the charts. Yeah, I think people are generally expecting that Meta will take AI seriously and are kind of happy to see change. Um, whether that is justified or not, we'll see. Um, yeah, it was interesting.
The product side is going to be interesting, because they're not a B2B SaaS company. Yeah. Well, so content, they're an entertainment company. And I don't think that people fully understand the potential of gen AI around entertainment.
It gets talked about a lot as, oh, you're going to be able to generate, you know, an entire movie, or generate video games, or things like that.
And I think we've seen some fantastic examples so far, but nothing that is... I think, like, George Hotz was on our show, and he was talking about how basically AI is going to be like having five CIA agents follow you around all the time convincing you to buy products. And that is one kind of dark bull case for Meta in the context of AI. Because it's possible that Facebook is already the best advertising product in human history.
Like, period, hands down, there's nothing better. And then could you make it two, three times better? It's very possible. Crazy. Yeah.
I want to push back on the B2B thing. Because yes, they don't sell B2B software, but I was thinking of it in these terms: if you were running a company with only one client, and that client was Meta, and you sold them CRM and infrastructure and, uh, LLMs to improve the back office and do, you know, censorship and reality checking and looking for bad words and improper posts, and recommendation algorithms, how much would that company be worth? How much revenue would they be making just from that one client? And I think it's in the billions, because it's just such a large organization. The B2B applications of AI inside of Meta alone are potentially a multi-billion-dollar cost center or revenue driver or something like that.
I don't know. I think a company at Meta scale has already rewritten all that stuff themselves, for the most part. Like, they probably have some services, yeah, that they're using. But could they rebuild their own cloud? Because they use AWS for a lot of stuff. Yeah. Yeah.
Um, they're not a compute company, though, in terms of platform stuff. Yeah. I feel like with those companies, once you hit a certain size and you have so many engineers, you want to own everything. Like, Google rewrites all their own stuff. Yeah. Um, Microsoft rewrites all their own stuff.
Um, and is there another tail end of that which has not been done yet, that they can squeeze some more money out of? Sure. Um, entertainment will be interesting. So, one thing I was just reminded of, um, have you seen the Italian brainrot videos? Oh, yeah. Love them. Yeah. Yeah.
I know it's silly, it's stupid, but there is something there, in terms of this communal narrative storytelling with a level of vibrancy that we haven't seen so far. It's like Looney Tunes.
Um, but people just come up with these, where you could imagine Meta kind of eating a good chunk of the video-gen market, um, if they have an answer to Veo 3, where that becomes part of the platform. It ties into the whole metaverse thing of creating stories and sharing those stories with your friends.
Um, but instead of Instagram stories, it's like, okay, what's the evolution of stories? Yeah. Have you seen Higgsfield? No. Higgsfield AI. I think it's an ex-Snapchat team. Yeah.
Their new image model is basically already at a point where it's creating images that are photorealistic, but photorealistic in a way that looks like something a full-time influencer generated after spending hours trying to create it, and you can't tell that it's not real.
And so it feels like Meta could integrate that kind of thing. It'll be interesting to see how this actually plays out, because everybody will be able to be a super tasteful creator, or generate these sort of unique styles. And I'm sure at that point it'll switch.
Everybody will be like, "Well, I only follow organic farm-to-table influencers." Who knows? Yeah. Yeah. I feel like their uh AI integrations so far have all been kind of weird.
Like, there's these Instagram accounts you can chat with, and then there was the whole uh Meta AI homepage disaster, where old people were leaking personal info without realizing it. But yeah, I guess Facebook's always had that.
Um, yeah, I think they'll have to thread the needle on product in a way that... they'll have to get creative, because it's been a while since they had a homegrown product innovation that really stuck. Yeah, it is interesting that they haven't tried to create a Studio Ghibli moment. I was saying this: they should just pre-render everyone's profile picture as a Ghibli. And when you open the Instagram app, it's just like, here, we did this for you.
It would be incredibly expensive from an inference cost standpoint, but then you could share it, and there's just more virality around it. But I don't know. And even baking that down into a filter. No, it'd be so OpenAI-branded now. They have to go... Yeah, I'm saying, like, an entire something like that. But yeah.
Yeah. Basically, the filters in the Instagram app should be style transfer, or fully generative, instead of just color grades. That's clearly the next thing, and they should just do that. Um, but what do you think of uh Sam Altman's ability to get through this?
Like, he's had a series of exoduses and seemed to continue to march on: the Anthropic departures, then SSI and Thinking Machines, like xAI. It's not his first rodeo. And so there's this whole narrative of never bet against Zuck: he was down on the metaverse and came back; he was down on mobile with the HTML5 thing and came back from that, went native, bought Instagram and WhatsApp, super dominant in mobile.
Uh, and so never bet against Zuck, but then also maybe never bet against Sam. And maybe they both win, and maybe the real loser is, I don't know, some other company or something. I don't know.
I think as long as OpenAI doesn't totally drop the ball on product or research, they have the center of gravity for the AI world. In the same way that no matter what Android ships, people are not going to switch from their iPhones, something would have to go very wrong for people to not see OpenAI as the winner, I think, or as kind of the default.
Um, yeah. I mean, the other thing is, uh, the average ChatGPT user is not waking up today being like, "Oh, I can't believe OpenAI lost a handful of its top researchers." Right? They're just still using ChatGPT as a Google replacement, or they're talking with it as a companion.
So yeah, I think the lead is still very, very real. And again, it's going to create opportunities for people to say, hey, I want to go work on this product that hundreds of millions, and soon billions, of people use every single day. Right.
And I do think it'll get to a point, especially through 2025 into '26... I think we're early on products, and we'll keep getting better versions of the same things in a way that's kind of predictable.
Like the image models will keep getting better, the agents will mess up less, but like we can already use those things and we can start building proof of concepts around them. And the question is like okay, what are the winning apps?
Um, like the real-time Cluely kind of thing: OpenAI is going to do their version of that at some point. Um, other people probably will too. Does that become a modality that people really want, like a live overview uh on their screen, on their phone?
Like, what's the way that people are going to be interacting with AI generally? Um, because for a lot of those use cases, the models are already smart enough. It's not about making the models way smarter at whatever. It's: how do you make the model useful, or fun, engaging?
There's a bunch of paths here, going back to the Cluely thing, where one outcome is Roy is right about the UX and wins the market, and the other option is he's right about the UX but doesn't win the market. And then there's, you know, it's just not the right form factor or whatever.
But at least the first two outcomes are hilarious, in that Roy Lee ushers in the new paradigm for engaging with language models.
I'm curious, um, all the stuff around every B2B SaaS player descending on the sort of single interface, like a chat interface that generates software: was that predictable to you? Do you think that's part of a multi-year trend, or is that just FOMO? I mean, Copilot was early.
It was like 2020, '21 when the first GPT-3 Copilot came out, and that was already one of the early LLM applications people were interested in at all. And then it took until Cursor for it to really... I think Cursor plus 3.5 Sonnet was when it became a thing that was good enough that people were excited about it.
Um, and it really ushered in the trend, because people were starting to find it more useful than a toy, like a thing they actually want to use day-to-day.
Um, and so I think that's one path. And then the more background-agent kind of things are starting to take off now, which I imagine will get reliable enough that they're useful for cranking stuff out. Um, they already are, kind of, depending on what you're doing. Yeah.
Like, I think, yeah, we're about halfway through the year. What are kind of the big moments that you're tracking? The OpenAI open-source model could be one. Whatever Meta launches next, right?
I'm assuming they're going to be dark until this new team can really cook and bring something great. Maybe they don't do anything this year, but I would assume they come with something. What else are you tracking?
Yeah, I mean, for me the o3 release in ChatGPT was a pretty game-changing kind of thing. We saw with deep research that, okay, they kind of figured out how to make agents work, but it was also just this one version of an agent. And with o3 you can kind of get it to be a pretty general agent, where it can do some pretty complex stuff. That was kind of new to see. Like, the GeoGuessr thing was crazy.
Um, and having that as a vision of what AGI starts to look like, I think, is pretty cool. Of course, from the research and open-source world, there was the DeepSeek moment and the RL craze taking off.
But, I mean, I work on RL, so I obsess over it and think about it a lot. I do think we're really starting to see these recipes, at least in the broad strokes, of okay, here's how the LLM thing can go. We figure out what we want it to do.
We give it some tools, we set up these environments, we figure out how to evaluate it, and then we can just kind of let it go, and these things get better at doing those things via trial and error.
Um, and so I think that is one way to forecast where things are going: what are the plausible use cases that people want to use LLMs for? They want an agent to do XYZ. Um, and then how do you make this a thing that you can hill-climb?
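The recipe he sketches (pick a task, give the model an environment, define a verifiable reward, then improve via trial and error) can be shown as a toy loop. Everything here is a hypothetical stand-in, not any real RL library's API: the "environment" is arithmetic, the "policy" is a noisy guesser, and the "update" just reduces noise on success.

```python
import random

def reward(task, answer):
    """Verifiable reward: 1.0 if the answer is exactly correct, else 0.0."""
    return 1.0 if answer == task["target"] else 0.0

def policy(task, temperature):
    """Stand-in for an LLM: guesses a sum, with noise controlled by temperature."""
    return task["a"] + task["b"] + random.choice(range(-temperature, temperature + 1))

def train(tasks, steps=200):
    """Crude trial-and-error loop: make the policy less noisy whenever a rollout succeeds."""
    temperature = 5
    for _ in range(steps):
        task = random.choice(tasks)
        rollouts = [policy(task, temperature) for _ in range(8)]
        rewards = [reward(task, r) for r in rollouts]
        # "Policy update" stand-in: on any success, shrink the noise.
        if max(rewards) > 0 and temperature > 0:
            temperature -= 1
    return temperature

tasks = [{"a": a, "b": b, "target": a + b} for a in range(5) for b in range(5)]
random.seed(0)
final_temp = train(tasks)
print(final_temp)  # noise collapses toward 0 as the loop "learns"
```

A real setup would swap the stand-in policy for an LLM generating rollouts and the temperature tweak for a policy-gradient update (GRPO, PPO, etc.), but the shape of the loop, sample, score with a verifiable reward, update, is the same hill-climbing he describes.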
And I think you see a lot... like, there's been a lot of news about OpenAI trying to sell their RFT service, reinforcement fine-tuning. There are these startups popping up around it. Okay. Yeah. Um, and so they're pushing this pretty hard.
If you're spending over $10 million, like Palantir, uh, OpenAI will also customize a model for you. So they have a serious, heavy-paying-customer tier, but they also have a more-in-the-thousands-of-dollars service. Sure. Where you essentially get to do fine-tuning on o4-mini. Yep.
Um, and that is a little more self-serve. Um, but they're also doing consulting around that, the forward-deployed thing. Is that an important strategy for OpenAI to kind of stay in the game against open-source models like Llama? Yeah. But I think also against others, like Thinking Machines.
There was some report that this is a version of the strategy they're taking: go to enterprises, talk to them about their problems, uh, turn these problems into things where you can create really good customized agent models. Mhm.
And that's one potential roadmap toward having AI everywhere: having services whose job is to come in, whether it involves fine-tuning or not, and turn these into agent-shaped tasks, where you can then optimize the model and have someone craft the model experience for you. Because I think a lot of enterprises are still in a spot where, as you see from the talent war, there are just not that many people in the world, or in the market, who really understand this stuff at a level where they can go make it happen. And so with the open-source thing, on one hand it's cool that you in theory can go do it, but there are also not that many hands equipped to actually go make the thing happen, actually fine-tune Llama 4 or whatever. Yeah, we actually talked to a startup that's doing basically that: small models for specific business use cases. Like, okay, you just have a ton of CSVs that need to be turned into JSON, and we build you an LLM that just does that, or whatever.
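The CSV-to-JSON example is exactly the kind of "agent-shaped task" with a cheap deterministic reference implementation, which is what makes it easy to grade a small fine-tuned model. A minimal sketch (the function name and sample data are illustrative, not from any mentioned startup):

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Reference converter: CSV with a header row -> JSON list of records.
    A deterministic 'gold' function like this doubles as a verifiable check
    for a small model fine-tuned on the same task."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

sample = "name,qty\nwidget,2\ngadget,5\n"
print(csv_to_json(sample))  # [{"name": "widget", "qty": "2"}, {"name": "gadget", "qty": "5"}]
```

Grading a model's output against a gold function like this is just parse-and-compare, which is the verifiable-reward shape these domain-specific tasks need.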
Um, interesting. After the Scale news, there's a bunch of players popping up, you know, everyone from uh Mercor to Labelbox uh to Handshake, a lot of different people competing for that market. How do you see that market evolving over the next one to two years?
I think there were certain people saying Scale got out at the perfect time in many ways. Um, but clearly there's a lot of at least gross revenue up for grabs right now in the near term. Yeah.
I mean, I think the broader sphere of creating stuff to train models on, with humans in the loop to do that curation, is going to be important, especially for these uh domain-specific applications.
Um, I think it's going to be less, oh, we just need more tokens, and more about needing uh curation of goals and objectives. Um, because with tokens, you hit diminishing returns pretty quickly in terms of just more internet text or more human-written math solution examples.
Um, and it doesn't scale super well. But the nice thing about task specification is you can kind of pour in more compute without necessarily needing more data. Um, it scales much better with compute.
Um, whereas we don't really have a great way of spending 100x more compute per pre-training token, other than making a giant model. Yeah. Do you think uh Llama will stay open source for the foreseeable future? We'll see.
It wouldn't surprise me if they go kind of the Google route, where they still do open source.
They still have the Llama brand, but it isn't their flagship thing. They start, whether it's for internal products or because they really want to show that they're the winner, um, releasing closed models uh via API or other services. Yeah. Do they have the infrastructure to serve a closed model?
I mean, you mentioned that they use AWS, so it'd be kind of awkward, right? Yeah. I mean, I don't know that they really want to that much. I think it'd be something that comes in a product. Sure. Yeah, that makes sense. Or maybe they partner with someone. You could see them partnering with someone like a CoreWeave.
Yeah. Um, or Nvidia directly. Um, Nvidia's hoping to get into the inference-serving game, it sounds like. Um, yeah.
Do you think, uh, so one of swyx's takeaways from the Wired article was that, um, one big development that's coming is when Stargate comes online, OpenAI will have the largest single cluster for pre-training. But it feels like we might be at the end of that game, and we might be doing more compute-intensive work in a distributed fashion.
And so maybe having it all in one place is a little less relevant to just, you know, oh, trump card, Stargate, boom, I win, I have the best model by far. Um, how do you think about the impact of Stargate on the AGI race?
I mean, I think the biggest experiment that they're definitely going to do is take a pretty big model, I don't know if it'll be quite as big as a 4.5, but a big model, and do way more RL on it than anyone's done. Like, what does an o5 level of RL look like? Okay, see how good that is.
See what new things that unlocks. Do you do that there, though, or could you do that just across a bunch of data centers? Because I've heard a lot of RL is, you're doing verifiable rewards, you're generating in a bunch of different data centers.
You could do it completely distributed, and you don't necessarily need a Stargate to do that. You can do it distributed. It is very inference-heavy, but there is still a lot of weight updating. You have to sync the model across; you have to keep sending the training model's weights to your inference workers. Okay.
Um, and so having it colocated certainly makes it easier. It is easier to do in a distributed way, if you want to, versus pre-training. Um, but it's not trivial.
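The trainer/inference-worker split he's describing can be sketched in miniature: the trainer owns the latest weights, workers generate rollouts with a possibly stale copy, and a periodic sync ships new weights over. All class and field names here are illustrative; real systems broadcast tensors over NCCL or RDMA, not Python dicts.

```python
class Trainer:
    def __init__(self):
        self.weights = {"version": 0}

    def update(self):
        # Gradient step stand-in: just bump the weight version.
        self.weights = {"version": self.weights["version"] + 1}

class InferenceWorker:
    def __init__(self):
        self.weights = {"version": 0}

    def generate_rollouts(self, n):
        # Rollouts are tagged with the weight version that produced them.
        return [{"policy_version": self.weights["version"]} for _ in range(n)]

def train_loop(steps, sync_every):
    trainer = Trainer()
    workers = [InferenceWorker() for _ in range(4)]
    for step in range(1, steps + 1):
        rollouts = [r for w in workers for r in w.generate_rollouts(2)]
        trainer.update()  # learn from rollouts (stand-in)
        if step % sync_every == 0:
            # The expensive part at scale: shipping fresh weights to every worker.
            for w in workers:
                w.weights = dict(trainer.weights)
    return trainer, workers

trainer, workers = train_loop(steps=10, sync_every=5)
print(trainer.weights["version"], workers[0].weights["version"])  # 10 10
```

Between syncs the workers run a few versions behind the trainer, which is exactly the off-policy staleness, and the bandwidth cost of the sync, that makes colocated hardware easier even though the rollout generation itself parallelizes well.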
Um, but I think a lot of this is just... people are building the data centers without necessarily knowing exactly what experiments they're going to run. They just know more stuff is coming.
But also, a lot of this is going to be inference. Like, Meta has a ton of GPUs they just use for Instagram ads. Yeah. Um, recommendations. Um, a lot of Stargate will probably be serving Ghibli. I mean, that makes a ton of sense.
Like, I'm running into rate limits on literally everything. Like, Google: how do they run out of TPUs? They spend 60 billion on capex every year, at least. Um, and o3 Pro too. I mean, I get timeouts on this stuff. Last question.
Uh, I'm guessing you weren't surprised to see OpenAI leveraging the TPU for inference, or reported to be starting to use it. Was that kind of... I mean, it makes sense that they are considering all options