AlphaFold creator John Jumper on turning a 50-year biology challenge into a 2-minute AI computation

Jun 16, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring John Jumper

go to OpenAI or Claude and say, "ChatGPT, make me a nine-figure ARR business." When it says, "Well, I'm actually just a language model," just start threatening it and, you know, see for yourself. So, it doesn't quite work that way. Anyway, our first guest is here. We have John Jumper from Google DeepMind.

Welcome to the show, John. How are you doing? Boom. Welcome. Doing well. Uh, nice to meet with Sergey. Yes. Yeah. Uh, it was just completely random, but this silly article that looks straight out of one of those clickbait websites that's just farming attention constantly.

Um, but I mean, I'm sure we could go into this, but there's so much more interesting stuff to talk about in the DeepMind world. Uh, first, would you mind giving us an update? Where are you today? What's going on? Oh, I'm at the, uh, Y Combinator AI Startup School. So, this is a pretty cool event.

They've brought in a bunch of people with startups or looking to found them, and they have a great speakers list, and, you know, it's just kind of a fun place to be. And so I'm backstage. I don't know if you can hear it. Oh yeah. Yeah. We can definitely hear the plant and everything else. It's a little bit of ambiance.

Fantastic. Are you giving a talk? Could you give us a little bit of background on you to kind of set the stage for the discussion? Yeah. Yeah. So I'm giving a talk.

I think I'm, well, I'm very certainly best known for work on AlphaFold, and really this is doing work in AI for science, trying to solve, or really predict, the results of really, really hard scientific experiments and do them with AI. And we've been quite successful, hence I'm here.

Uh, so the system we've built: you know, there's this experiment that biologists do to understand how the body works. It's really important to drug development, really important to a lot of other things. To give you an idea of the difficulty, it takes something like a year to do, and if you think of it in cost, it's something like $100,000. And we have an AI system that we've trained that will do it in a couple of minutes, to near-experimental accuracy, and it's openly available. And so we see it used for everything from kind of designing vaccines to finding missing aspects of our own biology to everything else.

And I think it's also this kind of symbol of the promise that AI is going to solve these really hard problems that humans can't, right? There's a lot of really exciting work on how do we do things that are, you know, really impressive examples of human capability, be it writing, be it making images. But there's this other aspect of how do we use AI to solve these really, really hard problems where, if we want to solve them, we go do a year of experimental work. And that's what we at the science group at DeepMind work on. We try to say, how are we going to solve these really, really hard problems with AI? And sometimes it works spectacularly well.

Can you tell me about the initial project spec for AlphaFold? Was protein folding the most obvious choice, or were there other targets that you thought were appropriate for deep learning? Um, and how did that initial project kick off?

Was it, let's go collect all the data, or let's just start experimenting on, like, synthetic data? How did all this play out in the early stages? So one thing to say is this problem had kind of stood as a crown jewel of hard problems in science for a long time.

So something like 50 years people had been trying to build some computer system, any way you could. And I think it was kind of obvious also that this felt like a problem that AI could do something about. One answer was data.

That, in fact, at enormous effort and forethought and expense, scientists worldwide had basically put every structure ever solved into what's called the Protein Data Bank. And so for about 50 years people had collected this data set. At the time we were doing our work, it was about 140,000 structures.

But it represented essentially everything. Everyone had the same data. There were some thoughts, I think, early on about, oh well, maybe this is a good problem for optimization and combinatorial search.

That didn't really turn out to be how we solved it, but there was this thought that maybe it's kind of well teed up for the methods that were looking incredibly powerful, and are, you know, incredibly powerful, in terms of AlphaGo and other search-style methodology.

And then I think there was also actually some grassroots effort, where there was a local group of people that were going into a hackathon at the company, kind of do whatever you want for a week, and literally Googling grand challenges in biology.

And so that was the other way that this got started, and, you know, it was on the list, and all of these kind of came together. And I think what it turned out to be is we didn't at all solve it the way we thought we were going to, and there were many blind alleys.

So, we got into it possibly for some right reasons, some wrong reasons, but we stayed with it for a long time. Talk to me about the state-of-the-art of protein folding uh beforehand. You mentioned like a year and $100,000.

Um, but what did that look like in terms of the actual problem of, like, defining the structure of a protein based on a sequence of DNA? Um, is this, you know, an undergrad or a graduate student kind of pipetting stuff and putting it in a centrifuge?

Like, I've seen some biolab work, but try and walk me through what it actually looks like. Is it just using a different machine and pushing a button? What's driving the cost? Is it labor? Is it, you know, reagents and equipment and expendable things? Like, what was the state-of-the-art at the time?

So it's a very good question, and underlying it is a false premise. Okay. You have this premise that there is a series of steps. Yeah. That you're going to execute, and, you know, maybe it's a year long, but in January I'm going to do this, at the end of January I'm going to succeed at that.

I'm going to go do the next. Yeah. And, you know, you can think of it the same way: training an AI system isn't just, I want an AI system, let me press the AI button and then go get coffee. It's really about research and iteration. And so, for a scientist going to solve the structure of a new protein.

There's this huge number of steps, and, like, the very first thing they have to do is make enough pure protein to study this thing. Mhm.

And I, you know, I did my PhD in biophysics, and I remember sitting in a lab meeting where someone had for six months talked about how they couldn't even make their protein to get started on their experiments.

And I remember saying, if you talk about making protein in one more lab meeting, I'm going to start talking about my compiler errors in the next meeting. But just, like, building and doing the work to get enough product to get started is enormous.

And then you have to convince it to form this very regular crystal structure that is not at all natural. And no one really knows how to do this. They just have some ideas that may or may not work.

And so they try many, many combinations. And, you know, one that really brought home the difficulty for me is one paper. I can't remember if we trained on it or evaluated on it; I think we evaluated. And I looked in the appendix of the paper, and it said, after more than a year, crystals began to grow.

You have to make crystals to solve a protein structure. And so they were literally trying things, and probably that whole year they were trying things, just to look in their cabinet and find out something from a year ago worked. And then after you do that, you go to a very large synchrotron, and there are many other steps.

But I would say the real answer is there are wide error bars and enormous amounts of experimentation and cleverness, and this is why one or a couple of protein structures can be a PhD. You can be a doctor at the end of this. Mhm. Uh, talk to me about the timeline between... that kind of... Yeah. Yeah.

Talk to me about the timeline between AlphaGo and AlphaFold. Um, and the lessons from the Lee Sedol match, the move 37 moment, and just the different paradigms. I mean, I remember, like, part of the beauty of Go is that it's this very defined system that can be simulated at an extremely high speed.

Um, it's, like, kind of a prime environment for this reinforcement learning strategy. What was taken? What was different? Um, and how did those timelines match up? So, already coming out of Lee Sedol, at a kind of company level, Demis level, people were getting interested in this question.

I can't remember exactly; I think it was 2016 when the work really started. I actually joined slightly after that. Sure.

The early work was in kind of, can we use reinforcement learning, and these models, these energy functions, where you're trying to treat this as a minimization problem, just a really, really hard, clever minimization problem. And I remember, even when I came in, it was kind of... I come from a protein background, and there's a way in which this is obviously not the right idea, in that we don't know the full, you know, God's energy function for proteins.

We don't know this thing that, if you just minimize it, we're sure that's the right answer. At least if we do, it's quantum mechanics, which we're not solving.

And in fact, a big problem is that really you needed to solve the kind of, how do we optimize this, and what are the rules we're optimizing under, at the same time? And it turned out to be actually much more data-driven, supervised learning.

And what really started to make the difference is when we sat down and, you know... The state-of-the-art before was in using convolutional neural networks, a certain design. You can also try transformers; they're not really much better for this. But there were a lot of people trying to take machine learning off the shelf and say, I'll just apply it, this is an application problem. And I think what we did really, really well, especially in AlphaFold 2, the one that really worked, is that we said, no, no, no, this is a machine learning research problem.

How are we going to rebuild our kind of core components? How do we get inspired by the transformer but build something different in order to make a system that learned really well?

And we can show, in fact... like, an external group did a very careful experiment where they took AlphaFold 2 and trained it on 1% of the available data, and they found it more accurate than our AlphaFold 1 system. So even between one and two. Wow. You can see what is a hundredfold in data efficiency.

And so in a certain sense, all we did was a lot of machine learning engineering. Yeah. That got us a hundredfold better data efficiency than we used to have. And in fact, we used the exact same data as everyone else, the data we had used for AlphaFold 1. And, oh, we're having some trouble.

Really in these core ideas that enabled it to learn much more. Yeah.

I think all of these... I think the ideas evolved, and still, you know, there's room to play, and there's all these different problems. But it did kind of presage how important learning directly from data has become, in terms of this, and then RL to optimize performance and achieve other objectives. Okay.

Um, I want to talk about the reaction to AlphaFold in the public markets. I know you're not a public markets analyst, but I was very surprised that, you know, this 50-year-old problem, this incredibly tough, you know, millennium-challenge-level problem is solved.

And I didn't see biotech stocks, like, really pop. And you'd think that if you took this cost and this uncertainty out of the system, you would see benefits. And we're certainly seeing all sorts of companies tell stories about, oh, we're using large language models all over the place, and their stocks pop.

And so, um, was that in your mind, like, a misunderstanding of the impact of this technology, or was it more that, hey, this is just one step down the path to actually increasing the output of biotechnology companies? I think it's a very interesting question.

I should say I'm not an equity analyst. And actually, after this podcast, you should go back and do the same experiment for CRISPR, which was undoubtedly a big moment in biology, and I would be shocked if you saw a pop when the CRISPR paper hit.

So there actually was a major pop during CRISPR, but it was more about, like, companies that were directly linked to it. Uh, so there were a number of companies that were trading on the back of, we will be commercializing CRISPR technology specifically.

There weren't as many pure-play AlphaFold companies, if that makes sense. And so I don't think there was as much of a vehicle, and some of these biotech companies, like, they're just not traded on a cost basis. So maybe that was what was going on.

Um, it's like, yeah, okay, you're going to save a million dollars next year on protein folding. I think you shouldn't think of it too narrowly, and you want to be really careful.

So if you think about it from the $100,000 point of view, um, then it should already be immediately obvious that a single protein structure wasn't the gap to a drug. A drug is about a billion dollars in total R&D. Yeah.

And so you can already see that there are many orders of magnitude there. But I think there are two more things about AlphaFold that are really important.

One is, and especially when AlphaFold first came out, it was unclear if we had just solved a grand challenge or if this was going to be a really practical system. And the practicality became much more apparent, especially in six months, when it was available to everyone.

I think, uh, you know, people started to say, wait a minute, it's actually solving my problems. Um, the other thing, and we see it here all the time from biotechs using it:

I think when AlphaFold 3 came out, it was apparent that the same thing would describe protein-small molecule binding. And of course we have work within Alphabet and Isomorphic Labs that's really about how do we take these technologies and make them better. Yeah.

And I think all of this is to say, one, you know, certainly one of the big drivers of cost for biotech is clinical trials. So if you want to increase the success rate, you need to understand biology better.

Yeah, I think there's been some really important work in terms of technologies either using AlphaFold or downstream of it, such as the really incredible advances in protein design. And maybe not in the public markets, but if you look at private valuations, from everything from EvolutionaryScale to Xaira to others, you've seen some really enormous funding rounds, where certainly the startup community believes that this is going to hit it big. And I think you do see some effect, in that you need more technologies on top of this, and it's not a single problem, but I think it will also make a larger and larger difference. Of course there are more problems to be solved. And without jumping into the valuations, you also see these strong valuations on the companies that are trying to figure out how to use this and how to say, kind of, this door has been opened. Do you think, in terms of AI and biology and predictive biology, that the FDA will be able to adapt?

Do you think the FDA will be able to adapt quickly enough to advancements in research at the intersection of AI and biology? Like, is there anyone at the FDA that's AGI-pilled and is, like, you know, thinking 5-10 years out about how the underlying systems need to change? That's a very interesting question.

Yeah, because what will be concerning is, like, if we have all this great technology and we can't ultimately decrease the cost of drug creation, if all the costs are in trials. Yeah, that's a great question.

Well, so one thing I'll say, and I'm outside my area of expertise and I don't have any contacts at the FDA, but let me say this version of it. These are tools that make predictions. You still check them.

You don't check them by doing the exact same work as you were going to do otherwise, but you check the consequences. You say, if this is true, then this change or this drug will have that effect. And what you ultimately look at in terms of safety and efficacy of drugs is real-world evidence.

Now, you look at clinical trials, and they have this 90-plus percent failure rate. So you think about that as being really one of the key drivers. It's not so much, are we just going to totally trust computation, we'll just do AI, and because it says AI, we'll kind of leave it.

It's going to be that we're going to do AI, and then when we go in the lab, we're going to have a much better sense of what's really going on. We're going to have much better predictions, and that's going to let us do the right experiment in the first trial or the third trial instead of the hundredth trial.

And that doesn't necessarily require you to shift your standards of evidence for what counts as evidence of safety and efficacy, so much as it's about, you know, you as researchers, where you have a lot of freedom in how you choose the experiments you do.

How do you choose the right ones that will advance you toward treating that disease? That makes a ton of sense. Uh well, thank you so much for joining. We'll let you get back to it. I know it's a busy day over there at AI Startup School.

We really appreciate you taking the time and we'd love to have you back to go way deeper when we have more time to dig into everything. So, thanks so much for joining. Thanks a lot, John. It was a pleasure. Thank you. Have a great rest of your day. Cheers. Talk to you soon. An absolute legend. An absolute legend.

An absolute dog. Dog. That's probably the first time he's been described that way. Yes, it probably won't be the last. Definitely won't be the last. Uh, it is amazing. I mean, we should have gotten in... I'd love to go deeper into the way DeepMind has been reorged a bit. Google Brain has merged in.

They're working on a lot of different stuff, and the science side and the research side are something that Google has been fantastic at for decades and continues to be. So, lots to dig in there. Anyway, let's go back to the timeline and go through some posts.

Marvin von Hagen says, "Plane flying over Stanford graduation right now. Congrats. Don't work for Elon. 20 years after Steve's 'stay hungry, stay foolish' commencement speech." So, this is, like, a pro-Trump post. Like, they don't want you to work for Elon because they're on Trump's side.

They must be extremely MAGA-pilled. Yeah. Well, it isn't, right? They must actually just be in favor of big government, you know, massive spending bills. That's probably what's going on there. Um, now, I always wonder with this type of thing.

I mean, this is the person that's going to go through the effort to... I imagine they're there. I mean, you've got to give it up for them, because they're doing some outdoor advertising. We love out-of-home.

And so, you know, if you have a message to get out, regardless of whether or not we agree with it, we always support free speech. And free speech and out-of-home advertising combined is kind of the sweet spot. So, yeah.

So, if you have a good message to send, uh, throw it on a plane. There was a big debate on the timeline. I think this works better than skywriting. By the way, skywriting is potentially overrated. I think so. It just dissolves too fast. Totally.

And you can't really get a message out. But this, I mean, you look at this photo, and if you go to the next slide, you'll see it's zoomed in. Like, it is incredibly legible. Yeah. Incredibly legible. And so, um, yeah. We should have had a competing plane there. Congrats. Please work for Elon.

Yeah, Elon, where's your "consider applying to roles at companies like SpaceX, Boring Company, Neuralink" plane? Yeah, just fly hundreds of planes. I have been seeing stuff on the timeline about, like, oh, maybe there's a talent exodus from some of the Elon companies to other companies. There was one post I saw.

One post you saw. And I wonder how real that is. You know, some of the stuff can be trash. The thing is, Elon's also laying people off. So, like, who knows? Yeah.

And when you're at a scale like Tesla or any of these other companies, at any given point there's a bunch of people leaving to go work on other things. Microsoft laid off a thousand people, and it's like 1% of the workforce. Yeah. It's like... Yeah.

Well, the big debate over the weekend was between you and Ashlee Vance. It was more of a collaboration than a debate. More of a collaboration, but, um, really trying to get to the bottom of an important story. Ashlee said, "Why is the US not breeding tens of thousands of gorillas?"

I want to know what sparked that post for him. Like, why did he choose to post that? He just got so gorilla-pilled on a Sunday. Anyway, I said I'm about to find out. I went to o3 Pro on ChatGPT. Very funny. Tyler Cowen retweeted this, which I love. Um, and there are some answers.

Gorillas reproduce very slowly, so it would take a long time to scale up. Uh, conservation programs keep the population small on purpose. I don't like that.

US zoos follow a coordinated plan called a Species Survival Plan, which aims to keep around 350 to 355 gorillas. The conservation program is against wide-scale mass breeding. That is crazy. This number's chosen to preserve genetic diversity over the long term. I feel like if you grew the population to 10,000, like, boom, you would have... I guess overbreeding leads to surplus gorillas with nowhere to go. What about our studio? We have plenty of space. We'll take some gorillas. I think you could scale up the number of zoos as well. And then also, like, obviously put them to work, like, you know, horses. No one's like, oh yeah, you know, too many horses; you can just ride them around. Everyone can have a horse. Everyone can get a gorilla. Yeah, they probably need to be domesticated. The acceleration of gorillas, too, could be great for city transportation, you know, stop-and-go traffic. Gorillas are really quick off the... Maybe gorilla domestication is the next biotech project. Let's check in with Tyler Cosgrove over on the intern cam and get an update. Oh, okay.

He's moved over to big pharma. He's at a biotech company. Give us an update. What have you learned? What's new in your world? I'm still looking at, you know, various drugs, but I have found some interesting people. I would like to talk to you about this. Okay, hit us up.

Um, so the first one is, uh, of course, Derek from More Plates More Dates. Fantastic. So, everyone, you know... he's famous, he makes YouTube videos about, you know, PEDs. So, um, I think his experience would be useful for, for one, just the increased, uh, attractiveness. Oh, yeah. That'd be big.

You know, he has a long history of, um, like, functional looksmaxing. Yes. Um, and also for, like, athleticism, right? If I'm gonna be chasing balls. Yeah. Um, I need to, you know, be stronger. The fast-twitch muscles have got to be working overtime. So the next guy is, uh, Mike Henry.

He's the CEO of BHP Group, which is the largest publicly traded mining company in the world. Why are we mining? We're mining because I need lead to make me dumber. Oh, okay. Yeah, obviously he should be an expert in lead there. Mix that in, you know, mining company.

So, making more of a cocktail than a one-shot drug. Got it. Okay. And then, uh, so there I have, you know, looksmaxing, and I'm making me dumber. Yes. So the third one I need... Well, I guess there's friendliness. Friendliness, but also I need to have more hair, obviously, right? Because I'm a dog.

So I'm going to talk to, um, I think it's Guinter Kahn. He created minoxidil. He did. Okay. Interesting. Is he alive? No. Oh, okay. Well, we need a live person. Well, actually, he might... I'm not sure. I'll look into that. We'll have to figure that out. Um, we need experts we can bring on the show. Yeah.

So, those are the three I found so far. I'm going to keep looking at... We need friendliness. That's the key one that I'm most worried about. That's the most elusive drug, the one for friendliness. And that's the one that we need. We see a lot of negativity on the timeline.

We try and encourage people with the "what would your mother do" ethos. Um, imagine being able to send a link to them and say, "Hey, take this drug. Injecting this once a week, you'll be friendlier on..."

It could make you, you know, an order of magnitude more friendly on the timeline. No more aggressive quote tweets, no more clapbacks. Those will be a thing of the past. I didn't actually know that golden retrievers actually have pretty significant intelligence when compared to other dogs.

They're the fourth most intelligent breed behind border collies, poodles, and German shepherds. Narrative violation. Narrative violation. This is what you want. This is the guy who's on the left of the bell curve meme.

That's where everyone thinks the golden retriever is, but the golden retriever is secretly the guy on the right. Yeah. But that's the beauty is that they're the same guy, you know, like he doesn't feel the need.

The golden retriever just is. Because in the age of artificial intelligence, AI is going to increasingly push everyone who's trying to be the guy on the right into midwit territory. Yeah. And so you want to just go full guy on the left, or at least have people expect that you're the guy on the left.

Well, we have a post from Bucco Capital. Wait, really quickly. If you're breeding gorillas and you're going to try and sell these gorillas across the world, you're going to need Attio customer relationship magic. Attio is the AI-native CRM that builds, scales, and grows your company to the next level.

Anyway, there's someone in the chat right now saying, "Attio has a great UI. I would invest in them." Let's go. Let's give it up for Attio. Attio. Get on there. You can check it out. Let's go over to Bucco Capital Bloke. He says, "Apple has more of an impact on job creation in China than all of China has on America." Wow.

This is obviously... Yeah. So, there's an entire paragraph here. The size and influence of Apple aren't properly understood, in part because they are so difficult to fathom. How can it be, for instance, that demand from China's 1.4 billion people indirectly supports, across all industries, between 1 million and 2.6 million jobs in America? Whereas by Tim Cook's estimate, Apple alone supports 5 million jobs in China: 3 million in manufacturing and another 1.8 million in app development. That upside-down contrast boggles the mind.

One super corporation has more of an impact on job creation in China than all of China has on America. Yeah, that is... wow. This is obviously from Apple in China. Fantastic book. We had the author on the show, and Tim Cook is quoted a bunch in the book. There's a ton of great scoops.

Uh, I've been listening to the book a bunch, and it's fantastic. There's a whole bunch of really interesting deep dives. Some stuff you might know, but it's woven together in a very interesting way. Highly recommend going and picking up the book, or the audiobook, Apple in China.

Uh, Ben Thompson's been singing its praises as well. He had the author on his show, did an interesting podcast with him, and really dug in a layer deeper. I highly recommend going and listening to it.

Um, fascinating. And it'll be interesting to see how this affects... Apple in China, I think, could be one of those books that becomes a reference point for DC policymakers, in the same way that Chip War became kind of a playbook for the CHIPS Act, and The Hundred-Year Marathon became a kind of playbook for a renegotiation of the trade policy between the US and China. These books don't come along often, but when they do, give them a read.

Make sure it's heavily annotated and highlighted. Uh, anyway, if you're looking to invest in American companies, get on Public.com. Investing for those who take it seriously. They've got multi-asset investing, industry-leading yields, and they're trusted by millions. Go to Public.com.

Uh, what is this thing about Frontier Valley? I saw you post. I put this in there. Uh, I invited the founder, James, on the show. So, Frontier Valley is a new special regulation district in SV that's larger than MoMA.

Once approved, it will be America's epicenter for physical AI and deep tech innovation, and will also be a template for robotics-first cities that can be replicated nationwide. What is going on here? Is this on top of an airport or something? Like, where are they going to build this?

Is this landfill? Like, this looks amazing. It just seems so ambitious. I don't know how they're going to pull this off. Um, anyways, I invited the founder on to learn more. I thought that it at least reminded me of some of the stuff that California Forever is working on around shipbuilding.

So, anyways, we need more big, ambitious projects like this, and I hope they can figure out a way to pull this off. Yeah, seems like it's gaining steam. Uh, post in here from Reid Hoffman. Reid Hoffman says, "Some AI industry leaders are predicting white-collar bloodbaths.

Even the most inspirational advice to new graduates lands like a band-aid on a bullet wound. Some thoughts on new grads and finding a job in the AI wave. Got to go into art history. It's the name of the game. Huge opportunity to go into art history.

" Uh, Reid says, "What you really want is a dynamic career path, not a static one. Would it have made sense to internet-proof one's career in 1997, or YouTube-proof it in 2008? When new technology starts cresting, the best move is to surf that wave. This is where new grads can excel.

College grads almost always enjoy an advantage over their senior leaders when it comes to adopting new technology. If you're a recent graduate, I urge you not to think in terms of AI-proofing your career. Instead, AI-optimize it." Sure. I think that's a good framework.

What do you think internet-proofing one's career in 1997 would have been? Like, it's unfathomable, because, like, the internet... Don't start a print newspaper. Okay.

Or don't go work for one, I guess, and instead go work for... No, but the immediate takeaway I have here is you can be worried about job loss from AI and try to pick the right job that's not going to be replaced, or you can proactively figure out how to leverage AI to be just vastly more efficient and productive.

I'm just thinking about how, like, internet-proofing feels like find a career that will not be disrupted by the internet, or YouTube-proofing means, like, find an industry that will not be changed at all by the existence of YouTube. So it's not about, okay, like, what was disrupted by YouTube.

I mean, I guess potentially, like, linear advertising or linear TV, and that's still... boom. Like, if you went into reality TV in 2008, like, you did great, I imagine. But then also there was, like, kind of the death of Hollywood.

Well, the thing about people that are entering startups, or are in the industry already: if you want to generate massive wealth and have massive impact and work at a company that becomes significant because of your participation, you oftentimes have to work in an industry that's experiencing, you know, rapid growth, right?

And so, you know, joining the traditional entertainment world, uh, or getting into the reality TV business when YouTube was taking off... Yep. Probably not going to have that sort of ridiculous rapid growth. Yep. Um, you know, maybe working for, you know, some creator, or actually joining YouTube itself. Yeah. Yeah.

I mean, certainly he's saying, like, ride the wave, lean in. I'm just wondering about, like, the idea of internet-proofing a career, or YouTube-proofing, or AI-proofing a career. It all feels the same. It's, like, become a furniture maker or something, or become a plumber.

Um, they're all kind of the same idea. It's like, if you're a lawyer who's trying to internet-proof and says, like, we're going to be the one law firm that doesn't use the internet, like, that would be a disaster. And it's kind of unthinkable. And same thing with YouTube.

If you're working, like, in a marketing agency and you're like, "Oh, we're going to be the best at content