Dwarkesh Patel on AI acceleration, the missing ChatGPT moment for agents, and what's underrated about intelligence
Mar 28, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Dwarkesh Patel
Guys, it's great to have you. Good to see you. I heard you've been doing spaced repetition to practice for interviews?

I have been doing it. It says your name is Dwarkesh Patel, you have a podcast, and you interviewed somebody named Mark who owns a website called facebook.com. Can you tell me about that?

Yeah, in fact, I made an entire Anki deck just for you guys, for prepping.

No way. How's the book launch been going? How's the press tour? How are you doing today?

It's been going good. I have a book out called The Scaling Era, and it compiles the interviews I've been doing about AI, with people like, as you mentioned, Mark, and Demis and Dario, the heads of the AI labs, but also researchers and engineers and philosophers and economists. It's been really interesting, because AI is one of those topics where there are so many fractal questions you could ask: what is its impact going to be, how are we going to train it, how do you even think about a superintelligence? So I've been dealing with a lot of different kinds of questions, which has made it interesting.

That's great. How contentious was the book process? Did you know you wanted to go with Stripe? I imagine you could have had your pick of the litter in terms of legacy publishers that promised you all sorts of things, but you happen to own your own distribution, so maybe it was just about picking the right underlying partner for it.

It honestly was never a matter of picking a publisher. The main question was whether I should do a book in the first place. Some folks at Stripe Press reached out, and if I was going to do a book, it would be with them, because, as you know, their reputation precedes them. So then it was just deciding: do I want to do the book or not? In retrospect I think it was the right call. I'm really delighted with how it turned out.

I want to talk about acceleration. Are you feeling the acceleration? Mathematically, we are not accelerating GDP yet, although technically we are today. I think GDP ticked up just a little bit, which means we're technically accelerating. It must be your podcast.

Yeah, and hopefully we're responsible,
but, you know, energy use is not accelerating, and even some of the benchmarks are kind of saturating. We're not seeing acceleration curves; we're seeing solid growth. At the same time, it feels like we're on the precipice of acceleration. What does it mean to feel the acceleration, for you?

I think it's a really good question, because we have these models which we think are smart, and, as you say, we haven't seen them automate even the things where, when we're having a conversation, we'll be like, oh, the call center workers should be really worried. And they've still got their jobs, right? So what's going on? As you know, people have been talking about what we need to make the models cheaper. When DeepSeek came out, people were like, oh, it's going to be Jevons paradox, and we'll be using them way more now that they're cheaper. I think the real bottleneck is just that we've got to make them smarter. They're already so cheap; it's like 2 cents per million tokens or something ridiculous. The real bottleneck for me using them more is not their price but them being more useful, being able to take over more parts of the economy.

Yeah. Do you think that intelligence is all we need? Andrej Karpathy was talking about the importance of agency. I've talked to other people about how maybe what makes humans effective is not just intelligence; it's also agency, coordination, friendliness, networking. Tyler Cowen talked about whether we even need to map the different parts of the skill tree that humans have, like charisma and wisdom. The underrated take is that the AIs are getting more intelligent, but they're already maxed out on wisdom, right? Do we need to think about a different taxonomy here, or do we just need to max out intelligence and everything else will come?

No, I think you're absolutely right. I think you need a lot more skills. There has been this trend in AI where whenever there's a big breakthrough we think
we've automated a large part of what intelligence is, and in retrospect it's clear that it was only a beginning. The big example here is when Deep Blue came out and beat Kasparov at chess. People thought this was a big breakthrough in intelligence in general, because we thought that what chess required was general intelligence. You might have heard this concept of AI-complete problems, where if you solve this problem, then you've solved intelligence. People said that about self-driving; the Turing test was supposed to be AI-complete. We've gone through all of these subcomponents of intelligence, and afterwards we realized there's actually still more left to it. The thing that's underrated is not even agency per se, although that's a part of it. I think the thing that's underrated is that we humans have this global hive mind. The reason we can make iPhones and we can make buildings and whatever is not just intelligence, and also not just agency; it's the fact that there's so much specialization, so much capital deepening. People are just doing things, trying different ideas. AIs need to be smarter in order to do that, they need to have more agency in order to do that, but once they can, if you have millions of AIs running around trying different things, that's when we get the real acceleration, and you'll feel it in your blood.

When you say millions of AIs: you've said that you think there will be billions of AIs running around. What does that actually mean? How can we quantify it? Are we just talking ChatGPT DAUs? Are we talking about individual threads, because you can inference multiple threads on a single chip? How are you thinking about it? It sounds more concrete when you say there will be billions of AIs, but is that really just that there will be a hive mind that is equivalent to billions of people? Or is maybe each model one entity
but then there are sub-threads? How do you think about that concept of billions of AIs?

Honestly, I don't think anybody knows. I think it'll just depend on how the tech tree shakes out.

Sure.

I have heard these wild ideas in some of these interviews. One person, Ajeya Cotra, mentioned this idea of the blob. The blob is: right now, if you have an institution or organization or company, it's really hard for the person at the center to have that much awareness of what's happening in the company, to control it to any great extent. Xi Jinping has the same 10^15 flops in his brain as any other Chinese person, or any other person in general. In the future, you can imagine that the thing at the center just has way more compute, and it's not clear whether to think of that as more copies of an AI Xi Jinping or something, but you could just have this huge blob that's constantly learning more things. It's writing every single press release the company releases, it's reading every single pull request, it's answering every single customer response. I don't know if that at all helped answer the question.

Yeah, it's just a very weird question of how this will actually play out. Do we need to recreate the concept of an individual brain and then copy-paste it a billion times, or do we just need one really big brain? It's unclear to me. I don't know. Jordi, what do you got?

How did you feel on Wednesday, the sort of Ghibli moment? A lot of people were saying, oh, I can't even go on X, it's just all slop. And my takeaway was: this is not slop, this is beautiful. This is actually the most beautiful the timeline has ever looked. And the most powerful thing about that moment was that it was the first time I felt the entire world could get consistently perfect outputs without any sort of prompt
engineering, basically just one-shotting these outputs in this very scaled way. To me, I get very excited about it, because having any human be able to create beautiful images out of text is fantastic, and I think we should be excited about that. But how did you react to some people saying, oh, this is bad, or, I'm going to log off forever? You saw people that were just like, okay, I'm going to delete my account and leave X now, it's over.

How many times have those people said that, exactly? Some people were saying things like, oh, you're eating up this finite fossil fuel, which is our affection for Ghibli, by making these things. I think that's just a very zero-sum view of the world, where there can be a limited amount of beauty, a limited amount of joy. I just don't think that. And can I be honest? The thing I was feeling when all these Ghibli images were coming out is that I became more convinced of our glorious transhumanist future. Look, you're getting a glimpse, just from these early images, of how cool and beautiful the things AI makes, or helps us make, will be. Just imagine this scaled up 100x, 1,000x, integrated into all our senses, maybe even into our minds, integrated into the way we relate with the people we care about, and so forth. The future could be really beautiful.

I agree. Yeah, I'm wearing VR goggles and it's making everyone beautiful. It's really rose-colored glasses, right?

Yeah, to begin with.

How do you feel about this: Manus AI is apparently doing a road show in the United States right now, raising from American VCs, potentially. I saw some people pushing back on the timeline, saying, bad look for any American VC that does that. How do you feel? The criticism would be that, you
know, we're in this sort of AI Cold War, and American venture capital dollars shouldn't be funding companies that are potentially competing with US AI labs or application-layer companies. What's your broad take on this sort of cross-border investment in AI?

Yeah. I was in China a few months earlier, and it was really striking to me how dismayed the venture capital system there felt, and the tech ecosystem generally, because after the 2021 crackdowns people just really pulled back. It sounds like after the DeepSeek moment that sort of changed, at least in AI, because now the state, you know, the city funds and whatever, are more willing to push in. And how should people react to this? I'm of two minds, because, one, I do believe there could be an intelligence explosion, and you really want to be ahead of that, and you don't want to help them get ahead on that. So I think the export controls and whatever are wise. As for Manus, it seems like it's in the middle ground here, where I wouldn't want to just generally try to harm China by tariffing batteries or cars or something. This is an application of AI, and it's complementary to American AI foundation labs, because they're using the Claude model, right? So, honestly, I don't have a strong take. What do you guys think?

I'm exactly in the same position as you; I don't have one either. I did find it a little bit weird that some American venture capitalists were just cheering on DeepSeek, with blanket statements like, open source is good, this is good, when it felt like the way the launch was rolled out and announced was done in a way to potentially harm American financial markets. But my take has been that, just on a pure investment finance level, investing in a Chinese company can be difficult, because at a certain point it can be hard to get your
money back. The money just gets kind of stranded there, and then, depending on who's in charge of America at the time, it could be very onerous to bring that money back.

But wait, I want to stay on China real quick. Fantastic piece, by the way; I don't even know what you call it, a video essay? One of the things that really stuck out to me, as a creator like yourself, was the lack of a Chinese Joe Rogan, basically. And I was wondering, have you thought about that more? Have you unpacked that more? You would almost have expected, even if all the crazy censorship is true, why is there no power-law winner, a Joe Rogan who just spouts propaganda constantly? What is driving the lack of these long-tail, country-famous people?

Yeah, I feel like you guys might actually have a good perspective on this, because somebody might have said, why doesn't tech have its Joe Rogan equivalent, and then you guys started your podcast network. And I think somebody could have said, before you guys started your podcast network, why doesn't something like this exist already? Honestly, I don't speak Chinese, and this is secondhand stuff I heard in China, so I feel reluctant to make grand conclusions about Chinese culture, like, why don't they have a Joe Rogan, because Chinese culture is like this or something. The sense I got was that what people want to consume, whether it's young people or just whoever, is often more focused on practical matters. If you listen to Joe Rogan, it's very much, let's just shoot the [ __ ] about whatever. I get the sense that that is just not that interesting, at least to the people I met.

Yeah, it's less practical stuff. I wonder if there's also an effect where some of the first social networks there probably accelerated more quickly to these
highly diffuse, algorithmically driven TikTok feeds that allow for smaller micro-celebrities, essentially, whereas in America we've been building up this celebrity culture for so long that we have more of a power-law dynamic. I don't know, I don't really have a thesis on it that's super built out, but it is fascinating.

Yeah, that's interesting. Have you been there?

I was there once on a layover.

I did live there for a while. In 2016 I went to Fudan; I worked at a little hedge fund called High-Flyer, no big deal. No, I just studied abroad there, so I was there for a semester, and I worked out of Chinaccelerator, which is a Chinese startup accelerator. A very interesting experience.

The scale is crazy. I was in Guangzhou even just for a day, and the scale of the buildings there really is remarkable. You need to see it in person, because the pictures kind of compress everything; you don't really understand until you're there. The one thing I found weirdly fascinating is that, for some reason, I don't know if it was the iPhone camera at that time, but I found it very difficult to get high-quality images, because it was so polluted. I would take a picture and think, that's not what it looks like. And I realized over time that certain areas were so heavily polluted that the iPhone camera would just kind of bug out.

But I had a question for you. Something from this week that I thought was funny was that we were all kind of holding our breath for the next ChatGPT moment, and then it was also ChatGPT, with their image generation product. Do you think we should be holding our breath for another company to sort of
experience a moment like that, a full, complete takeover of the mindshare? Because mindshare is just so important right now. The benchmarks come out, and everybody in our corner of the internet is hyper-fixated on them, but the average consumer is not. One of the things I thought was fascinating is that John and I had a couple of Ghibli posts each that sort of broke containment, and a lot of people were quoting them and saying, all right, tell me what this app is, what's the joke? They still didn't know ChatGPT or anything like that. But I think it would be amazing for the industry if another company could have a moment that big. Do you see something like that happening this year, or have we sort of reached a plateau? Even the agent stuff, like flight booking, you could see that there's some application that would go really broad. Is there any next milestone that you're waiting for?

If somebody did get a reliable agent to work, I think that would break the internet in a similar way, something you personally could use, where you could just log in. I think it'll probably be one of the foundation lab companies. People have been trying for years to build agents, and they just haven't worked, and it makes me think that's a fundamental limitation of the current models. So it'll just be the company that is building a future model geared towards computer use and so forth. That is what I'd expect it to be. I was thinking the other day: remember when Sam got fired and people were posting on Twitter, there's something here, "what did Ilya see?"

Yeah, yeah.

I don't think that's why he got fired, but it is notable that, in retrospect, if you were following the Q* rumors, you actually would have been in a good position to anticipate that there would be this reasoning
breakthrough. That's kind of what they were talking about at the time. Similarly with GPT-4.5 and Grok 3 being not that much better: if you were following Twitter six months ago, you would have seen, oh, pre-training may plateau, and we'll have to go with inference scaling or something.

Yep.

So maybe my update has been that you can sort of know what's going to happen. I remember at the time I was like, ah, these idiots on Twitter, they don't know what they're talking about, it's a rumor mill. And now I'm like, yeah, you take it with a grain of salt, but they kind of had the big picture. There are no secrets.

Yeah, there was always this funny dynamic where people were criticizing Sam for launching ChatGPT without telling the board, but at the same time people were criticizing Sam for being non-technical and not driving the product forward. And I was like, no matter what you think of Sam, those two things cannot be true simultaneously. You have to pick a side; you can't criticize him both for not innovating and for innovating too fast.

But anyway, I want to talk about the flip side of the p(doom) argument that I've been kicking around. We've seen these accelerating trends before. Nuclear energy, "energy too cheap to meter," has been talked about before, and we hit stagnation: our society all sort of aligned to say, hey, you know what, nuclear energy is not going to double every couple of years, and it didn't. I'm wondering, what is your p(stagnation), your probability that something happens, maybe people freak out, maybe there's one world government or something, and we actually see AI stall for a significant amount of time, like 50 years? There's no intelligence explosion, purely for stagnation reasons. Do you think that's a possibility, like 10, 20%?

I think there is a dynamic I talked about earlier, where in the past we have underestimated how much it takes to make a coherent intelligence that has agency and so forth, right?
That could be part of it. Another is that there is no intelligence explosion. Sorry, I mean, the most important thing here is: look, we can keep increasing the compute we're putting into these systems for maybe the next 5 to 10 years, because compute is just growing at this ridiculous rate, where in three years we're going to have 10x the amount of global AI compute that we have right now. But at some point that stops. Right now we're spending 2% of GDP on compute and data centers and stuff like that, and you can't just keep 10x-ing that forever. So if somehow this whole deep learning paradigm is wrong and we just totally missed the boat somehow, then I can see that happening; that's why I give it 10 to 20%. Otherwise, if we do get AGI, I'm of the opinion that it would just be so hard to contain. It's an incredibly powerful technology. Even if there's no intelligence explosion, even if it doesn't help you make an ASI or something, AGI alone would just make the economy explode, and all kinds of crazy [ __ ].

There's a little bit of a devil's-advocate take here, like the GDP question, but also, I've had this idea that no matter how intelligent you are, you can't break the laws of physics. At a certain point you need to get the sand out of the ground and turn it into silicon, and at a certain point, just moving the sand around fast enough, even at light speed, you're not 10x-ing every two years. So it feels like there could be a slowing down even as we're having the robots do basically everything; the robots are still maxed out by physics.

I don't know. I was thinking about this this morning, actually, and the intuition I was thinking about is this: since the 1750s we've had 2% economic growth in the world. Before that it was like a tenth of that, right, 0.2%. If you were around in the 1500s, or the year 1000, and somebody said there would be 2% growth, I think you might, given your reference class, have been like, look, it
just takes a long time to learn how to artificially select crops and how to build new structures and aqueducts and whatever; that is a process that takes a while. So why do you think you're just going to keep increasing at 2 to 3% a year? And in retrospect it is really weird. You look at the last 100 years of history: we're discovering all these new things in physics and chemistry and so forth. In the last 50 years, we start with the transistor, and now we're talking on this magical screen, and physics didn't bottleneck that. I think you get another 10x, and I don't see any in-principle reason why, at the next 10x, physics would just not allow the robots to move fast enough.

Yeah, certainly on the GDP question I agree. I think the energy question is maybe a little bit murkier, but then there are probably other ways to optimize and still get those GDP lifts even with energy growing at a more reasonable, less explosive rate. So I think I agree with you there.

I'm sure you've talked about this in other interviews, and with some of the individuals leading initiatives at these companies, but what's your broad take on Apple's position and how they've been approaching everything in AI? They've led with Genmoji almost as much as they've led with all the potential of what you would want out of an AI assistant. How do you think companies like Apple and Google figure out product development and proper distribution of these products? Because I feel like that's been the big critique. They have every possible advantage, such talented team members, and it has to be so frustrating. It should be a sustaining advantage, or sustaining innovation, if you're looking at the
innovator's dilemma framework, and yet it feels like it might wind up being disruptive. I don't know.

Yeah, they're not AGI-pilled, you know; they treat it like another feature. Well, even if you treat it like another feature, it's mysterious why Siri doesn't work on my phone. But AI is basically like more people, and if you take that seriously, you're not just going to be like, oh, and the 25th department in our complex is about making Siri better at speaking or something. No, it's like: this is the future.

Yeah. And then does that become the giga case for Safe Superintelligence, meaning none of the features or consumer applications really matter at all today, and you shouldn't even release them, and you should just accelerate towards the end goal that enables all the other goals?

That's an interesting point. I think it's maybe somewhere in between. If you hadn't released ChatGPT, you wouldn't have been able to know that this is a feature people really wanted and would get a lot of use out of, as compared to the other things people were using GPT-3.5 to do. And I wonder if other features of AGI will be similar, where if you don't deploy it to a bunch of engineers on Cursor, you just won't know what would actually make something a good coding bot. The counterargument to this, and I think what the SSI people would say, is that they actually are deploying, but they're deploying towards the one thing they care about, which is accelerating AI research, and they don't need to do that externally; they can just do it internally. So the basic question is: can you get this closed loop where you build the AIs which are helping you accelerate AI research, dot dot dot, superintelligence? I'm like 50-50 on that question. But that other 50% is a big deal.

Yeah. You mentioned AGI-pilled. Is there a difference between AGI-pilled and ASI-pilled? And why do OpenAI co-founders seem incapable of starting anything but a foundation model company? I always wondered; I just want one of them to be like, yeah, actually, I'm starting a travel company, because it seems people are going to be traveling a lot after their jobs are automated. Do they only know one thing? Are they all one-trick ponies? I love them all, but it's just funny that none of them started anything else.

Maybe it's the only thing that matters. Yeah, I think you're right. I wouldn't even put it as ASI, because even if you don't believe in this godlike intelligence that's going to control the world, and I'm not sure I believe it either, I think there's AGI-pilled and there's transformative-AI-pilled, where you say: look, even if they're just like humans, they'll have the advantages that AIs will intrinsically have because of the fact that they're digital, which is the fact that they can be copied. So think of the most skilled engineer in your company, like Jeff Dean; you can copy that person with all their tacit
knowledge and everything. You can merge different copies, and scale, and distill AGIs. Those advantages alone, and the fact that there will be billions of copies as we increase the amount of compute in the world, that alone is enough for a transformation in the sense of going from what we were like before the Industrial Revolution to the Industrial Revolution's pace of growth. So I think somebody can be AGI-pilled in the sense that they say, yeah, I expect human-level intelligence to emerge in the next 10 years, but they still don't take that seriously, as in: okay, what does that imply about what is happening through the economy? Does that just mean, oh, you've got a smart personal assistant, or does it mean, no, we're in a very different growth regime?

Last question. Two last questions, I guess. Are there any people in the book that you feel, in the fullness of time, are very underhyped or not getting enough attention? People that are unsung heroes, that maybe don't post a lot on X today, but when we look back 15 years from now, we'll say those were the people doing the work? Because there's this weird phenomenon right now where if you're just loud on the internet, you suck up mindshare and attention, and maybe there's somebody a building over doing more impactful work, really at the forefront, who isn't posting at all because they're actually on to something.

That's a really good question. I think a lot of the people I've interviewed have subsequently, or at the same time, become well known, right? Even if you're not a lab CEO: if you're a Leopold or a Sholto or a Trenton, people know who you are on Twitter as well. The person who I think might still be underrated is an interview I did that we only released in the book. We have two interviews that we kept for the book. One of them
is Ajeya Cotra, and she is somebody who has been doing, since the 2010s, these really interesting analyses of how much compute evolution spent in total over the billions of years evolution has been going. How do we model that as a computational pathfinding exercise, and use that as an upper bound on how long it will take to build AI? And then, how much compute does a human brain use? How much time do we spend learning as kids, and how much compute is that in total, compared to how long it takes to train these models? What does that teach us about how much better these models could get, given this overhang? That has informed, I think, a lot of... oh [ __ ], sorry, this is the wrong answer, although Ajeya is excellent, and she's also underrated. The one who's also super, super underrated is Carl Shulman.

Okay.

I don't know if that name rings a bell for you. You would not believe the number of ideas out there in the AI ecosystem that came from this man, from the software-only singularity, basically the intelligence explosion kind of stuff, to transformative AI and modeling out the economics of this new growth regime, to so much more. It's this one guy; it all came from him. He doesn't like to write that much, so he tells other people his ideas. I had him on my podcast and we put his stuff in the book. He just has all these galaxy-brain takes. One of them was, he looked at the research on what changed between chimpanzee brains and human brains, and he's like, oh, there are a bunch of structural similarities; it's just that the human brain is bigger. So this lends credence to the scaling hypothesis. All kinds of stuff like that.

That's amazing. I have one more question and then we'll let you go. By the way, X is down. I think you broke it, or we broke it together.

We broke it together. But yeah, we're still live. But
how do you think about the business side? Everybody's been fascinated with your journey. When you started your podcast, everybody would have said there are enough podcasts, we just don't need more of them. That clearly is not true; there's plenty of white space, and your growth is proof of that. But how do you think about value capture in what you're doing? Because I'm sure you've had people come to you and say, hey, look, just keep doing the show and we're going to give you $50 million of shares.

Nobody has come to me with that.

Well, they should, I think. But how do you think about it? There's this general fear in the tech community right now that this is the last two to three years where you can accumulate wealth, and then it's over. I don't believe that's true, but how do you balance it? You're doing something you love all day long, which is talking to interesting people and thinking about the future and humanity's potential and technology and all this stuff. But how do you balance that fear, and wanting to capture value from your work, with also wanting to not be conflicted, and being able to just be this independent actor?

Yep. I'm actually very curious about the answer for you guys, because the network you're starting now started with a much bigger bang than my podcast started out with. So I assume you actually got a bunch of these kinds of offers, right, as people were like, oh, this is new and exciting?

Yeah. We care a lot about being like Switzerland, specifically from the investor side. Long-term, we want to have any sort of
investor be able to come on the show and talk about what they're doing. And I would imagine the same for you: you don't want to be so tied to one foundation-model company that you can't talk about the incredible things someone else is doing, right?

Yeah. I have had different podcast networks reach out to me in the past, and I've seriously considered them. In one case I was close to saying yes. In retrospect (this was many, many years ago, before the podcast had grown much at all) the offer was: we'll edit the show for you, we'll produce it for you, and all we ask is 50% of the revenue you earn in the future. [Laughter]

Fifty percent of your lifetime earnings. Fifty-fifty.

But I had a couple of friends who were like, dude, it's working, just do it yourself. And I'm glad, because another thing that might have influenced your decision as well is that talent is so key. Talent in the sense that, first, I think it really matters to have one, or in your case two, people who say: we care, this is our vision, and we are going to instill it, rather than this being an institution where I'm just the face of it. But secondly, what I was going for is talent as in your editors and the other people on your team. I've been super delighted with the people I get to work with, and the care and attention to detail they have just would not be replicated with "here's a team of editors from the podcast network." Instead, these are people I've sought out and love working with; I give them detailed feedback and they give me detailed feedback.

Yeah, that's also what makes it special. To get into specifics: do you think the general techno-capitalist fear... basically, I think a lot of 22-year-olds
right now are coming into their careers and saying, well, everybody's going to be paper-clipped in a few years, so you've got to create value and capture it now so that you're okay in the AGI future. I do feel like that's maybe a common fear throughout history, where people sometimes have this impending sense of doom. But what would you say to somebody who had that sense?

I think the way to model out the next few years, from a career-trajectory perspective, is that you'll just have 100x the leverage, but you want to be in a position where you can use that leverage. There's a common thing, and I'm sure you're experiencing this now as well: early in your career you're like, I've got a bunch of time but I don't know what to work on, and once you're further along you're like, I have no time, but I have a thousand different ideas for things that would be super valuable or that I think would go really well. So what you should do is get to a point where, in whatever you find interesting or care about, you're at the frontier and can see what the problem space actually looks like. If you care about this, I would really recommend moving to SF, and then just start working on problems and use the leverage that AI gives you. And if we end up getting paper-clipped, look, for you personally, what's the point of not doing anything and worrying about that? In the 80% or 90% of worlds where we don't get paper-clipped, you will get to say you worked on something really cool at a time that was really important in the history of humanity.

That's great. Well said. Thanks for coming on, this was fantastic.

This was fun, guys.

We've got to have you back. Make it a regular thing.

I'd enjoy that, and I'm super bullish on you guys. I mean, your
previous thing is a couple of months in, but this is a couple of weeks in and you're already killing it. That's awesome.

Thank you so much, we really appreciate you coming on. And for the record, we're not building a network. It's actually a head fake. It's just a show.

So it's just a show forever. We'll never come to you and say, give us half for the editing.

You'd want at least 75%.

Exactly. Great having you on. Are you excited for everybody to get access to the book?

Yeah, go get the book. I printed it out; buy the actual thing. From Stripe Press, The Scaling Era is here.

Cheers. Thank you guys so much for having me on. Bye.

See you. Bye. And we've got Casey Handmer coming in. We kept him waiting; sorry about that, Casey, hopefully you're still here with us, because we want to hear about Terraform. Have you seen his office? He works from a castle in Burbank; some guy built these crazy castles. It's a fascinating company. Boom. Hey Casey, how are you doing?

Hello, very well. Can you hear me?

Yeah, we