Recursive Intelligence emerges from stealth with $650M and a team of AI luminaries to build recursive self-improving AI
May 13, 2026 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Richard Socher
Speaker 4: Good to see you, boys.
Speaker 3: Good to
Speaker 1: have you in person. We'll talk to you soon, darling. Have a great...
Speaker 3: Say hi to the team.
Speaker 1: Today. We have Richard Socher from Recursive in the waiting room. We've been keeping him waiting, so we will bring him in to the TBPN Ultra Dome. I love when companies emerge from stealth. Nothing makes me more frustrated than them. We've talked to Richard before.
Speaker 2: Yes. Welcome back.
Speaker 1: But we're very excited about...
Speaker 2: We've talked to you before. Yes. Was that... am I... that was a bad joke. You.com. Right?
Speaker 1: Oh, that's right. That's right. Yes.
Speaker 2: Great to
Speaker 1: see you, Richard. Let's but but even though you've been on the show before, let's reintroduce the the the the company and and sort of set the table for us on what you're building.
Speaker 11: Yeah. So, still happily running You.com and AIX Ventures, but now very excited to start Recursive with seven incredible cofounders. At Recursive, we're looking to build recursive self-improving superintelligence to automate knowledge discovery. Yeah, it's a lot to unpack there. You know, we're getting closer and closer to AGI. I think we just have to set our sights, set our goals, now on superintelligence. I think AI is code, and AI can code. So now we can actually put those two together and, in an open-ended fashion, let AI actually improve itself.
Speaker 1: Yeah. Is the entire focus on AI research, or are there other areas of research that you're interested in? You see a lot of labs do research on cybersecurity, or on bio, or science. There's a whole bunch of different initiatives. How focused do you want to be on AI research specifically?
Speaker 2: Yeah. Part of that focus is that if you can get RSI actually going, then everything else you can enter, potentially, seemingly very easily.
Speaker 3: There you
Speaker 11: go. You kind of answered the question. That's exactly right. I think for now, as a company, you want to have a very strong laser focus, and that laser focus for us is on enabling AI to automate the scientific method on itself: the ideation, implementation, and validation of ideas. Right? And once it's gotten really, really good at that, which means it can be fully recursively self-improving, then we're going to apply it to a whole host of other applications, first in software and, you know, broadly construed digital work, eventually even physical work. And I saw your previous show just now. I'm very excited about preclinical biology and the impact AI can have on that.
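That ideate-implement-validate loop can be sketched as a toy program. All function names here are illustrative placeholders, not Recursive's actual system; the "validation" step is a stand-in objective rather than real model training:

```python
# Hypothetical sketch of the loop Socher describes: the system proposes an
# idea (a variation), implements and validates it, and keeps only verified
# improvements. Names and the toy objective are assumptions for illustration.
import random

def ideate(best_params):
    """Propose a variation of the current best configuration."""
    return {k: v * random.uniform(0.9, 1.1) for k, v in best_params.items()}

def implement_and_validate(params):
    """Stand-in for training/evaluating; returns a score (higher is better)."""
    # Toy objective: score peaks when every parameter is near 1.0.
    return -sum((v - 1.0) ** 2 for v in params.values())

def research_loop(initial, iterations=100):
    best, best_score = initial, implement_and_validate(initial)
    for _ in range(iterations):
        candidate = ideate(best)                   # ideation
        score = implement_and_validate(candidate)  # implementation + validation
        if score > best_score:                     # keep verified improvements
            best, best_score = candidate, score
    return best, best_score

best, score = research_loop({"lr": 0.5, "width": 2.0})
print(best, score)
```

The structural point is that the score can never get worse: candidates are only adopted after validation, which is the "scientific method" discipline being automated.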
Speaker 1: Talk about the round, the structure, who's in, how much did you raise?
Speaker 11: We raised north of $650 million.
Speaker 7: We can pay for you.
Speaker 1: There we go.
Speaker 2: Walk us through the pitch that you gave investors. What's your right to win? You're one of, you know, many teams working on this problem, including all the existing labs. But I'm sure that the pitch was compelling. Otherwise, you wouldn't have gotten $650 million.
Speaker 11: That's right. Yeah. At a $4 billion pre, led by Google Ventures and Greycroft, with some amazing participation also from NVIDIA
Speaker 6: Yeah.
Speaker 11: AMD, and many other incredible funds that we're very grateful for, angels, strategics, and so on. Our right to win is essentially fourfold. It's the focus. Yes, we know that everyone works a little bit
Speaker 1: Yeah.
Speaker 11: on self-improvement, auto research, things like that, but no one is fully focused on this. We have the team. We have incredible cofounders: Tim Rocktäschel, who invented Retrieval-Augmented Generation with others, and Rainbow Teaming and Promptbreeder and Genie 3, which is, like, the most sophisticated world model that Google DeepMind recently launched. Like, some incredible talent. We've got Jeff Clune, who has worked together with Tim on many things. He's done incredible papers like the Darwin Gödel Machine and HyperAgents with some of his students, who are also with us. We've got Tim Shi, who built Cresta into a unicorn, AI for service centers. We've got Josh Tobin, who's one of the earliest people at OpenAI and eventually led some of their work on ChatGPT agents, Codex, Deep Research. We've got Samin Shiung, co-inventor of prompt engineering. We've got Alexey Dosovitskiy, co-inventor of the Vision Transformer. We've got Yuandong Tian, who used to lead RL at Meta. I mean, this is such an incredible team.
Speaker 2: Yeah. That's pretty compelling.
Speaker 1: Then how are you thinking about scale being all you need, or not? Is there a capital war, or is $650 million enough to do the level of training or inference that you need to actually reach RSI? Because a lot of the labs, at least, are sort of, you know, putting the AGI and ASI terms in orders of magnitude of compute or orders of magnitude of dollars. And so is there some sort of race condition? Or do you think that there's a more, like, elegant solution? We need new algorithms, I guess.
Speaker 11: It's a great question. Certainly, $650 million, in this league that we're now in, is step one. Yeah. If you want to compete at that level, if you want to build massive frontier AI, you have to eventually raise more. At the same time, you know, this will last us for quite some time
Speaker 1: Yeah.
Speaker 11: unless we're seeing enough progress that makes us want to accelerate. Right? And if we then prove a new kind of scaling law
Speaker 3: Sure.
Speaker 11: where more compute results in more inventions, more capabilities in AI, more automation in the process of AI research itself, then we're probably going to accelerate that. And to be honest, the team has already made some incredible meta-inventions that led the AI to make its own inventions. And so we're really excited about the progress, and, like, that will accelerate.
Speaker 2: Yeah. Do you view today's coding agents as somewhat of a form of RSI, or is that the wrong, you know, kind of wrong category?
Speaker 11: I'm glad you asked that question. The current models are great for auto research, which is a step towards RSI, but you're not doing true recursive self-improvement if you don't have full control over the entire self, namely the model, the architecture, the infrastructure, its entire harness, and all of the things that go into the training: pre-, mid-, and post-training, all of these things. And so it's a useful step to use coding agents for auto research, but it's just a step towards the true RSI vision.
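One way to picture the distinction Socher draws is to enumerate what each kind of system can modify about itself. The class and field names below are hypothetical illustrations of his list (model, architecture, infrastructure, harness, training stages), not anyone's real API:

```python
# Illustrative contrast: a coding agent can edit its harness (prompts, tools)
# around a frozen model, while the fuller "self" of true RSI also includes
# the architecture, training recipes, and weights. All names are assumptions.
from dataclasses import dataclass, field

@dataclass
class AgentHarness:
    """What today's coding agents can already modify about themselves."""
    system_prompt: str = "You are a research assistant."
    tools: list = field(default_factory=lambda: ["search", "python"])

@dataclass
class FullSelf(AgentHarness):
    """The larger 'self' that full recursive self-improvement would control."""
    architecture: str = "transformer"
    pretraining_data: str = "web-corpus-v1"
    midtraining_recipe: str = "long-context-extension"
    posttraining_recipe: str = "rl-from-verifiable-rewards"
    weights_mutable: bool = True

agent_scope = set(vars(AgentHarness()))
rsi_scope = set(vars(FullSelf()))
# The difference is everything a coding agent today cannot touch.
print(sorted(rsi_scope - agent_scope))
```

The point of the sketch is that the harness fields are a strict subset of the full self: auto research with coding agents optimizes within `agent_scope`, while the vision described here puts the whole of `rsi_scope` inside the improvement loop.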
Speaker 2: Got it.
Speaker 1: How are you thinking about standing on the shoulders of giants, broadly? I mean, you mentioned a bunch of researchers who have published a bunch of papers, and obviously you're integrating that. But is there a way to pull forward progress, or, like, catch up to the frontier faster, based on open source models, open source research, open source datasets, or distilling or integrating existing open source projects? I would love to know your thesis on leveraging open source and tweaking and fine-tuning and iterating, versus starting from a completely blank sheet.
Speaker 11: Great question. Yeah. I think as with many things and similar to the applications of RSI, you wanna start with a laser focus and then open up the aperture
Speaker 8: Mhmm.
Speaker 11: to more and more things. And 100%, you want to stand on the shoulders of giants, both, like, closed and open source. Open source has done an incredible job bringing more and more capabilities to, you know, the community. And so we'll use anything and everything that we can, but we also know that long term, you're going to want to own, be knowledgeable about, and be able to build the entire stack.
Speaker 2: Yeah. Now that you have your own funded NeoLab, do we need more new NeoLabs or do we have enough?
Speaker 11: I'd like to think so. One, we want to build a real company. Right? We're not just a lab. I sometimes struggle a little bit with that terminology. I think there are a couple of labs that are truly just labs, and it's a little bit unclear where they're going to go long term. We have folks that have also built unicorn companies from scratch, with revenue and so on, and have launched amazing products, like, at OpenAI, Codex with Josh and others. So we're excited. But at the same time, this machine we're building is almost like a eureka machine, in the sense that, like, it'll keep making inventions for you. And so a lot of labs are focused on, like, continual learning and world models and maybe memory and all of these other aspects. I believe those are all important aspects. I believe that if we are successful, those are all just special cases and outputs of this machinery that we're building.
Speaker 1: So, I mean, how close are you? I imagine that this is a whole process. But how close are you to thinking about a product for businesses or a product for consumers, an API, a web playground, a demo? Is that something that, by design, you want to be super fast about, to get something out into the world and have some impact in the interim? Or do you want to keep everything closely held for potentially a really long time? I don't know, forever, potentially, if you're just gonna...
Speaker 2: Yeah. When I hear "eureka machine," I'm keeping that to myself, personally.
Speaker 1: Yeah. What's up?
Speaker 11: Initially, we thought about, like, really giving ourselves, like, a year or more of time. Mhmm. In the last few months, the team has already made some incredible breakthroughs, and so we may accelerate some of the productization of this research to come out earlier. I don't want to commit yet to a specific date or anything, but, like, I think, you know, we want to build a viable company here that brings superintelligence and allows it to have massive impact, to help humanity flourish. And that won't happen in just a research lab.
Speaker 2: I love it. Could you ever see a scenario where you create something that would make more sense to spin off into another company to productize? Or do you expect to, you know, basically use all of your creations internally and productize them yourself?
Speaker 11: I can see how our customers will use this in incredible ways, so that they can then create new kinds of products and new kinds of product categories. I think we've seen similarly incredible new companies that have used AI and code generation to enable others to build their own businesses. You know, Replit, Lovable, and so on are great examples of that. And I can see our technology doing something similar at an even more foundational level.
Speaker 1: Amazing. Thank you so much for coming on the show. Congratulations on the progress.