Recursive Intelligence raises $35M from Sequoia to use AI for chip design and close the hardware-model co-optimization gap
Dec 19, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Anna Goldie
Speaker 2: Welcome to the show.
Speaker 9: Thank you. Thanks very much. I'm excited to be here.
Speaker 2: Thanks so much for hopping on. I'd love to start with a little bit of your background. There's a whole bunch of interesting milestones here. Would you mind introducing yourself since it's the first time on the show?
Speaker 9: Yeah. Sure. Happy to. I guess we could go way back. Oh, sure.
Speaker 9: I studied computer science and linguistics at MIT, and I did my PhD at Stanford in, like, CS and machine learning. And actually, my first job, I worked at TripAdvisor on the China team. So I did full stack web development in Chinese. Yeah.
Speaker 3: Wow. In Chinese. Crazy.
Speaker 3: I guess we'll start with the easy stuff, then.
Speaker 9: Yeah. We can do this segment in Chinese if you want.
Speaker 2: I would be lost, actually. I took one semester of Chinese and I was terrible at it. I only knew how to ask if you want a coffee.
Speaker 3: I'm gonna test your Chinese because I don't want you to test my Chinese. Do you know what that means?
Speaker 9: Yeah. You like beer. You really like beer.
Speaker 3: Nailed it.
Speaker 9: Nailed it.
Speaker 2: Perfect. Yes. Yes. That phrase alone
Speaker 3: That could take you anywhere. Anywhere. In China.
Speaker 2: Anywhere in the world, potentially. So, yeah, take me through some of the first interactions with artificial intelligence, AI teams, chip design, any of that. Like, how did you go from... I mean, TripAdvisor, I don't think they've baked it onto an ASIC yet, maybe in the future. But how did you get into AI?
Speaker 9: I guess, like, the reason I went into computer science is because I wanted to work on AI. Like, when I was in high school, I had no idea what I wanted to do when I grew up. And then I heard this lunch lecture at MIT about, like, computer systems that could understand and generate human language, like, in 2004. And that's why I, like, went to MIT to study computer science, and that's what I've been working on since then. I joined that professor's lab, actually, at MIT.
Speaker 2: Oh, no way. Very cool.
Speaker 9: The Spoken Language Systems group. Yeah.
Speaker 2: That's great. And and then, yeah, what were you doing right before founding the company?
Speaker 9: So I guess I can, yeah. I joined Google in 2013, Google Research. I was working on, like, language modeling.
Speaker 2: Yeah.
Speaker 9: And then I joined Google Brain in 2016. I started a team there with Azalia Mirhoseini on, like, machine learning for systems. Like, how can we use AI to design better chips and computer systems?
Speaker 2: Okay.
Speaker 9: Because our reasoning was that, you know, chips are the fuel for AI. And so if we could use AI to sort of advance the state of computing, we could kind of, like, close this recursive loop.
Speaker 2: Mhmm.
Speaker 9: And we did a variety of projects like Alpha Chip there.
Speaker 2: Foreshadowing the name of the company. I don't know if you wanna jump ahead, but take me through the rest of the career.
Speaker 9: So, yeah, I also worked at Anthropic. I was an early employee there. I had the privilege of joining, like, before ChatGPT and Claude were even released. So I got to work on, like, RL post-training, code generation. It was, like, an amazing experience.
Speaker 2: How was the team thinking about, I mean, at that point in time, pre-ChatGPT, how were you thinking about custom chip development, how important that would become, how much flexibility you would need in the chip architecture to kind of advance the research progress before, like, actually committing to a particular pathway?
Speaker 9: I guess, like, maybe part of the background here is that it takes two to three years now to design a complex chip like a TPU. So when you're designing that chip, you kind of have to predict, like, what AI models or workloads will be prevalent in two to three years. And we can't really do that because the technology is advancing so quickly. So in practice, you're kind of designing chips for current models, and you're leaving a ton of performance on the table. Like, on our team, we ran some experiments where we were designing, like, hypothetical accelerators for particular machine learning models. And, like, you could get almost a 10x improvement in perf per total cost of ownership by doing even naive customization of the chip for the model, and, like, not even being able to change the model.
Speaker 2: Is there a little bit of a, like, shoot-for-the-moon, land-among-the-stars effect going on right now? Because I know that there were a number of companies that did exactly that. They tried to predict, okay, I think we're gonna need a ton of memory directly on the chip for this design, or we need to go wafer scale, or we need to do something else. And they maybe didn't pan out to be the dominant form factor. And at least the narrative has been, like, oh, those companies are kind of written off. And then I'll talk to some lab, and they're like, well, we actually found an amazing use for that particular thing, and we baked this model down, and now we're using a ton of that stuff. And so with these ASICs, is the correct framing that it's important to get it right, that ideally you want to land on the moon, but there are sometimes uses for chips that didn't land exactly where the research direction went, and they're still useful in a niche capability?
Speaker 9: Yeah. I think that there are, like, landings for some of these specialized bets. I would say that, you know, part of the reason we're so excited about this company, Recursive, and, like, shortening the timeline, is I think we can enable many, many more of these bets to really land. There's a huge space of chips that could exist and maybe should.
Speaker 2: Yeah. Yeah. Well, that's a good place to jump into the current business. I'd love for you to introduce it formally, in terms of how you're framing the opportunity.
Speaker 9: Okay. For the company?
Speaker 2: Yeah. For Recursive.
Speaker 9: Yeah. Yeah. Recursive. So we're AI for chip design and chip design for AI.
Speaker 4: Mhmm.
Speaker 9: I guess we see the company in three phases. Yeah. So I could describe those.
Speaker 3: Please. Great.
Speaker 9: Yeah. So I guess the first phase is, let's accelerate the chip design process. Let's take on the long poles. So physical design, for example, designing the layout of the chip given fixed logic, that can take up to a year for a chip like a TPU. And then design verification, so basically verifying that the high-level specification is correctly implemented in the RTL code. That's another long pole. So in this phase, like, we can help chip design companies get to market much faster. Like, maybe it doesn't need to take two to three years. In phase two, though, we would like to go end to end. Given a machine learning model or a set of machine learning models or other workloads, can we design, like, the computer architecture and, you know, design the chip all the way to GDSII, which is the format you send to TSMC for manufacturing? In that case, we could help many more companies design custom chips for their particular workloads. Even if they don't
Speaker 3: Maybe on that point, do you know how many, like, customers TSMC has today versus how many you think they'll have in the future?
Speaker 2: This is exactly my question. Like, how many how many customers are there?
Speaker 3: We see it 100x-ing
Speaker 2: Yeah.
Speaker 3: 10x-ing.
Speaker 2: Because we really only hear about, like, three most of the time. Like, the news headlines are training on NVIDIA GPUs. But I imagine there's a ton more now, but it also feels like you're predicting, and your company's sort of a bet on, like, a Cambrian explosion. Is that roughly correct?
Speaker 9: Yeah. That's right. We think that there are companies that have workloads that they're serving at massive scale. Like, this year, the AI inference market is about a hundred billion dollars, but it's, like, rapidly growing. We think AI is gonna be everywhere, in embedded devices and also in data centers. And if it didn't take two to three years, and if a company didn't need teams of hundreds or thousands of human experts to design their own chips, then we could massively expand the market here. It's pretty interesting.
Speaker 2: I heard an anecdote. I don't remember which company it was, but there was a cloud hosting, like a database company, that was shifting their database workloads, not AI workloads, database workloads, to GPUs just to speed them up. And so across the stack, for every piece of software, there's always an incentive to push to more, like, I guess, electricity-efficient or just more cost-effective, you know, hardware at some point.
Speaker 9: That's right. Because, like, electricity or power consumption dominates the cost of running things on chips. I guess that's why I brought it up earlier that, you know, we had run some just very initial experiments, and you really could get way better power efficiency by using custom hardware. GPUs are amazing, but they're pretty general purpose. They were developed for graphics processing, so it seems very surprising that they would be the best fit for AI models today. And I don't think they are.
Speaker 2: So if you're a, you know, database company, or, you know, any piece of software that's being transformed by AI, and then in the future you might be transformed by custom silicon, or custom silicon's on the roadmap and it's maybe getting closer: how much does it cost today to develop custom silicon, a custom chip, work with TSMC? And where do you see that going over the next few years?
Speaker 9: I mean, it's extremely expensive to design a chip, both in terms of, like, labor
Speaker 2: Thousands of dollars, tens of thousands of dollars?
Speaker 9: It would be more than that.
Speaker 2: Like, I'd have to save up for this for weeks? It's like hundreds of millions. Right?
Speaker 9: Yeah. Hundreds of millions.
Speaker 3: Okay. That's a lot.
Speaker 2: Yeah. Yeah. I mean, certainly not something that even a unicorn software company would be able to marshal the capital for just at the drop of a hat, especially if there's risk involved. Right?
Speaker 9: Exactly. Also, it's just the timelines here. It's two to three years, potentially, for a complex chip. And, like, you have to build out that in-house expertise. And there's risk. Like, maybe you just won't ever be able to close timing or power, and you just can't build the chip.
Speaker 3: Somewhat random, but I wanted your take on the Reuters reporting on how China built its Manhattan Project to rival the West in AI chips. They allegedly have built some type of EUV prototype. How real is that? I never know; it's hard to know what's real, what's propaganda, or what's actually a scoop.
Speaker 9: I think, although I speak Chinese, I don't have any special insights here into whether that news reporting is true or not. Certainly interesting.
Speaker 3: Yeah. That's fair.
Speaker 2: Well, then take us through the
Speaker 3: It's a shame you're not a venture capitalist, because if you were, you'd give, like, a very good
Speaker 2: You'd be
Speaker 3: like, actually, I know everything about this.
Speaker 2: But speaking of venture capital, you raised some venture capital. Can you take us through the funding history of the company, what the news is, and how the most recent round came together?
Speaker 9: Yeah. I mean, we feel so lucky to be working with a set of investors that we feel are really aligned with us on the mission. So we raised $35 million, led by Sequoia.
Speaker 2: There you go.
Speaker 3: That's great. Yeah. How pleased are you with AI progress this year? You've been in the industry and basically seen it all at this point. Did you think we'd be farther along?
Speaker 2: Do you agree with the conception that we're in an age of research, that there will be sort of a plateauing of the current models, or maybe more smaller models or more fine-tuned models? Like, how are you seeing just the overall model wars playing out?
Speaker 9: I guess, actually, my cofounder, Azalia, had a very interesting report about, like, the state of models and, like, the niche that there is for small models. I would recommend you guys check it out.
Speaker 2: Sounds good.
Speaker 9: I guess, from my perspective, I feel like there's these top frontier labs and then there are these open-source, like, model labs. And I feel like the frontier labs, they're kind of all neck and neck. I would say, like, Gemini has an edge right now because of this kind of co-optimization of TPUs with the Gemini model. So, like, they're kind of pushing this Pareto-optimal curve of, like, capability versus cost. And I think they have an edge there. But to some extent, on the algorithmic side, you know, everyone kinda comes up with the same ideas roughly around the same time. And maybe to some extent, people talk to each other, and there's that part too. Whereas I think that hardware is a real edge here. So I think the labs that have, like, hardware co-optimized with their models are gonna win in the long term. But maybe I'm biased.
Speaker 2: No. I mean... That's fair.
Speaker 3: I mean, otherwise, why build the company?
Speaker 2: Vertically, yeah. No, I think it's a fantastic thesis. The vertical integration story at Google makes a ton of sense. You worked on a team, you saw it play out, and then you're like, maybe other folks wanna do the same thing, I'm gonna build a company. It's like the oldest story for why to start a company. It makes a ton of sense. It seems like a lot of work, so we'll let you get back to it. But thank you so much for coming on the show and hanging out with us and explaining all of this. Super fun.
Speaker 3: We'll we'll talk more next year, I'm sure.
Speaker 2: Have a great rest of your day.
Speaker 9: Well, bye.
Speaker 2: Happy holidays. Bye.