Sara Hooker's Adaption Labs raises $50M to build AI that learns continuously without retraining
Feb 9, 2026 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Sara Hooker
Speaker 1: Yeah. Something that, you know... so anyways. And extremely scalable, obviously. So I'll be interested to track this. I'm assuming every single MrBeast video will have a
Speaker 2: Call to action?
Speaker 1: Call to action
Speaker 2: Probably.
Speaker 1: Very, very shortly.
Speaker 2: It's gonna be good. Well, we have Sara Hooker in the Restream waiting room from Adaption Labs. Welcome to the show, Sara. How are you doing? Good evening.
Speaker 10: Hello. It's lovely to be here. How are you?
Speaker 1: Great to have you. I love the background. It looks like a beautiful day.
Speaker 2: Are you
Speaker 1: in San Francisco?
Speaker 2: Are you outside in San Francisco?
Speaker 10: I'm not. But I'm surrounded by glass, so it gives a good illusion. This is the outside.
Speaker 2: Okay. Okay.
Speaker 10: You see how beautiful it is. This is inward looking, but just high reflection density.
Speaker 2: Amazing. First time on the show, so please introduce yourself and the company.
Speaker 10: Fun. So I did wanna say, as a slight aside before we get started: I was trying to describe to an AI researcher what this show was over the weekend.
Speaker 2: Yes.
Speaker 10: And what was funny was, I was saying it's kind of like the CNN New Year's Eve show
Speaker 1: Oh, sure.
Speaker 2: Like every day. Every day.
Speaker 1: Every day. Yeah. Incredible. Dramatic. Yeah.
Speaker 2: Yeah. We should
Speaker 1: We should go watch all of
Speaker 2: should have a ball that drops from the ceiling.
Speaker 1: And Yeah.
Speaker 2: When it falls, the show starts. Don't...
Speaker 1: You gotta protect that researcher with your life, because they're locked in enough to not have ever heard about us.
Speaker 2: That's a high value person. Signal. That's great.
Speaker 10: And I do think it's fun, because you have the cerebral from Anderson Cooper and you kinda have the dash of drama from the Bravo.
Speaker 2: Anyways, we'd just love to hear it.
Speaker 10: Three months ago, I started Adaption with Sudip, who's my cofounder. So I'm
Speaker 2: Overnight success.
Speaker 10: Oh, thank you. That's lovely. Yeah. We just closed our $50,000,000 round.
Speaker 2: There we go. Ready
Speaker 1: Anyways, continue. Continue.
Speaker 10: That was a lovely deduction. I thought that was very bombastic. But probably more important is the question I'll be working on. Most of my career has been in frontier labs, building the biggest model that we can that's very performant. But this is fundamentally bucking that trend. It's about how we continue to evolve these models in real time. How do we not just build a static model, but have a model that adapts to incoming data in a really efficient way?
Speaker 2: Mhmm. Learning. Continual learning? Is that the buzzword? Is that a good buzzword? Do you like that phrase?
Speaker 10: Continual learning, I do, because it's actually not a new buzzword. So for the first time, we're not introducing a new word: 'foundation models' was introduced by Stanford, and we often see these terms introduced as pretty buzzwords. But continual learning is an old problem. It's just increasingly urgent, because it's now within reach in many ways. Most of the last decade has been, like, you throw compute at pre-training. But now we have expressive enough models that they can interact. And that's fundamental to the question here: how do you interact efficiently with the world? So I don't mind continual learning. I think it serves its purpose. That's good.
Speaker 2: Yeah. How much are you thinking of, like, an entirely new architecture versus what we're seeing with folks building skills in markdown files, compacting chats, larger context windows? There's so many different ways to sort of get continual learning lite these days, and everyone's solving it in different ways. What do you think the most interesting path is?
Speaker 10: Yeah. That's such a fun question, and I'll tell you why: there's almost, like, two crucial questions that continual learning has to solve. The most heavy-handed version of continual learning is you basically have to train again. And that's not at all interesting, I think, to Adaption, mainly because it's a very high barrier to entry, and if we want a model to adapt and interact, we want it to be real time. I would say the lite version you're talking about is probably the other less interesting alternative. It's powerful. It's a lever. But really, you want to do two things. First, we want to have control of the stack and be changing the weights without gradients, which I think you can do in powerful ways, especially if you do it jointly with GPU optimization. The second thing, and this is interesting, is that you really need to care about the way you place compute. With monolithic models now, I think people intuitively understand we throw the same model at every problem, which is a big waste of compute. And 90% of the problems you solve using AI every day are extremely easy. It's the 10% that matter; that's the long tail. But even there, we spend too much compute, because we do these massive rollouts. And we only really have good verification for code and design right now. Why? Because those are users who care enough to give a ton of feedback, and that's where we see it working really well. What's interesting is that the component that's most fascinating for us is: let's care about the interface, and let's create really interesting interaction points that are adaptive to each task. So when we talk about adaptivity, it actually includes the interface, because the type of feedback you should get changes based upon your problem. And that's how you make it efficient. So on that spectrum, from heavy-handed retraining on down, our constraint is on the other end: we want it to be real time, and we wanna use a strong set of levers to get there.
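As a concrete illustration of "changing the weights without gradients," here is a minimal sketch using an evolution-strategies-style perturbation update, where scalar feedback on candidate weight changes replaces backpropagation. This is a stand-in technique chosen purely for illustration; Adaption Labs has not published its method, and every name here is hypothetical.

```python
# Hypothetical sketch: gradient-free weight adaptation via random perturbations
# (evolution-strategies style). Illustrative only -- not Adaption Labs' method.
import numpy as np

rng = np.random.default_rng(0)

def score(weights, x, target):
    """Cheap scalar feedback for a tiny linear 'model' (a stand-in for real
    user feedback, such as thumbs up/down on a response)."""
    pred = x @ weights
    return -np.mean((pred - target) ** 2)

def adapt_step(weights, x, target, sigma=0.05, n_candidates=16):
    """One real-time adaptation step: try small random weight perturbations,
    move toward the feedback-weighted average. No gradients, no training run."""
    noises = rng.normal(0.0, 1.0, size=(n_candidates, *weights.shape))
    scores = np.array([score(weights + sigma * n, x, target) for n in noises])
    advantages = (scores - scores.mean()) / (scores.std() + 1e-8)
    return weights + sigma * (advantages[:, None] * noises).mean(axis=0)

# Usage: a stream of (input, feedback-target) pairs updates weights in place.
w = rng.normal(size=3)
for _ in range(200):
    x = rng.normal(size=(8, 3))
    y = x @ np.array([1.0, -2.0, 0.5])  # unknown "true" behavior to match
    w = adapt_step(w, x, y)
print(np.round(w, 2))  # drifts toward [1.0, -2.0, 0.5] without any gradients
```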
Speaker 2: Mhmm. What's oh, sorry.
Speaker 1: Go for it.
Speaker 2: Yeah. Just what's the biggest, sort of consensus take in AI right now that you disagree with?
Speaker 10: Oh, in AI? I mean, I think data centers in space is pretty bonkers.
Speaker 2: Okay. That's a good one.
Speaker 10: I don't know if that's specific to AI, but it just feels topical, so maybe I'll throw that one out. I do a lot of work on systems as well, so serving really fast is really important at the frontier, and the dynamics are just really tough in space. I think there's two things that make it hard to do something like data centers in space. One is most co-located hardware is pretty much for training; I think that's the only reason you'd care, because otherwise you can distribute. And so inference compute, which is where everything's moving (you know, when we talk about real-time adaptation, a lot of it's inference), you can spread that compute more easily. You can have multiple data centers. So if you care about space, you probably only care about training compute. And the second is, I think people underestimate the amount of failures that happen, and you don't wanna get your training job interrupted.
Speaker 2: Yeah. Let's unpack that more, because it seems like if things are moving towards inference, and inference does not need to be co-located, having inference happen on a single, maybe wafer-scale, system-on-a-chip in space seems more possible. Maybe the training still happens in the co-located data center, but then the inference happens on the space data center. Is that possible, or is there some logical inconsistency there that I'm missing?
Speaker 10: So you can distribute inference, but to be honest, that's pretty easy to do on Earth. Right? Because you have fewer constraints around being co-located. The real shortage of data centers, and, you know, where providers like Google
Speaker 2: Is around training.
Speaker 10: Around training.
Speaker 2: Okay.
Speaker 10: And so the truth is, you can do inference distribution much more easily. But the real issue, frankly, is that GPUs still have failure rates. Roughly 2% of GPUs fail and are considered done every year. You don't try and revive them
Speaker 2: Mhmm.
Speaker 10: from the dead. And that's really your cost.
Speaker 2: Yeah.
Speaker 10: It's how quickly you can replace those and what that looks like.
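For intuition on why failure rates dominate the orbital argument, here is a back-of-envelope sketch using the roughly 2% annualized failure rate mentioned above; the cluster size is a hypothetical round number, not a figure from the conversation.

```python
# Back-of-envelope: expected GPU failures in a large training cluster,
# assuming the ~2% annualized failure rate mentioned above (hypothetical size).
cluster_size = 100_000          # GPUs in a co-located training cluster
annual_failure_rate = 0.02      # ~2% of GPUs die per year and aren't revived

failures_per_year = cluster_size * annual_failure_rate
failures_per_day = failures_per_year / 365

print(f"{failures_per_year:.0f} dead GPUs/year, ~{failures_per_day:.1f}/day")
# -> 2000 dead GPUs/year, ~5.5/day: trivial to swap on Earth, not in orbit.
```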
Speaker 1: Very interesting. What do you think your first customers will look like?
Speaker 10: Oh, for us, we wanna make workloads adaptable all the way from data to interface. And so there's two use cases for Adaption that are both pretty powerful. The first one we're focusing on is customization. So right now, if you're a developer, you've typically tried fine-tuning and you haven't succeeded, because it's too much: you have to bring your data, you have to wait, and then you become a prompt engineer again. And so most rapidly growing companies just have a ton of prompt scaffolding. And for us, frankly, if I think about our measure of progress, it's that we eliminate prompt engineering. Because I think, intuitively, prompt engineering is a desire for control, but it's also an acknowledgment that models don't work for people.
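To make the prompt-scaffolding pattern concrete, here is a minimal sketch contrasting the two paths she describes: feedback that accumulates as in-context rules shipped with every call, versus feedback folded once into persistent model state. All names and the trivial update mechanism are hypothetical illustrations, not Adaption Labs' actual design.

```python
# Hypothetical illustration of the two customization paths described above.
# Today: feedback accumulates as prompt scaffolding that rides on every call.
scaffolding = [
    "Always answer in British English.",
    "Never suggest deprecated APIs.",
    "Format currency as GBP.",
]

def prompt_engineered_call(user_msg: str) -> str:
    # Every accumulated rule is re-sent in-context, on every single request.
    return "\n".join(["SYSTEM RULES:", *scaffolding, f"USER: {user_msg}"])

# The adaptive alternative: the same feedback updates persistent model state
# once, so later calls need no scaffolding. (Sketch only -- the real update
# mechanism is the hard research problem Adaption Labs is working on.)
class AdaptiveModel:
    def __init__(self) -> None:
        self.state: dict[str, bool] = {}  # stand-in for adapted weights

    def absorb_feedback(self, feedback: str) -> None:
        self.state[feedback] = True  # placeholder for a real weight update

    def call(self, user_msg: str) -> str:
        return f"USER: {user_msg}"  # no scaffolding needed at request time
```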
Speaker 2: Well, congratulations on the massive round and all
Speaker 1: the progress. I'm sure you'll be back on this soon.
Speaker 2: Have a great rest of your day and enjoy sunny San Francisco. What a beautiful
Speaker 1: And we're gonna go watch the CNN ball drops from the last, like, hundred years and just
Speaker 2: Game tape.
Speaker 1: Game tape. For sure. Study up. So thank you for the inspiration.
Speaker 10: It's useful. You'll get a sense of when you wanna play. It's very fun banter. Yep. And I think that's the cross section that you both capture, which is kind of fun.