Flapping Airplanes raises $180M at $1.5B valuation to train human-level AI with dramatically less data

Jan 28, 2026 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Aidan Smith & Asher Spector


Here we go.

Who do we have? We have Asher.

There we go. And Aidan from Flapping Airplanes. Amazing. In the Restream waiting room. Let's bring them in to the TVPN Ultradome. [music]

Hello.

Welcome.

Welcome back, Aidan. Uh, good to see you. How are you guys doing?

Hey guys, we are doing awesome. It's good to be here.

Fantastic. Welcome back. Uh

big big day.

Big day. Please take us through it. But first,

tell us about the name. Let's all do the flap for Flapping Airplanes. What a name.

Well, so, you know, we're a new AI lab. We're focused on the data efficiency problem. We're trying to train models that can be roughly as intelligent as humans without ingesting half the internet.

Okay?

You know, in order to do that, we want to think in a way that's at least a little bit biologically inspired. We don't want to build a bird, right? Obviously airplanes are fantastic,

but, uh, helping them flap their wings is maybe the right metaphor for what we're doing.

Well, you're exactly right.

That's hilarious. Where does the milk fit in?

Uh, I'm not here to comment on the milk. It's a big part of our culture, but I can't say more than that. I love it. It's probably the most underrated drink out there.

Not enough people are taking advantage of all the benefits,

you know. We got so many milk ads throughout my whole life. The Got Milk campaign was fantastic, and then it sort of just disappeared. They haven't been running ads.

Yeah, I went through a phase where it was borderline a gallon a day.

GOMAD. Yeah, there's a phrase for it. Anyway, [laughter]

anyway, I will be buying dairy securities. I've got a lot of milk

coming to the [laughter]

Unlimited milk. New perk.

That's fantastic. Okay, so take us through the status of the company. How far along are you? How big's the team? What are you thinking in terms of release timing? Are you planning on going heads-down, age-of-research mode, SSI mode, and releasing something when you've hit some major milestone, or do you want to be more iterative with it?

So, okay. We're about two months old. The team is now 11. We're super excited. We've got people we really love who are both brilliant but also just wonderful people. We're really excited about it.

Great.

Um, I think we are sort of in the middle. We're definitely age-of-research mode. I think the goal is not to commercialize. Not because we're not commercial people. My background, even Aidan's background, Ben's background, are all reasonably commercial in some sense, in addition to deep research.

Sure.

Um, it's just that when you get revenue, you have to focus on it. You have to focus on providing quality for customers, and that makes it harder to build, you know, deep technology.

Yeah.

So, you know, our goal is to find the biggest market we can, to solve the most important problem we think we can solve, which is the data efficiency problem, before doing any of that.

At the same time, our approach is probably to be building a little bit more in public. We'll release some research artifacts, at least, that I think will be cool reasonably soon, but, you know, who knows exactly when the runs will finish or how many times they'll crash before they work. Our biggest training run crashed today. So, you know, bad timing, right around launch, for our, uh, maintenance. But

here we go. Uh, [laughter] walk me through why the data efficiency problem is important. It sounds all good: you get a really smart model and you don't have to ingest the whole internet. But everyone can just ingest the whole internet. You can download it, you can scrape it. There are plenty of models that have been trained on it. So, break it down.

So, exactly. I think the goal is not necessarily, in the long term, to not train on the entire internet. I mean, it's research. I don't exactly know. I think the idea is that this is not needed, right? And the fact that it's not needed suggests that we're actually missing something, because for the existing technology that we have, it is necessary.

Yeah.

So, why do I think it's an important problem? To the extent that AI has been hard to integrate into the economy, and we always see these Bloomberg articles that are like, oh, chat and search are working and coding is working, but what else is AI really doing for me? To the extent that's true, I really think it's because models are much less data efficient than humans. If you want to teach a model a new task or put it in a new vertical, it takes thousands of times more effort than it does to just tell a human what to do.

So I think if you can make a model a million times more data efficient, it's a million times easier to put into the economy. I also think there's just tons of cool stuff you can do in really data-constrained regimes if you can learn to learn with less data. For example, whether it's robotics or scientific discovery or even something like trading, which we have to acknowledge is the most valuable next-token prediction problem in the world from a pure economic perspective.

Um, these problems have very limited data, and existing AI systems aren't quite as good at them as they are at other things. I think that learning to learn with less data is just tremendously valuable in all these areas.

Talk to me about fragmentation and steerability. If you achieve the sample efficiency you're planning to, do you envision a world where you're creating some sort of base model that is so sample efficient that, just with a basic prompt or a few examples, it becomes incredibly effective at a specific task, and sort of replaces the heavy-duty training data runs and the RL environments and these massive data collection efforts for some fine-grained task, at a very high level, to five nines of efficiency? Is that what it looks like, or is it more that you will wind up vending a sample-efficient model for specific use cases?

Yeah. So one way we think about this is that the reinforcement learning paradigms of today are actually just shockingly inefficient. You don't really get much generalization across tasks. You teach a model to do one kind of learning and then you teach it the next one. It's kind of like whack-a-mole or something. And we look at this and we think it's kind of crazy. I mean, the first time I really saw RL scale, what it brought me back to was actually good old-fashioned AI, back at the dawn, this primordial age of AI, when people were hand-designing convolutional filters for eyes and noses and things.

Yeah.

And then we were like, wait, just throw data at it, just throw scale at it. What are you doing? And in this really weird way we're kind of looping back onto that, where it's like, oh, just make another environment, bro. Just make another environment.

Just one [laughter] more. Just one more.

The next AI will not just be, you know, environment slop. I mean, I think it is a piece of it. I do think that there's a long tail of tasks in the world, right, and there will always be a place for people to produce custom data. It's just a question of how much operational difficulty it takes. One route is to slog through the operational difficulty, where you constantly incur variable cost. Another route is to do a bunch of fixed-cost investment into trying to make that variable cost lower. The last thing I'll say, to answer your question: I don't really think we know what the end goal looks like entirely. You know, AI is a big space. Human-level intelligence is not the ceiling; it's merely the floor on what is possible. If you can train models with vastly less data, and possibly more compute, in very different ways, what is going to happen? We actually don't know. I think it's actually unlikely that they will uniformly dominate frontier models even in the very best-case scenario for us. They're going to know fewer facts. They're going to be different. They're not going to have memorized all of Harry Potter. That's actually a useful skill in some ways.

But I do think they'll be different and weird, and they'll have interesting capabilities that we'll find really valuable ways to use. So, yeah, I think it's an experiment we're really excited to run.

What?

Yeah. What do you expect the economics to look like in a more sample-efficient environment? It feels like obviously you're not paying for a bunch of reinforcement learning environments and a bunch of data, but it also feels like the training cost might be lower. Is that the correct assumption? And what does inference look like? Does it get cheaper? Is it a smaller model? Can it run locally? Are there any other downstream economic impacts that you think might come from a new architecture?

I mean, there exists this whole smorgasbord of companies right now that are basically doing reinforcement learning for you. Yeah.

Or are taking your big pile of corporate data and making it useful in your model. Yep.

And this is great, but it's actually a huge pain. And both sides of the deal are upset. You know, the companies that are doing it are like, "Please give us more data." And their clients go to them and say, "This is all we have. How can we possibly give you more? And these results are not meeting what we want." And the demand for better data efficiency here is just huge. And we really believe that even winning here will have massive effects on the economy.

I also don't claim to know exactly what the economics are going to look like. You know, for example, is this going to be accomplished with models that are smaller or bigger? I don't know. I actually slightly suspect bigger. I'm not sure it's actually going to make inference cheaper.

Um, I mean, if you look at the brain as a comparison, the brain spends a lot of flops per token relative to what a current model does. So we don't have to use the brain as a model, but it's an interesting data point. So I think it's a little early for us to say anything definitive about the economics, but I'm certainly intellectually interested in these questions. I have been thinking about them for a while.

How are you approaching this? Are you guys going to be spending all your time locked in the office doing your own research, or do you have the opportunity to go out into the world, find companies and organizations that have been disappointed by existing solutions, and say, how do we re-approach this problem? Right? Because you're saying maybe it's not just more data.

I think we're going to start with research. The main thing is, when we work with companies, we want to make sure that we are fully devoted to actually providing genuine value for them, and not just using them as a stepping stone to do interesting research. So I think for now, until we feel like we have some technological edge, we'll be very focused on research. I do think over time, to your point, it is helpful to research to actually be able to go out into the world and see what problems people are facing, and we actually do plan to do that. Just not, like, in January.

Yeah. Uh, talk about fundraising, uses of funds, GPU-poor versus GPU-rich, you know, the AI talent wars. What are the constraints on your progress here?

Well, first I just want to say we're really grateful to our partners. I mean, they've shown a lot of trust and faith in us, and we're going to do everything in our power to reward that faith. It's humbling. It's really exciting. We're really grateful to work with them.

Thanks for the milk.

Yeah. [laughter] It was a no.

Um, uh, so on what the compute is for: I mean, the good thing about doing foundational research is that the stuff we're doing is often so weird that it doesn't need to be done at gigantic scale first, because with very high probability it's going to fail at smaller scale. And if it starts to work at smaller scale, if it's really working, you should see pretty strong signs of success. So you actually need much less compute to get a 10x win off the ground, at least, than you need to get a 10% win.
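The argument above, that a 10x improvement surfaces with far less compute than a 10% improvement, mirrors a standard power-analysis calculation: the number of noisy experiment runs needed to confirm an effect scales with the inverse square of the effect size. A minimal sketch of that intuition (illustrative only; the function, noise level, and numbers are assumptions, not anything from Flapping Airplanes):

```python
import math

def runs_needed(effect, sigma=1.0, z=1.96):
    """Rough number of experiment runs needed to detect a mean
    improvement of `effect` against measurement noise `sigma` at
    ~95% confidence (one-sample z-test; the standard error of the
    mean shrinks as 1/sqrt(n))."""
    return math.ceil((z * sigma / effect) ** 2)

# Against a baseline score of 1.0, a "10x win" is an effect of 9.0
# and a "10% win" is an effect of 0.1.
big_win = runs_needed(effect=9.0)    # detectable almost immediately
small_win = runs_needed(effect=0.1)  # needs hundreds of runs
print(big_win, small_win)  # → 1 385
```

Under these toy assumptions the 10% win needs a few hundred times more runs than the 10x win, which is the sense in which big wins can be validated cheaply at small scale.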

Mhm.

So, you know, that's good. The raise is primarily for compute, to answer your question.

Fantastic. Um, on the talent wars, you know, the way we think about it, and this is very much informed by my experience as a member of this organization and my brother's experience as a co-founder: we're really not caught up in the same talent war as everyone else. We're trying to find the next generation. We're also hiring experienced people, and I think that has resonated with experienced people, but we're trying to reimagine new ways of doing things. I don't really think that having trained a trillion-parameter model before is the thing that's going to make someone successful at this. Obviously, being a good engineer, being smart, being excited and curious about the problem: these are things that are tremendously valuable. And lots of people have that, but I don't know if, you know, five years of experience is really the thing that's most important.

I love it. Well, congratulations. $180 million raise, $1.5 billion valuation. I want to ring the gong for you.

Do it. Super excited for you guys and the whole team. I'm sure you'll be back on very soon. Enjoy the research.

Yeah, have a great rest of your day.

We'll talk to you soon.

Great to see you.

Goodbye.

Up next, we have Alex Dylan from outtake.ai. He's in the Restream waiting room and we'll bring him in.

Let's do it.

The TVPN Ultradome.

Made it. Alex, what's going on? How we