Interview

Flapping Airplanes raises $180M at $1.5B valuation to train human-level AI with dramatically less data

Jan 28, 2026 with Aidan Smith & Asher Spector

Key Points

  • Flapping Airplanes raises $180M at a $1.5B valuation betting that frontier AI models waste data and compute by requiring roughly half the internet to train, and that biologically inspired data-efficiency gains could unlock customization for enterprise verticals starved of proprietary data.
  • The 11-person, two-month-old lab is delaying enterprise customers and revenue to focus on foundational research, reasoning that commercial pressure would divert energy from solving the core efficiency problem.
  • Co-founder Smith suspects data-efficient models may not reduce inference costs and could even be larger, citing the human brain as precedent; the final product shape is uncertain, but he argues the efficiency problem is large enough to matter regardless.

Summary

Flapping Airplanes, a two-month-old AI research lab with 11 people, raised $180M at a $1.5B valuation to solve what co-founders Aidan Smith and Asher Spector identify as the core bottleneck in AI development: data efficiency. The capital will go primarily toward compute.

Frontier models today require ingesting roughly half the internet to reach human-level reasoning. Smith and Spector argue this is not necessary and that the field's reliance on massive data suggests a fundamental gap in how AI systems are designed. They want to train models to human-level capability while being dramatically more data-efficient, using approaches inspired by biology.

Enterprise leverage

Smith frames data efficiency as a scaling problem for enterprise AI. Current models are thousands of times less efficient than humans at learning new tasks. If Flapping Airplanes achieves a million-fold improvement in data efficiency, customizing AI for specific verticals becomes a million times easier. Spector identifies a real market pain: companies with limited proprietary data are caught between AI vendors demanding more training data and internal teams disappointed with the results they get. There is acute demand from teams trying to solve data-constrained problems in robotics, scientific discovery, and trading.

Research over revenue

Smith is explicit about delaying commercialization. The team is not yet working with enterprise customers, despite the founders' commercial backgrounds, because revenue would force a focus on customer delivery and pull energy away from foundational research. Instead, the plan is to build in public, releasing research artifacts soon, while staying focused on the efficiency problem before pursuing market fit.

Inference and model size

Smith hedges on whether training-efficient models will reduce inference costs. Using the human brain as a reference point, he suspects they may not be smaller and could actually be larger, since the brain spends more compute per token than current models. He does not claim to know whether the end result will be smaller models, cheaper inference, or local deployment, only that the problem space is large enough to matter regardless.

Spector criticizes current reinforcement learning practice as inefficient and non-generalizable. Most RL today trains one task at a time, then teaches the next in isolation; he sees the field repeating the task-by-task logic of early deep learning rather than building systems that learn across tasks. The sketch below makes the contrast concrete.
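A minimal toy sketch of the two training-loop shapes, purely illustrative and not drawn from the interview: the five-armed bandit tasks and the tabular epsilon-greedy learner are assumptions of this example, not anything Flapping Airplanes has described.

```python
# Toy illustration of the contrast Spector draws: isolated per-task RL
# versus one learner trained on an interleaved stream of tasks.
import random

N_ARMS = 5

def make_task(seed):
    """A task is a bandit: one reward probability per arm."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(N_ARMS)]

def epsilon_greedy_step(values, counts, task, rng, eps=0.1):
    """Pick an arm, observe a 0/1 reward, update that arm's running mean."""
    if rng.random() < eps:
        arm = rng.randrange(N_ARMS)
    else:
        arm = max(range(N_ARMS), key=lambda a: values[a])
    reward = 1 if rng.random() < task[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]
    return reward

def train_task_by_task(tasks, steps_per_task, rng):
    """The pattern criticized above: a fresh learner per task, trained to
    completion in isolation, so nothing carries over to the next task."""
    total = 0
    for task in tasks:
        values, counts = [0.0] * N_ARMS, [0] * N_ARMS  # restart from scratch
        for _ in range(steps_per_task):
            total += epsilon_greedy_step(values, counts, task, rng)
    return total

def train_across_tasks(tasks, steps_per_task, rng):
    """The alternative shape: one learner, keyed by task, trained on an
    interleaved stream of all tasks. A tabular learner gets no transfer,
    so this only shows the loop structure a cross-task system would use."""
    values = {i: [0.0] * N_ARMS for i in range(len(tasks))}
    counts = {i: [0] * N_ARMS for i in range(len(tasks))}
    total = 0
    for _ in range(steps_per_task):
        for i, task in enumerate(tasks):  # interleaved, not sequential
            total += epsilon_greedy_step(values[i], counts[i], task, rng)
    return total

if __name__ == "__main__":
    tasks = [make_task(seed) for seed in range(3)]
    rng = random.Random(0)
    print("task-by-task reward:", train_task_by_task(tasks, 500, rng))
    print("across-tasks reward:", train_across_tasks(tasks, 500, rng))
```

With a tabular learner the interleaved loop gains nothing over the isolated one; the difference is only structural, and the argument in the interview is that a system able to share what it learns across tasks is what current practice fails to build.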

Talent

Smith notes that the lab sidesteps the talent war larger labs wage over engineers who have already trained trillion-parameter models. Instead, it is hiring a mix of experienced researchers and emerging talent, betting that curiosity and problem-solving ability matter more than experience at scale.