Fal raises $140M Series D led by Sequoia as image editing surpasses AI video in revenue

Dec 9, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Gorkem Yurtseven

He's keeping... what do you need to do? I don't even...

I'm glad he's keeping all of our data.

Yeah. Yeah. It seems extremely valuable.

He's that guy.

Well, we have Gorkem from Fal in the Restream waiting room with some massive news. He's back on the show. You know him. You love him. He makes all the things possible.

Every time you're on, I'm like, he's going to be back on.

He's going to be back on soon.

Merry Christmas, you guys. Merry Christmas.

Thank you so much. Good to be back in the temple of technology. The capital of capital. There was one more... the fortress of finance.

Fortress of finance. [clears throat] There you go. [laughter]

Thank you so much for coming on the show. Uh give us the news. What happened?

Yeah, today we announced our $140 million Series D, led by Sequoia with meaningful participation from Kleiner Perkins and Nvidia. [laughter]

We had an incredible year. We have some of the biggest advertisers, retail platforms, design and productivity apps, and movie studios generating images and videos on the platform, and we 8x'd our revenue over the year. This is our third fundraise of the year. We announced $5B earlier this year, at the beginning, and then...

You got one more. You're going for the four-peat. One a quarter. Come on. Come on. We've got two weeks. [laughter]

You got two weeks.

A remarkable couple of weeks. What's been the biggest driver of revenue growth? Has it been uptime, being a one-stop shop? What messaging has really resonated the most and driven the most growth for you this year?

Yeah. So at the beginning of this year, we thought we were going to ride the AI video wave for the whole year.

That was accurate. It was an incredible year for AI video. But what was surprising to us is actually the rise of image editing. Image models existed a couple years ago; that's how we started our business. But for image editing, one of the first good models came out around May, and now it's a bigger part of our business than AI video. That was surprising to us. We thought we were going to ride a big wave, but another, even bigger wave collided with it.

Is that surprising, though? It's very commonplace now to one-shot a great image output, and it's so much harder to one-shot, or even get, a great video output. You can get a decent video output. So I feel like it makes sense. Everybody's making images all the time. When you think about the workplace, there are just so many different use cases, even for consumers. So I think it makes sense, and it's exciting because the image models feel like they're so good now. Everybody's so used to getting faked out by an image now, because you can't tell the difference. With video you can still usually tell, but it feels like maybe a year from now it'll be the same situation where it's like, I just don't know what's real. I don't know what's fake. It's all confusing.

100%. Video models are still very hard to work with. Some really talented AI creative people create great content with them, but it hasn't really reached the mainstream. I would say it takes even more effort to create a high-quality ad with video than to actually shoot it yourself. But what happened with image models is going to happen with video: the user experience of video models is going to get much, much better, and you'll have even more mainstream adoption of these models. I'm pretty sure of this. And that's what makes me really excited, because we've come so far with a lot of image models, a lot of image editing, and just a glimpse of video models. This is all going to get better and better, and it's going to reach complete mainstream. All the studios, all the retailers are going to make use of this technology.

How... oh, sorry.

Yeah. Just, what are you seeing? How are you thinking about 2026 from a model progress standpoint? Nano Banana was obviously a massive leap. I'm expecting we'll see even more activity in 2026, but I'm sure you have somewhat of a preview, given your guys' position in the market.

Yeah, exactly. So usually one of the frontier labs pushes the boundaries. In this case it was Nano Banana, twice this year: first their original release, and now this very recent Nano Banana Pro release. But very quickly, either the open-source community or the labs from China catch up. There are a couple reasons for it, to be honest. Once they see there's demand for it, once people know exactly what to build for, it's usually easier for them to get motivated and catch up. But also some of the research tricks leak, and people use them to train these models. The model market for generative media is a lot more fragmented than LLMs. There's so much choice; there are so many models that are better at different things that the best model keeps changing. Even Nano Banana was dethroned a couple of times throughout the year. Right now it's considered the best image editing model, but I don't know how long that's going to last.

What are you excited for in terms of capabilities for next year? Do you have specific benchmarks that you're tracking, or even functionality? I have one that's the Where's Waldo test, where I ask it to generate a full Where's Waldo scene, and even Nano Banana Pro still can't quite do it. It'll either put Waldo right at the center, or it'll make two Waldos. And those images in the children's books are really complicated. It's not just a portrait, and there's maybe not enough training data on the internet. But that's one test I've been tracking. Do you have your own internal benchmark, or whatever you think is the next thing it can't do right now?

Yeah, I have some favorite prompts that I try with every single model. But I rely on our team. We have great creative people on the team; they spend all their waking hours working on these models, and they're usually the ones coming up with very creative ways to utilize them. One thing that changed with Nano Banana is you can actually feel the model has more world knowledge than other models, and that opens up other doors. I don't know if you've seen it, but people are now adding web search and creating newspaper articles on the fly, things like that, which are very creative uses of these models. And on the video side, people really want more controllability. They want character consistency; they want scene-to-scene consistency. And some of the releases that happened recently, [clears throat] Kling, for example, which is one of the better video models out there, they've announced some editing capabilities which are pretty remarkable. All of that is going to get easier to use, and once people have more consistency in the scenes they're creating with video models, I think that's going to make a big difference.

Yeah, I always see those videos of the Lil Yachty walkout. That's character swapping, and it's always really obvious what's going on. It's rough around the edges, but it's still hilarious because it's a great meme. And I definitely would predict 2026 is the year that the Lil Yachty walkout just looks 100% real. Your next fundraise announcement.

I can't wait. I can't wait. Yeah, we need to do that for you. We need to get your body scanned a little bit.

We've never asked this question, but you might know the answer. Do you know why it's called Nano Banana Pro? Do you have any idea what the origin of the name is?

So, they had a code name, Nano Banana. When they put these models onto the benchmarking websites, they usually add a code name: Blueberry was one code name, Red Panda was another, and this was Nano Banana. People recognized the model like that, and they kept it. That's the story I know. I don't know if there's another version of the story.

But I guess the other question is, obviously it's a code name, but where did the code name come from? Google has a long history of using food names, although so does OpenAI with Strawberry, and all the Android iterations were ice creams and desserts for a long time. But the question is, what's the "nano" doing there? Because I want the full banana. [laughter]

Don't give me the nano banana. Give me the giga banana. I want the giga [laughter] banana. The biggest exa banana. The mega banana. I want the biggest banana model. Don't give me the shrunken-down nano one. I want the giga banana version.

Yeah, I think it's using the smaller Gemini model for the reasoning. That's why it's nano. [laughter] But it's funny to have fun with. Anyway, thank you so much for coming on the show. Congratulations on the massive news, and thank you for supporting us all year long.

And if you 8x again next year, you've got to 8x the number of fundraises too. That'd put you at, like, two fundraises a month. Two fundraises a month! 24 for 2026. We'll see.

That'd be great.

But congratulations to the whole team. We're very excited for you guys.

Have a good one.