Figma's new Chief Design Officer on AI empowering designers and reacting to Nano Banana Pro
Nov 20, 2025 with Loredana Crisan
Key Points
- Figma's new Chief Design Officer, Loredana Crisan, shipped 20+ features in two months focused on keeping designers in control during AI generation, solving the problem of losing entire assets when trying to fix minor flaws.
- Figma views AI as only valuable if designers can manipulate generations on canvas rather than accept model outputs as final, positioning control as the core difference between creative tool and mere production speedup.
- As generative tools lower barriers to creating products that look polished, genuine craft and user empathy become harder to fake, widening the gap between surface aesthetics and products people actually want to use.
Summary
Loredana Crisan, Figma's new Chief Design Officer, joined two months ago after nearly a decade at Meta, where she led Messenger, Instagram DM, and most recently consumer AI on the product side. She took the role because she became convinced that product development as a process is about to change drastically, and that Figma has the opportunity to build the creative environment that lets people move an idea directly into a finished product.
Shipping pace and the core design problem
Figma has shipped over 20 major features in the two months since Crisan joined. The throughline across them is keeping the designer in control. The specific frustration she is solving for: you generate an asset that is 98% right, then lose the entire image trying to fix the one element that isn't. The work is about making that back-and-forth precise rather than destructive.
Figma Make, the product that turns designs and prompts into working software, sits at the center of this. The goal is to let designers generate, then take those generations back to canvas and manipulate them, rather than getting boxed in by what the model produced.
Reacting to Nano Banana Pro (Google's Gemini 3 Pro Image model)
Crisan tested the model's ability to maintain visual consistency across a multi-step generation: she took a Figma quilt pattern, generated it as an image, converted it into a sweater, then composited it onto a photo of a colleague. The model kept each square of the quilt exact and did not distort the face. That kind of style and detail fidelity across chained steps is what she looks for in image models, and she says it has not been reliably available until now.
Her evaluation framework for any new generative image model is whether it can take an initial scene or style and dependably continue it — telling the second part of the story without drifting. An image model that cannot do that, she argues, is not useful as a creative tool.
Designer sentiment on AI
Designer skepticism toward AI and designer enthusiasm for it are both legitimate responses to the same technology, depending on which axis you look at. If AI is primarily a speed and mass-production tool, it is anti-design. If it becomes something designers can actually control and use as a medium for exploration, the canvas widens. The opinion split in the design community maps almost exactly onto which version of AI people are reacting to — what it produced yesterday versus what it could enable.
Figma's internal position is that good enough is not good enough. If AI just produces the same software faster and at scale, that is a loss, not a gain.
The vibe-coded product problem
The sharpest observation in the conversation is about what happens as generative tools lower the floor on product creation. You can now prompt your way to something that looks like Linear. You cannot prompt your way to something that feels like using Linear. The human element — empathy with the user, dogfooding, iterating on real feedback — is what creates product loyalty, and that has not gotten easier. As polished surfaces become cheap to produce, the gap between surface aesthetics and genuine craft will become more legible to users over time, not less.