Krea raises Series B with 20 million users — co-founder Víctor Perez on AI image generation adoption and the OpenAI image model breakthrough
Apr 8, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Víctor Perez
Great. Great to meet you guys. Fantastic. Is the office going to change with this new fundraise? I've got to ask: are you guys going to stay posted in the living room? Do we have you, Víctor? I think we might have lost you. The living room seems like we're having some technical issues. Okay.
Well, we can see and hear you now. Yeah. Okay. Let me move farther away from the Wi-Fi router. I mean, that is the issue with working at home. Complicated Wi-Fi. You need the enterprise solution soon. Now that you have the big Series B done, that'll be solved very soon. Yeah. Oh [ __ ], they are doing another meeting.
So, I'm going to steal Diego's room. This is great. I love it. Hey, we're getting the whole tour. By the way, you guys want to see the office? Yeah, please. Let's do a tour. Anything that you can show us. You can turn around. Okay, there you go. No API keys, hopefully. Music going on. That's fantastic.
Real life. Wow. What's up, guys? How are you doing? Congratulations on the milestone. Looking great. Wow. You guys were not kidding about the living room; you've really built it out. I love it. That's amazing. It's looking good. Nice. Why don't you introduce yourself?
John was going to ask you that, and then I cut him off. No, you're all good. Yeah. Sorry about that. No worries. Oh, here we go. So, on my background, the TL;DR is that growing up I was very interested in creative things of all kinds.
I mainly had a music band, and I did everything from playing multiple instruments in the little studio that I created in my house to producing music, mixing, and mastering, learning about all of these processes around music production.
But through my music band, I also got super interested in photography and in doing different kinds of content for that band. That way I explored many different things: graphic design, 3D, graffiti art. I also had a big passion for that. And at some point, that was in 2015,
I had just finished high school, and I wasn't sure what to do after that. I had two options in front of me.
One of them was to go and do classical guitar studies at the guitar conservatory of Barcelona, and the other was doing something around computer science or physics.
I really like math, and I guess what I like about math is the challenges it poses. I love challenges, and math put a lot of challenges in front of me.
But in the end I found a middle ground in a degree called audiovisual systems engineering. It was the kind of degree where they showed you how a microphone works, how MP3 encodes audio, how MP4 encodes video, and so on. And that's where I met Diego, my co-founder.
That was about 10 years ago. He ended up in that same degree following a similar story. In his case, he came from having a lot of interest in film and in 3D, but he also loved programming and engineering. So we both ended up in that degree.
And in the second or third year, I got introduced to... I mean, first of all, I loved coding. Right after getting into the degree, I loved coding and found it extremely creative. Later on I found out about AI, and I was mind-blown by deep learning. Just the fact that you can have these neural networks learning by themselves from data, and doing such complex tasks, was very interesting to me. And then I discovered GANs, which were very early models for image generation.
That's when I fell super deep into the rabbit hole, and I ended up reading a lot of papers and doing a ton of implementations on my own from all these papers that were out there, back when everything was open source. The good old open source days.
Do you have a first question? No, go for it. 20 million users is absolutely massive, congratulations. Where are you seeing those folks come from?
Is it consumers just having fun, prosumers who are maybe doing a little contracting work and monetizing their creativity on social networks, or are you already in the enterprise? Or all three? All three. I think that up until recently there were two very well-defined blocks of users. One of them was the consumer type.
It was people for whom this technology provided a zero-to-one when it comes to creative freedom, to enabling them to create.
It's people who didn't necessarily come from a creative background, but who got a lot of joy out of expressing their creative ideas using this technology, and they were paying for the subscription almost the same way you would pay for a video game or for a camera. Mhm.
Then we had the professional. The professional was the user who did have a creative background and who was using our platform to speed up some of their processes. Mhm.
These speed-ups varied depending on the industry. You would see architecture studios coming to Krea with very low resolution renders and using our enhancer to get those renders up to 4K resolution with very crisp textures,
or you would see game designers coming to our real-time tool, putting in a bunch of ideas around characters and creating prototypes for the characters they were designing. So, can you talk about just general adoption?
So during the Studio Ghibli moment, which is still, you know, top of mind, we saw a lot of people who still weren't aware; they had no idea how these images were being created. John and I think some people thought it was Snapchat filters or something like that.
Can you talk about consumer awareness and adoption broadly? Are you still finding people every single day who are completely new to these new image generation models? What do you think broad consumer awareness is today?
I mean, I just came back two days ago from a short trip to New York, and I feel like that trip made me realize how deep in the bubble we are here in SF. I take for granted that people know that nowadays you can generate images with artificial intelligence, and that's not the case.
I wouldn't even know what percentage of reach we have had so far, but it's definitely very, very small. This technology is still nascent. People like us are trying to make it intuitive and usable, so that really anybody can grab a phone, type a URL, and create an image very easily. But I think people still need to know that this is even a possibility. They just don't think that some of the problems they have, when it comes to marketing or product design, can be solved today by artificial intelligence.
So I don't know if I'm the best person to have a good sense of current adoption, because of how deep we are in the SF bubble. Yeah, just from the experience I had in New York.
I don't know. I have this fun story: I was in an Uber, and the driver saw that I was talking on the phone in Spanish. She was from Puerto Rico.
So she started talking with me, asking where I was from and what I was doing, and this woman was selling a sort of line of beauty products on Instagram. Mhm.
And she started asking me, "Oh, so can I use your tool for doing this product photography? Can I use it for all of these things?" And as she was talking, I was like, "Yes, you can do it, but you need to go through a process."
It's not some magic thing where you go there and the AI does everything for you. You need to go and train a model with your product. After the model is trained, you go to the image generator. There you create all the assets that you want.
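That train-once-then-generate workflow could be sketched roughly as below. This is an illustrative stand-in only: `ProductModelClient` and its methods are hypothetical names, not Krea's real API.

```python
# Hypothetical sketch of the workflow described above: fine-tune a
# model on your product photos once, then reuse it to batch-generate
# as many assets as you want. Illustrative names, not a real API.
class ProductModelClient:
    def train_model(self, product_photos):
        """Fine-tune on the user's product images; return a model id."""
        if not product_photos:
            raise ValueError("need at least one product photo")
        return f"model-{len(product_photos)}"

    def generate(self, model_id, prompt, n=4):
        """Generate n assets from the trained model and a prompt."""
        return [f"{model_id}/{prompt}/{i}" for i in range(n)]

client = ProductModelClient()
model_id = client.train_model(["front.jpg", "side.jpg", "detail.jpg"])
assets = client.generate(model_id, "serum bottle on marble, soft light", n=4)
# The expensive step (training) happens once; generation is then cheap
# and repeatable, which is the "extremely optimized workflow" point.
```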
And once you have this workflow in mind, once you have it in place, you can generate as many assets as you want, and your workflow is going to be extremely optimized. Yeah. How do you think about prompt engineering long term?
I remember that a year and a half ago, maybe a year ago, everybody said prompt engineer was going to be this new role at every company. And now it feels like it's getting easy enough to prompt a lot of these tools.
I'm sure with Krea, maybe it's a skill set but not necessarily a job. But I'm curious what you think. Do you think prompt engineering will still matter in five or ten years, or will it just be super intuitive?
I mean, prompt engineering, at the end of the day, is just being able to communicate your ideas clearly to this technology. We have this AI model that can understand language and can do things, and prompting is just the way you tell this knowledge that we have encapsulated how to do things, or what exactly to do.
So at the end of the day it's just managing that. And I do think this feels like a new way of doing software. I feel like in the future, a very big percentage of the software we see out there will have been created through prompt engineering, through steering AI models towards whatever you want to accomplish.
And I see this in the visual space. I see us building Krea in the future more and more through instructions. I see our users working with our platform more and more through instructions, rather than just typing a prompt and getting an image.
I think this new model from OpenAI kind of shows that. Yes. Speaking of the new OpenAI model, it seems like they've evolved the actual underlying algorithm. It's not purely diffusion-based. Are there new buzzwords or keywords? Have you reverse engineered any of how they're doing it?
Because it seems like there are a number of steps: they're actually transforming the prompt, there's some reasoning in there, and the image loads top to bottom, which we haven't seen before. Midjourney kind of diffuses everything from blurry to crisp, the whole image at a time.
It seems like they're doing some sort of block-by-block or line-by-line rendering. What can you tell us about how that system actually works? I mean, I have some intuitions, but I don't have high confidence in how it works.
It seems like there's some autoregressiveness going on, and we have already seen similar things with Grok image generation.
But I feel like what's really game-changing about this new image model is, very similar to what we were talking about before, that this is an image model that is able to reason and to understand instructions. It's able to understand: here's a picture of my dog, turn it into Studio Ghibli.
Yeah. And this is something new. This is something that previous diffusion models were not good at.
Diffusion models are good at taking a text and generating an image that kind of represents that text, but it's very hard to have them reason, to have them think about what you want to do, what instruction the user wants, and how to accomplish that goal. Yeah. Yeah.
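The rendering contrast the hosts noticed can be illustrated with a toy sketch, making no claim about OpenAI's actual architecture: a diffusion-style loop refines the whole canvas together (every intermediate frame is a complete but blurry image), while an autoregressive-style loop commits pixels in raster order, which is why such an image would appear top to bottom.

```python
import random

def diffusion_style(size=8, steps=4, seed=0):
    """Whole-image refinement: every step yields a complete frame
    that is a bit less noisy than the last (blurry -> crisp)."""
    rng = random.Random(seed)
    target = [rng.random() for _ in range(size)]   # the "clean" image
    image = [rng.random() for _ in range(size)]    # start from pure noise
    frames = []
    for step in range(1, steps + 1):
        blend = step / steps                       # toy denoising schedule
        image = [(1 - blend) * x + blend * t for x, t in zip(image, target)]
        frames.append(list(image))
    return frames                                  # last frame is fully denoised

def autoregressive_style(rows=4, cols=4):
    """Token-by-token commitment in raster order: at any point the top
    of the image is finished while the bottom is still empty."""
    image = [[None] * cols for _ in range(rows)]
    snapshots = []
    for r in range(rows):
        for c in range(cols):
            image[r][c] = 1.0                      # "predict" the next token
        snapshots.append([row[:] for row in image])
    return snapshots

snaps = autoregressive_style()
# After the first snapshot, row 0 is committed while row 1 is still empty:
assert all(v is not None for v in snaps[0][0])
assert all(v is None for v in snaps[0][1])
```

The point of the sketch is only the observable difference: diffusion shows a whole fuzzy image sharpening everywhere at once, while autoregressive generation finishes the top before the bottom exists at all.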
It seemed like it was everything that style transfer should have been, plus the latest and greatest in diffusion models. They really packaged that up very well, and I think that's why it broke through. Anyway, congratulations on the round.
Thanks so much for stopping by, and thanks for the unexpected office tour. That was really fun. We'll let you get back to work; there's so much to do. Congrats to the whole team, and we'll talk to you soon. Sounds great. Thank you so much for having me. Thanks a lot. Talk to you soon. Bye.
Very interesting. Nice. We've got Leif coming on from Public. I'm curious if he's been sleeping at all. It's a wild time in the market. He has some interesting data on what's happening on Public, because that's where people go to trade: multi-asset investing, industry-leading yields. They're trusted by millions, folks.
You've heard us do the ad reads before, but now we have Leif in the studio breaking it down for us. And we will bring him in right now. How are you doing, Leif? Welcome to the stream. Boom. What's going on? Great to finally have you. Nice. Took some. It's