Cinder raises $41M to help platforms fight AI-powered abuse using the team's ex-Meta threat intelligence expertise
May 12, 2026 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Glen Wise
Speaker 1: Up next, we have Glen Wise from Cinder. He's the co-founder and CEO. Good nominative determinism. We got a wise man in the waiting room. And here he is. How are you doing, Glen?
Speaker 8: What's going on, guys? Good. Thanks for having me on the show.
Speaker 1: Thank you so much for stopping by on such a big day. Introduce yourself. Introduce the company. Tell us the news.
Speaker 8: Yeah. I'm Glen. I'm the founder and CEO of Cinder. Mhmm. We just raised $41 million. Woah.
Speaker 2: Woah. Woah.
Speaker 1: Woah. Woah. Woah. Guy. Woah. Good news.
Speaker 2: Right to it.
Speaker 1: Yeah. Radical Ventures. I got a buddy, Rob Toews, over there.
Speaker 8: Oh, yeah. They're great guys over there. Amazing. Really great team. But our focus is helping companies stop all of the AI-powered abuse that's happening across the Internet today.
Speaker 1: Okay. Yeah. What does that mean? Because there's, like, cyberbullying and, like, mean comments on YouTube videos or livestreams, and then there's, like, spam and hacking and cyber attacks and all sorts of crazy stuff.
Speaker 8: Yeah. And honestly, it means the entire gamut of threats. Okay. And that's really, I think, what gets missed in this conversation: there are small incidents, such as bullying, and there are large incidents, such as state-sponsored espionage.
Speaker 4: Sure.
Speaker 8: But companies need to respond to all of these, and we've never before seen the scale of threats that we have today. Yeah. And this was a problem before gen AI, but obviously AI has made this exponentially worse.
Speaker 1: Sure. So it says customers include OpenAI, Spotify, Depop, Black Forest Labs. I'd love to know how much of this is happening internal to a particular product. Like, someone is effectively doing Internet graffiti in the Spotify comments. We get comments on our Spotify sometimes. They're usually pretty positive, but I could imagine a state-sponsored actor wanting to take us down and writing a bunch of mean comments. That would be something where you would go to Spotify and say, we can solve your problem internal to your product. Versus there's a misinformation campaign happening on X or Instagram about Spotify, and you're notifying them. What's the trade-off there?
Speaker 8: Yeah. That's a really good distinction. So we sell directly to the customers themselves.
Speaker 1: Okay.
Speaker 8: So they can combat if there's some horribly racist comment
Speaker 1: Sure.
Speaker 8: Right, that shouldn't be on there. Okay. And so they can combat that, taking it off. How the platform actually works is that our customers set what policies they care about. Mhmm. So they say, hey, we really care about some of the really big ones now. AI-generated NCII, non-consensual intimate imagery, is, like, AI-generated deepfake porn. Right? That's a huge one. Yeah. A huge issue that a bunch of people see. Obviously, anything child-safety related, any egregious hate speech, and things like that. They are able to set these policies on our platform, and then we use AI to detect and mitigate it at whatever scale the platform is operating at.
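[Editor's note: the workflow described here, customers define policies and the platform classifies content against them, can be sketched roughly as below. All names and the keyword-based decision step are illustrative stand-ins, not Cinder's actual API; a real system would call a trained model where the keyword match is.]

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A customer-defined content policy (illustrative schema, not Cinder's)."""
    name: str
    keywords: list[str] = field(default_factory=list)

def moderate(text: str, policies: list[Policy]) -> list[str]:
    """Return the names of policies the text violates.

    A production system would invoke a classifier or fine-tuned model here;
    substring matching just stands in for that decision step.
    """
    lowered = text.lower()
    return [p.name for p in policies if any(k in lowered for k in p.keywords)]

# Each platform sets its own policies, at whatever granularity it cares about.
policies = [
    Policy("hate_speech", keywords=["egregious slur"]),
    Policy("threat_of_violence", keywords=["i will hurt"]),
]

print(moderate("normal friendly comment", policies))  # []
print(moderate("I will hurt you", policies))          # ['threat_of_violence']
```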
Speaker 1: Yeah. What has it been like ramping up to some of these bigger customers? I imagine that if you get the fire hose of, like, Spotify content, that's a lot. Was that a challenge? Like, how did you solve that? Obviously, there's a lot of off-the-shelf tools, but, like, how big is the company at this point? How are you able to take on those clients?
Speaker 8: Yeah. I mean, we were really lucky in the sense that the whole founding team came from Meta. And so, like, we saw
Speaker 2: Saw things too.
Speaker 8: Yeah. We and prior to that, we were at the US government.
Speaker 1: Okay.
Speaker 8: So we've seen kind of what harm at scale looks like. Yeah. And obviously, that's kind of an infrastructure challenge that we deal with. Inevitably, it is, one, actually being able to process data as quickly as possible. That's a big one. So how can we make a decision as fast as possible on whether or not something violates your policies? There's a bunch of techniques there, as well as just being able to handle that scale. Right? We have some customers that have a really large Gen Z audience that all log in at the same time. So that adds some really great distributed-computing challenges that the team is working on. Yeah. No. That's all part of the fun of building this.
Speaker 2: Is there something about this problem that makes companies want to outsource this function? Because I'm assuming at Meta, you were working on internal tooling and platforms at a certain scale. Maybe they end up wanting to do this themselves, but is there a certain part of this problem that makes it particularly suited for, you know, having a partner like Cinder?
Speaker 8: Yeah. I think there's a few. I think primarily it's taking the human expertise and really understanding the policy and understanding how to mitigate violations of that policy. Right? Like, every customer of ours can't be an expert in every single issue that they might face. And so that right there means that they need to bring people on. And, yeah, at Facebook, I was on the threat intelligence team there. They have an amazing threat intel capability, but they're also Facebook, so they can spend on building out that threat intel capability. Not everyone can, or should have to do that. So that's a big piece. Another one that we've been seeing more and more often, actually, is the third-party credibility of going with a company that's also truly a set of experts. Right? And so you're not grading your own homework when you're trying to defend your platform. You have someone else that can bring that expertise in and do that for you.
Speaker 1: Totally. How are you thinking about, like, the actual model choice or just scaling? Because I imagine that there's so many of these, like, TOS violations. You know, you could just ask, like, is this a TOS violation or a threat of violence or something that, you know, doesn't conform, to any of the frontier models, and you're gonna get a very accurate result. But that's gonna be slow and probably expensive if you're running it over every single Spotify comment or every single upload to the platform. And then you can go open-source, cheaper models, faster models. You can also try and run models on ASICs or more advanced chips. Like, Cerebras is in the news this week because of the IPO, but there's obviously other providers. But how have you thought about the infrastructure trade-offs as you scale the service?
Speaker 8: The thing that is most important for our customers, right, I talked about policy. Yeah. It's really evals and ground-truth data.
Speaker 1: Okay.
Speaker 8: And so once our customers have, within our platform, set what true looks like, right, what an actual violation looks like, because as you can imagine, these violations are incredibly nuanced.
Speaker 2: Yeah.
Speaker 8: And it really depends on what the platform is, where they're based, how old their users are. Yep. A really classic example, right, is, like, a gaming company could have two different games: one that is a first-person shooter for adults, the other one a game for children.
Speaker 1: Sure.
Speaker 8: Obviously, they're gonna define a threat of violence very differently.
Speaker 1: Oh, sure. Even within the same company. Right.
Speaker 8: And so you need to be able to set ground truth and really set these evals. So then, from there, you run evals on these models. And it kind of depends on: are you prioritizing, you know, cost or latency or accuracy? Yeah. Those are basically the three trade-offs that we see. Yeah. And you can get really great results now, especially around fine-tuning some of these open-source models. Yeah. But what's funny is that, obviously, these models themselves are trained to not produce this content, so you do start hitting limitations with these foundation models. Yeah. And so you can use techniques like model abliteration, where you actually remove the guardrails and host the models yourself.
Speaker 1: Sure.
Speaker 8: Or you can do kind of traditional classification, again, depending on what the policy area is.
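[Editor's note: the eval loop described above, running candidate models against customer-labeled ground truth and weighing accuracy against latency (and cost), can be sketched like this. The classifier, labels, and examples are toy stand-ins invented for illustration, not Cinder's data or models.]

```python
import time

def evaluate(classify, ground_truth):
    """Run a classifier over labeled examples; report accuracy and mean latency.

    `classify` maps text to a label; `ground_truth` is (text, label) pairs.
    Cost per call would be the third axis alongside accuracy and latency.
    """
    correct = 0
    start = time.perf_counter()
    for text, label in ground_truth:
        if classify(text) == label:
            correct += 1
    elapsed = time.perf_counter() - start
    return {
        "accuracy": correct / len(ground_truth),
        "mean_latency_s": elapsed / len(ground_truth),
    }

# Toy stand-in classifier: flags anything containing "attack" as violating.
toy_classifier = lambda text: "violating" if "attack" in text.lower() else "ok"

ground_truth = [
    ("I will attack you", "violating"),
    ("great song!", "ok"),
    ("the attack on titan finale", "ok"),  # nuance the toy model gets wrong
]

print(evaluate(toy_classifier, ground_truth))  # accuracy: 2/3
```

The same harness run over a fast cheap model and a slow accurate one makes the cost/latency/accuracy trade-off concrete for a given policy.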
Speaker 1: Yeah.
Speaker 8: But that's why you need to
Speaker 2: I've heard enough. Give them a billion-dollar cluster.
Speaker 1: Well, speaking of clusters, has there been any demand for on-prem? We were talking to David Baszucki from Roblox about this. He has a younger audience, obviously, on the platform, and there's been a lot of pushback about the communication that happens between adults and children on the platform. And I was saying, like, although this is a huge problem that obviously needs a lot of attention, the technology is getting better and better, to the point where you should be able to screen every single chat message that flows through the Roblox platform in real time with a very, very advanced LLM. But Roblox runs their own infrastructure. Is there an API call? I could imagine a situation where Roblox wants to run this, like, deeper in their stack. Is that something that's on your roadmap, where you've heard customer pull, or do you think it will never be an issue?
Speaker 8: Yeah. I would love an intro. I really appreciate
Speaker 1: that. Okay.
Speaker 8: Yeah. I think for us, because of the proliferation of these foundation models, our customers have gotten really comfortable, you know, already utilizing OpenAI.
Speaker 1: Because they're already pulling stuff from AWS or GCP or Azure, and they already have, a fabric they can pull things into.
Speaker 8: Exactly. And because of our sort of security-paranoia background, we've invested a lot there. We even run a full single-tenancy architecture that our customers love because, you know, they know all of their data is theirs. And so customers have gotten more and more comfortable, but I could see the pendulum swinging, especially with open-source models. I could see it going the other way, where they wanna self-host Cinder, in which case, you know, it's built in a way where they can.
Speaker 1: Yeah. That'd be really cool. Last question. Get me up to speed on your work with Black Forest Labs because I believe that that's a more unique relationship. They do benchmarking. It seems like they've shared some data, but can you get me up to speed on on your work there?
Speaker 8: Yeah. We've been doing some really exciting red-teaming work with these models. You know, I think it's a really important job that these models are able to be tested before they're released, obviously. Right? Because once they're released into the wild, then they are subject to the rest of the world. And so that's a lot of the work that we do as well, right: allow companies to set up those guardrails, but also allow them to not only test the guardrails but test the models themselves. And that leads to just dramatically safer outcomes when they release these models out into the world. And I think that we're gonna see standards coming out of the UK, the EU, the US on expectations around red teaming, especially as these models become more and more powerful and the attacks become more sophisticated.
Speaker 1: Yeah. Yeah. We're already seeing that with the, who was it? It wasn't Meta, but it was Google, Microsoft, and maybe xAI joined OpenAI and Anthropic in, you know, delivering models to the government early to test. This could be a logical place to test as well. But congratulations on the progress. Thank you so much for coming.
Speaker 2: Fantastic to meet you. Thank you for doing this work. And the chat is concerned by your coworker's posture in the back there in the blue.
Speaker 8: We're gonna get some ergonomic experts
Speaker 2: in here. But Give him give him our best.
Speaker 1: Yeah. Well, I gotta I
Speaker 8: gotta tighten up the shit.
Speaker 2: No. No. It's good. He's locked in. That's actually a high-performance posture.
Speaker 1: That is. That is.
Speaker 2: You know,
Speaker 1: it's the window opening in fresh air. C o two levels, probably not
Speaker 4: a problem.
Speaker 2: Doing some critical work.
Speaker 1: Anyway, thank you so much for joining the show.
Speaker 2: Great to meet you, Glenn.
Speaker 8: It's so nice
Speaker 1: to meet you. We'll talk to
Speaker 7: you soon.
Speaker 1: Bunch of updates really quickly. First, the chat has given a name to my next project: the Diesel Refinery Company of Malibu. Get ready, Jordy. We're gonna be pumping diesel fumes all over Malibu. It's gonna be a boom time. Also, we got a Swatch.
Speaker 2: Yep.
Speaker 4: Yeah. Pull it up.
Speaker 1: We talked about the Swatch AP collab, the Swatch Audemars Piguet Royal Pop collaboration. Well, the final design, or something close to the final design, has finally hit the timeline, and we can pull up this image. Because it's not a wristwatch. It's more of a pocket watch. It goes on a necklace, I guess, and it doesn't
Speaker 2: More of a come with a charm. Like, I see people tying this to like
Speaker 1: Yeah. I think this tells me that all of the fears that this would be confused with a stainless steel Royal Oak were probably misplaced. What do you think, Tyler?
Speaker 3: Yeah. Well, okay. So that's based off, like, the Swatch Pop. Right? Yes. Swatch, a long time ago, made the Swatch Pop, which is, like, this little watch that you can pop in and out, yes, into a necklace, into, like, a little charm, or onto, like, an actual bracelet.
Speaker 1: A stainless
Speaker 3: steel thing. Like we haven't seen anyone actually buy this yet. It's just like promo videos. There may actually still be bracelets that you can put it into a watch form.
Speaker 1: That's true.
Speaker 3: But so far everyone is up Platinum. In
Speaker 1: Platinum. Platinum wrist.
Speaker 2: Yeah. I'm trying to find pictures Bracelet you of these pop into. Watch.
Speaker 1: Maybe? I don't know. Well, there's a release video that we do think is real. I think we verified this with Quaid over at Bezel. And it's showing how they're building it, how they're putting it together, some cinematic music. And these are mass-manufactured. They're showing them being mass-manufactured, testing them, and they come in all sorts of colors. And it'll be interesting to see what happens. This does not seem like an "it's so over" moment for the AP Royal Oak market. It seems like
Speaker 2: for the bears.
Speaker 1: Yeah. It's so over for the bears. Everyone that sold yesterday needs to buy them back, potentially.
Speaker 2: Yeah. Interesting. I think it's working, to the degree that I see that and I'm like, I kinda want one of those.
Speaker 1: Yeah. It's like a cute I don't know. It's like a it just fits a different
Speaker 2: I have a guess as to what the hypebeasts will do, John. You know what they're gonna do? What? They're gonna take those
Speaker 1: What will they
Speaker 2: Royal Pops. Yes. They're gonna use them to tie their shoes. They're gonna use them as shoelaces. So they'll have, like, you know