Cinder raises $41M to help platforms fight AI-powered abuse, drawing on the team's ex-Meta threat intelligence expertise
May 12, 2026 with Glen Wise
Key Points
- Cinder raises $41 million led by Radical Ventures to scale its AI-powered abuse detection platform, which helps internet platforms like OpenAI and Spotify identify deepfakes, CSAM, and state-sponsored threats at scale.
- The company uses fine-tuned open-source models and model obliteration techniques to detect policy violations across platforms with different content rules, solving a distributed computing problem around latency and cost.
- Cinder does pre-release red teaming for AI model companies and expects formal regulatory standards around guardrail testing to emerge from the UK, EU, and US as models grow more capable.
Summary
Cinder, a trust and safety platform founded by former Meta threat intelligence engineers, has raised $41 million led by Radical Ventures to expand its AI-powered abuse detection business.
The company sells directly to internet platforms, helping them detect and remove AI-generated content that violates their policies — deepfake pornography, AI-generated child sexual abuse material, hate speech, and state-sponsored threats among them. Customers include OpenAI, Spotify, Depop, and Black Forest Labs.
“We just raised $41 million. Our focus is helping companies stop all of the AI-powered abuse that's happening across the internet today. We've never before seen the scale of threats that we have today. AI has made this exponentially worse. Customers include OpenAI, Spotify, Depop, Black Forest Labs.”
How it works
Platforms define their own policies inside Cinder's system, and the platform uses AI to detect and act on violations at scale. The nuance matters: a gaming company running both an adult first-person shooter and a children's title will define "threat of violence" differently across those two products, and Cinder's model is designed to accommodate that. Glen Wise describes the core infrastructure challenge as producing decisions as fast as possible — handling concurrent spikes from large Gen Z user bases is a real distributed computing problem — while balancing cost, latency, and accuracy across model choices.
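That per-product flexibility can be sketched roughly as follows. This is a minimal illustrative model, not Cinder's actual API: the `Policy` class, product names, and thresholds are all hypothetical, standing in for the idea that the same abstract policy ("threat of violence") is tuned differently per product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    # Minimum model-reported severity (0-1) at which content is actioned.
    action_threshold: float

# Hypothetical: the same policy area configured differently for an adult
# first-person shooter and a children's title run by one gaming company.
POLICIES = {
    "adult_shooter": Policy("threat_of_violence", action_threshold=0.9),
    "kids_title":    Policy("threat_of_violence", action_threshold=0.3),
}

def decide(product: str, severity: float) -> str:
    """Return 'remove' or 'allow' for content scored by a detection model."""
    policy = POLICIES[product]
    return "remove" if severity >= policy.action_threshold else "allow"
```

Under this sketch, the same model score produces different outcomes per product: a severity of 0.5 is removed on the children's title but allowed on the adult shooter.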
On model selection, Wise says fine-tuned open-source models can produce strong results, but foundation models trained to refuse harmful content generation hit limits when asked to evaluate that same content. Cinder uses techniques including model obliteration — removing guardrails from open-source models and self-hosting them — alongside traditional classification depending on the policy area.
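The cost/latency/accuracy tradeoff Wise describes implies some routing between cheap classifiers and heavier self-hosted models. A toy sketch of that decision, with entirely hypothetical policy-area names and model labels (this is not Cinder's implementation):

```python
# Hypothetical routing: clear-cut policy areas go to a cheap, low-latency
# classifier; nuanced ones go to a fine-tuned open-source LLM that is
# self-hosted with its refusal guardrails removed, since hosted foundation
# models balk at evaluating the content they were trained to refuse.
CLEAR_CUT_AREAS = {"spam", "known_hash_match"}      # pattern/hash matching suffices
NUANCED_AREAS = {"hate_speech", "state_sponsored"}  # needs contextual judgment

def pick_model(policy_area: str) -> str:
    if policy_area in CLEAR_CUT_AREAS:
        return "traditional_classifier"
    if policy_area in NUANCED_AREAS:
        return "self_hosted_llm"
    return "self_hosted_llm"  # default to the more capable (costlier) path
```

The design point is that routing is per policy area, so a spike in easy-to-classify traffic never pays LLM latency or cost.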
Why outsource
The case for a third party like Cinder rests on two things. Most platforms can't staff deep expertise across every threat category they face; Meta can, but Meta is Meta. And there's an independent credibility argument: using an external provider means platforms aren't grading their own homework when assessing how well their content policies are working.
Red teaming
The Black Forest Labs relationship goes beyond detection. Cinder does red teaming work on AI models before they ship, stress-testing guardrails so companies can identify vulnerabilities before public release. Wise expects formal standards around pre-release red teaming to emerge from the UK, EU, and US as models grow more capable — a regulatory tailwind that should expand that part of the business.
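Pre-release red teaming of the kind described boils down to running adversarial prompts against a model and recording which ones slip past its guardrails. A self-contained toy harness, with a stand-in "model" and violation check that are purely illustrative (Cinder's actual tooling is not public):

```python
def red_team(model, prompts, is_violation):
    """Return the adversarial prompts whose outputs violated policy,
    i.e. the cases where the model's guardrails failed to hold."""
    failures = []
    for prompt in prompts:
        output = model(prompt)
        if is_violation(output):
            failures.append(prompt)
    return failures

# Toy stand-ins for demonstration: a "model" that refuses politely-phrased
# requests but complies with blunt ones, and a trivial violation check.
toy_model = lambda p: "REFUSED" if "please" in p else "harmful output"
failures = red_team(
    toy_model,
    ["please do something harmful", "do something harmful"],
    lambda out: out != "REFUSED",
)
```

A real harness would swap in an actual model endpoint, a large adversarial prompt corpus, and a policy classifier as the violation check, then report the failure set to the model company before release.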
Infrastructure posture
Cinder currently runs a full single-tenancy architecture, keeping customer data isolated. Wise says customers are already comfortable routing content through the same cloud providers they use for everything else, but sees on-premise deployment becoming more common as open-source models mature — and says Cinder is built to support that if customers pull in that direction.
TBPN Digest delivers summaries of the latest fundraises, interviews and tech news from TBPN, every weekday.