Interview

Pangram Labs' AI slop detector goes viral — 25x user growth and Quora as a major customer

Jan 23, 2026 with Max Spero

Key Points

  • Pangram Labs has grown 25x since July, driven by a Twitter bot launched December 29 that publicly flags AI-generated posts in real time.
  • Quora uses Pangram's API to enforce its no-AI-content policy, routing most submissions through the detector; the company claims a 1-in-10,000 false positive rate versus 1–5% for competitors.
  • Russian state-linked networks are running LLM-generated news sites with 50,000 articles masquerading as local U.S. outlets, embedding pro-Russia narratives alongside neutral content.

Summary

Pangram Labs, a two-year-old seed-stage AI content detection startup, has seen a 25x increase in users and query volume since July, driven largely by a Twitter bot launched on December 29, 2025 that publicly flags AI-generated posts in real time. The company, currently nine people, is positioning itself at the intersection of content authenticity and platform integrity.

The business runs a freemium model with a consumer-facing Chrome extension and a $20/month premium tier, alongside an API product sold to internet platforms. Quora is cited as a major customer, routing most of its submitted content through Pangram to enforce its no-AI-content policy. The founder declines to confirm dating app partnerships but signals active conversations in that space.
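The interview does not detail how Quora's integration works internally. As a minimal sketch, the gating logic might look like the function below; the score threshold and decision labels are illustrative assumptions, while the 75-word reliable-detection minimum comes from the interview (the detector API call itself is omitted).

```python
def moderation_decision(score: float, word_count: int,
                        threshold: float = 0.9, min_words: int = 75) -> str:
    """Gate a submission on an AI-detector score.

    score: detector's estimated likelihood the text is AI-generated (0-1).
    Content shorter than the reliable-detection minimum is passed through
    rather than risk a false positive on short text.
    """
    if word_count < min_words:
        return "allow"      # too short for reliable detection
    if score >= threshold:
        return "reject"     # flagged under the no-AI-content policy
    return "allow"

# Example: a 200-word post scored 0.95 would be rejected;
# a 10-word post is allowed regardless of score.
print(moderation_decision(0.95, 200))
print(moderation_decision(0.99, 10))
```

The short-content pass-through mirrors the trade-off discussed later in the piece: below a minimum length, any detector's error rate climbs, so platforms tend to gate only longer submissions.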

Pangram's core technical differentiator is a claimed 1-in-10,000 false positive rate, versus the 1–5% the founder attributes to competing detectors. Reliable detection normally requires a 75-word minimum, which is relaxed for the Twitter bot, where short-form content is the norm. The adversarial dynamic is explicit: users are already attempting to game scores by iterating prompts until Pangram returns 0%, and the company says it is actively hardening its models against GRPO and other reinforcement-learning-driven attacks.
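To make the gap between those error rates concrete, a quick back-of-envelope calculation (the one-million-post daily volume is an assumption for illustration, not a figure from the interview):

```python
def expected_false_flags(num_human_posts: int, fp_rate: float) -> float:
    """Expected number of human-written posts wrongly flagged as AI."""
    return num_human_posts * fp_rate

posts = 1_000_000  # hypothetical daily human-written submissions

print(expected_false_flags(posts, 1 / 10_000))  # claimed 1-in-10,000 rate
print(expected_false_flags(posts, 0.01))        # 1% competitor rate
print(expected_false_flags(posts, 0.05))        # 5% competitor rate
```

At that volume, the claimed rate wrongly flags about 100 posts a day; a 1–5% detector wrongly flags 10,000–50,000, which is why the false positive rate, not raw accuracy, is the number platforms care about.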

The threat framing goes beyond spam. The founder points to Russian state-linked networks running LLM-generated news sites, some 50,000 articles masquerading as local American outlets such as Oregon regional news, with pro-Russia, anti-Ukraine narratives buried among otherwise neutral content. This is presented not as a hypothetical but as an active, documented pattern Pangram is already encountering.

LinkedIn is called out as structurally compromised on this front. With AI writing tools embedded directly into post composition and InMail, the platform's trust and safety team has limited ability to use AI generation as a policy signal. The founder describes LinkedIn content quality as having deteriorated materially as a result.

Next product priority is Reddit and LinkedIn bots, though LinkedIn API approval has been pending for two weeks and Reddit access is still in progress. Twitter API access costs $200/month at current usage; before usage-based pricing was introduced, the next tier up was a legacy $5,000/month plan. The longer-term product vision is an AdBlock-style browser extension that automatically suppresses AI-generated content across the web.