News

Timeline reactions: Robinhood launches social trading app, Dalton Caldwell's Standard Capital, and AI doomer discourse

Sep 11, 2025

Key Points

  • Robinhood launches For You, a social trading app letting users share real profit-and-loss data on options and crypto positions.
  • Standard Capital, an AI-focused Series A fund by Dalton Caldwell and Paul Buchheit, commits to funding 20 companies yearly in batches of five on predictable quarterly cycles.
  • Tech circles are moving past AI doomism toward narrower safety concerns like deepfakes and psychological dependency, though aggressive restrictions risk creating development stagnation.

Summary

Robinhood launches social trading

Robinhood launched For You, a social trading app that lets users share positions with real profit-and-loss data across options and crypto. The launch fits into CEO Vlad Tenev's broader product expansion strategy.

Dalton Caldwell's Standard Capital takes batch model to Series A

Dalton Caldwell and Paul Buchheit launched Standard Capital, an AI-native Series A venture firm that funds five companies per quarter. The fund operates on a predictable cycle of four batches per year, funding 20 companies annually with checks likely in the $5 million to $10 million range. Standard Capital takes only 10% equity, which appeals to founders with meaningful revenue who want to avoid pressure from larger Series A rounds.
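For a rough sense of why that structure appeals to founders with revenue, the implied valuations can be sketched from the figures above (assuming the 10% stake corresponds to the full check, which is an inference rather than a reported term): a $5 million to $10 million check for 10% implies a post-money valuation of roughly $50 million to $100 million, with less dilution than a round that sells a larger slice of the company.

```python
# Back-of-the-envelope math (illustrative assumptions, not reported terms):
# if a $5M-$10M check buys a 10% stake, the implied post-money valuation
# is check_size / equity_fraction.

def implied_post_money(check_size: float, equity_fraction: float) -> float:
    """Post-money valuation implied by a check at a given ownership stake."""
    return check_size / equity_fraction

for check in (5_000_000, 10_000_000):
    valuation = implied_post_money(check, 0.10)
    print(f"${check / 1e6:.0f}M check at 10% equity -> ~${valuation / 1e6:.0f}M post-money")
```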

The batch model gives LPs clarity on exactly how many companies will be funded over a defined period. The structure also addresses a problem at Y Combinator, where downstream investment from YC's continuity fund created ambiguity about whether it signaled insider advantage or external investor skepticism. Standard Capital operates independently of YC and sidesteps that tension.

AI safety discourse remains unsettled

Eliezer Yudkowsky's new book "If Anyone Builds It, Everyone Dies" has drawn pushback over its policy prescriptions. The book proposes that any entity not subject to international, nuclear-style monitoring be limited to eight or fewer top-tier GPUs, with nations "prepared to enforce these restrictions by bombing." Meta currently operates roughly 350,000 such chips, more than 40,000 times the proposed per-entity cap. The New York Times review compares the book to a Scientology manual, noting its interspersed parables and QR codes.

Tech circles show signs of moving past dismissal of AI doomers toward selective acceptance of safety research when tied to concrete harms. Near-term concerns now include geopolitical manipulation via deepfakes, users developing psychological dependencies on AI, and LLMs enabling weapon development. This framing doesn't require belief in superintelligent paperclip maximizers but asks whether near-term harms merit investigation.

The nuclear power analogy offers a cautionary parallel. Early doom warnings accelerated both development and backlash, eventually producing a bifurcated outcome: nuclear power stagnated for decades but remained viable for later adoption. AI restrictions risk a similar pattern. If policy responses become too aggressive or too consensus-driven, regulatory capture or geopolitical coordination that halts development could make stagnation as plausible an outcome as existential doom. Neither outcome is inevitable, but both remain possible depending on how policy unfolds.