OpenAI's Alexander Embiricos on Codex: API revenue doubled in a week, GPT-5.5 computer use, and the 'Lord Bottleneck' automation story
Key Points
- Codex API revenue doubled within a week of launch, outpacing any prior OpenAI release, with 85% of OpenAI employees now using the product internally.
- OpenAI's growth team automated a full experimental workflow called Lord Bottleneck that evaluates past experiments, proposes new ones, runs code, and delivers results daily without manual intervention.
- GPT-5.5 powers Codex's computer-use capability by converting screenshots to text via accessibility frameworks, letting knowledge workers automate multi-step tasks like meeting coordination and data analysis across Slack and email.
Summary
OpenAI Codex: From Coding Agent to General-Purpose Work OS
Alexander Embiricos, product lead for Codex at OpenAI, describes the product's trajectory as a deliberate expansion beyond its coding roots. Codex was built for engineers, but the current push is to make it useful for anyone doing knowledge work — salespeople, marketers, finance teams, data scientists.
The early numbers are striking. API revenue for the latest Codex model doubled within a week of launch, growing twice as fast as any prior OpenAI release. Internally, 85% of OpenAI employees now use Codex. Embiricos says the same pattern is emerging externally.
Computer use
The release that attracted the most attention alongside the revenue growth was computer-use capability, powered by GPT-5.5. The key architectural decision was not to feed the model raw screenshots but to give it text representations of what's on screen, drawn from accessibility frameworks. That makes the model significantly more efficient. The team also put deliberate effort into how the agent's mouse cursor animates between click positions — a small UX choice that marginally slows execution but makes it far easier for users to follow and trust what the agent is doing. Embiricos draws the analogy to ChatGPT's haptic token-streaming on iOS: the animation isn't the fastest path, but it makes the system legible.
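The accessibility-to-text idea can be pictured as flattening a UI element tree into lines of text for the model. This is a minimal illustrative sketch, not OpenAI's implementation: the node schema and `flatten_tree` helper are assumptions invented for the example.

```python
# Hypothetical sketch: rendering an accessibility tree as text a model
# can read, instead of sending raw screenshots. The dict schema and
# flatten_tree() are illustrative assumptions, not OpenAI's actual code.

def flatten_tree(node, depth=0, lines=None):
    """Depth-first walk that renders each accessible element as one line."""
    if lines is None:
        lines = []
    label = node.get("label", "")
    lines.append(f"{'  ' * depth}[{node['role']}] {label}".rstrip())
    for child in node.get("children", []):
        flatten_tree(child, depth + 1, lines)
    return lines

# Toy example: a mail window with a button and a text field.
screen = {
    "role": "window", "label": "Mail",
    "children": [
        {"role": "button", "label": "Send"},
        {"role": "textfield", "label": "To: alice@example.com"},
    ],
}

print("\n".join(flatten_tree(screen)))
```

A text rendering like this is far smaller than a screenshot, which is one plausible reason the approach makes the model "significantly more efficient."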
“API revenue for that model is growing 2x faster than any prior release. Like Codex revenue actually doubled in the last week... 85% of the company uses Codex and we're seeing this happen outside as well... people are making 50% more images with ImageGen 2 just a few weeks after launch than they were before.”
Easy, hard, automated
Embiricos frames non-technical adoption as a three-stage progression: easy tasks first, then harder ones, then automation. His starter recommendation for knowledge workers is to connect Codex to wherever the company communicates — Slack, Teams, email — and ask it to triage urgent replies, summarize long threads, or surface unfamiliar internal references. Simple on paper, but he says users get hooked fast and build fluency from there.
Harder tasks follow. His own example: instead of adding colleagues to a meeting, he posts in a Slack channel asking who wants to join, then instructs Codex to monitor the thread, respond to interested parties with the calendar link, and add them to the invite automatically. The agent handles the loop without further input.
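The meeting-coordination loop he describes reduces to a simple pattern: scan replies, message interested people with the link, add them to the invite. The sketch below is purely illustrative; every helper and data shape is a made-up stand-in, not a real Slack or calendar API.

```python
# Hypothetical sketch of the meeting-coordination loop: watch thread
# replies, send the calendar link to interested people, add them to the
# invite. All names here are illustrative stubs, not real APIs.

def coordinate(replies, invitees, calendar_link):
    """Process thread replies; return messages sent and updated invitees."""
    sent = []
    for reply in replies:
        if reply["interested"] and reply["user"] not in invitees:
            sent.append((reply["user"], f"Here's the invite: {calendar_link}"))
            invitees.append(reply["user"])
    return sent, invitees

sent, invitees = coordinate(
    replies=[{"user": "dana", "interested": True},
             {"user": "raj", "interested": False}],
    invitees=["alex"],
    calendar_link="https://cal.example/abc",
)
print(invitees)
```

The point of the anecdote is that the agent runs this loop unattended; the user only writes the original Slack post.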
Lord Bottleneck
The clearest automation story in the segment comes from OpenAI's own growth team. A team member started using Codex to accelerate discrete tasks — analyzing data, writing experiment code, interpreting results, producing decks — each one separately. Over time, those individual steps got linked into a single workflow the team named Lord Bottleneck. Every morning it evaluates past experiments, reviews new data, proposes experiments to run, surfaces them to the team for a go/no-go, then writes the config or code, runs the chosen experiment, and loops back the next day with results. Embiricos says the specific numbers escaped him in the moment, but describes the output as "significant company value" generated automatically through Codex.
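The daily cycle described above has a clear shape: evaluate, propose, get human sign-off, run, repeat. As a sketch under loud assumptions: every function below is a hypothetical stub standing in for a step the interview describes, and the real workflow drives Codex rather than local code.

```python
# Illustrative sketch of a "Lord Bottleneck"-style daily loop. All
# function names are hypothetical stand-ins for steps described in the
# interview, not OpenAI's actual pipeline.

def evaluate_past(results):
    # Summarize what yesterday's experiments showed.
    return {"learnings": len(results)}

def propose_experiments(learnings):
    # Draft candidate experiments from accumulated learnings.
    return [{"name": f"exp-{i}", "config": {}} for i in range(3)]

def human_go_no_go(proposals):
    # Surface proposals to the team; here we approve only the first.
    return proposals[:1]

def run(experiment):
    # Write the config/code and execute the chosen experiment.
    return {"name": experiment["name"], "metric": 0.0}

def daily_cycle(yesterday_results):
    """One morning iteration: evaluate, propose, approve, run."""
    learnings = evaluate_past(yesterday_results)
    approved = human_go_no_go(propose_experiments(learnings))
    return [run(e) for e in approved]

results = daily_cycle(yesterday_results=[])
print([r["name"] for r in results])
```

Note the human go/no-go gate in the middle: the workflow automates the surrounding steps, not the decision itself.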
Image generation and game creation
Image generation is quietly one of the bigger adoption drivers. Embiricos says users are generating 50% more images with ImageGen 2 just weeks after launch compared to before. Inside Codex, image generation enables end-to-end game creation: the agent builds sprites using ImageGen, assembles the game, then playtests it in an in-app browser. A sample prompt from the GPT-5.4 blog — something close to "build an arcade simulator" — now runs as a full loop inside Codex. Non-developers are using the same capability to generate slide assets and full deck templates.
Goal and the autonomy ceiling
A feature called Goal, currently in the CLI with a UI rollout coming, addresses a friction point that users had surfaced for months. Without it, keeping a long-running Codex task going required manually queuing repeated "keep going" messages. Goal lets a user describe a completion condition and then leave Codex to work uninterrupted — for hours or overnight. Embiricos cites monitoring a model training run as a concrete internal use case where the team needs the agent to stay attentive through the night rather than return a result in five minutes.
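Conceptually, Goal replaces manually queued "keep going" messages with a completion condition the agent checks itself. A minimal sketch of that control loop, assuming invented helper names (`run_until`, `training_converged`) that are not Codex's API:

```python
# Hedged sketch of a "Goal"-style loop: keep working until a completion
# condition the user described holds. All names are illustrative
# assumptions, not Codex's actual interface.
import time

def run_until(goal_met, next_step, poll_seconds=0.0, max_steps=100):
    """Keep taking steps until the goal condition returns True."""
    for _ in range(max_steps):
        if goal_met():
            return True
        next_step()
        time.sleep(poll_seconds)
    return False

# Toy stand-in for monitoring a training run until loss drops below target.
state = {"loss": 3.0}

def training_converged():
    return state["loss"] < 1.0

def monitor_step():
    state["loss"] -= 0.5  # pretend the run improves on each check

done = run_until(training_converged, monitor_step)
print(done, round(state["loss"], 1))
```

The overnight-training example fits this shape: the value is in the agent staying attentive until the condition is met, not in returning quickly.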
The product philosophy behind all of this is to minimize rigid UI and rules around the agent so that capability improvements in the underlying model flow through automatically. The more behavior can be delegated to the model, the faster the product compounds.
TBPN Digest delivers summaries of the latest fundraises, interviews and tech news from TBPN, every weekday.