News

OpenAI launches Frontier enterprise platform and hires hundreds of forward-deployed engineers

Feb 5, 2026

Key Points

  • OpenAI launches Frontier, an enterprise platform that orchestrates AI agents from multiple vendors while unifying disparate corporate data sources to execute tasks without manual tool switching.
  • OpenAI is hiring hundreds of forward-deployed engineers to embed with large corporations and customize models using client data, directly competing with Anthropic's enterprise consulting expansion.
  • Enterprise utility increasingly depends on orchestration and data integration rather than raw model capability alone, as multi-agent systems can chain tasks to exceed individual model time horizons.

Summary

OpenAI is launching Frontier, an enterprise platform that helps large corporations build and deploy AI agents alongside employees. The platform connects disparate data sources—files, code, APIs, systems—so agents can execute tasks without manual data transfer between tools. Frontier orchestrates OpenAI's own agents alongside third-party agents, positioning OpenAI as the central coordinator in enterprise AI workflows.

OpenAI is hiring hundreds of forward-deployed engineers to support the launch. These technical consultants embed with large corporations to customize OpenAI's models using client data and help deploy agentic workflows at scale. Examples include helping T-Mobile build AI for customer service and assisting Intuit with tax-preparation services.

The hiring push directly competes with Anthropic, which has also expanded its enterprise consulting footprint. Frontier arrives as enterprises face pressure to integrate AI into operations but lack internal expertise to move past pilot projects.

Enterprise data is the underlying asset. Companies hold vast proprietary datasets—customer service logs, code repositories, transaction histories—unsuitable for public release but ideal for fine-tuning or long-context reinforcement learning. Unlike the public code on GitHub that coding agents such as Claude Code could draw on, enterprise data remains locked behind corporate firewalls. Forward-deployed engineers serve as the bridge, helping companies unlock value from data they cannot freely share.

Frontier's positioning as an orchestrator that accepts agents from multiple vendors suggests OpenAI is competing on platform lock-in rather than exclusive access to models. As enterprises adopt Frontier to coordinate workflows, switching costs compound, even if those enterprises layer in competitors' agents.

GPT-5.2 with high reasoning effort demonstrates a 6.6-hour time horizon on expanded software tasks, the highest measurement OpenAI has reported and consistent with a doubling time of roughly 128 days. However, the benchmark may become obsolete if orchestrated multi-agent systems chain tasks together, allowing individual models to run for six hours each while the composite system operates for 20 hours or more. Raw model capability alone no longer fully captures enterprise utility; orchestration and data integration are becoming the differentiators.
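The arithmetic behind these figures is simple enough to sketch. The snippet below is illustrative only, using the numbers reported in the article (a 6.6-hour horizon, a 128-day doubling time) and a naive assumption that chained agents run sequentially, so a composite system's horizon is the sum of its stages; the function names and the four-stage example are hypothetical.

```python
def projected_horizon_hours(current_hours: float, days_ahead: float,
                            doubling_days: float = 128.0) -> float:
    """Extrapolate a time horizon that doubles every `doubling_days`."""
    return current_hours * 2.0 ** (days_ahead / doubling_days)

def composite_horizon_hours(stage_hours: float, num_stages: int) -> float:
    """Horizon of a pipeline of sequentially chained agent stages."""
    return stage_hours * num_stages

# One doubling period after a 6.6-hour measurement: 13.2 hours.
print(projected_horizon_hours(6.6, 128))
# Four chained agents, each with a 6-hour horizon: 24 hours total,
# matching the article's "20 hours or more" composite scenario.
print(composite_horizon_hours(6.0, 4))
```

Under this extrapolation the single-model horizon doubles only every four months or so, while the composite figure scales linearly with the number of stages, which is why orchestration can outpace raw model gains.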