Interview

Applied Compute's Yash Patil is building continual learning AI agents that absorb institutional knowledge

Feb 26, 2026 with Yash Patil

Key Points

  • Applied Compute raised $80 million to build AI agents that absorb institutional knowledge from enterprise systems and continuously improve through real-world feedback rather than offline evaluation.
  • Ninety-five percent of enterprise AI pilots fail because the systems don't adapt like employees — a gap Patil says current deployments leave open by treating AI as workflow automation bolted onto legacy infrastructure.
  • Applied Compute ranked first on corporate law in Mistral's new Apex agentic benchmark and is working with Cognition on custom model training for coding, targeting verticals where decades of institutional knowledge compound into competitive advantage.

Summary

Yash Patil, CEO of Applied Compute, raised $80 million last year to build what he calls "specific intelligence" for enterprise. General-purpose models improve weekly but deliver no competitive edge. The real opportunity sits in decades of accumulated knowledge locked in enterprise systems, documents, and people's heads. Patil wants to capture that knowledge and train AI agents directly on top of it.

Most enterprise AI pilots fail because the systems don't adapt to feedback or improve with use the way an employee would. An MIT paper found 95% of enterprise AI pilots fail for exactly this reason. What companies have deployed so far is mostly "RPA plus" — workflow automation with models bolted on. Genuine cognitive work requires going further.

Applied Compute sells the full stack: integrations into enterprise systems of record, a proprietary reinforcement learning post-training pipeline, agent infrastructure including tooling and permissioning, application layers for human-agent interaction, and observability across all of it. The differentiating layer is closing the feedback loop. Rather than relying on offline evaluations, the platform captures signal from real tasks as agents complete them and uses that to continually retrain the models. The approach looks at the artifacts knowledge workers produce and trains models to replicate that quality.
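The closed feedback loop described above can be sketched in miniature: capture a reward signal from each real task as an agent completes it, buffer those episodes, and trigger a retraining step once enough signal accumulates. This is an illustrative sketch only — the class and method names (`TaskRecord`, `FeedbackLoop`, `record`) are hypothetical stand-ins, not Applied Compute's actual pipeline or API, and the retraining step is a placeholder for their proprietary RL post-training.

```python
from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    # One completed real-world task; reward might come from grading the
    # agent's output against artifacts knowledge workers actually produce.
    task_id: str
    agent_output: str
    reward: float

@dataclass
class FeedbackLoop:
    # Online loop: signal is captured from live tasks, not offline evals.
    retrain_threshold: int = 3
    buffer: list = field(default_factory=list)
    retrain_count: int = 0

    def record(self, rec: TaskRecord) -> None:
        # Capture signal from a completed task as it happens.
        self.buffer.append(rec)
        if len(self.buffer) >= self.retrain_threshold:
            self._retrain()

    def _retrain(self) -> None:
        # Placeholder for an RL post-training pass over buffered episodes.
        self.retrain_count += 1
        self.buffer.clear()

loop = FeedbackLoop()
for i in range(7):
    loop.record(TaskRecord(task_id=f"t{i}", agent_output="draft", reward=1.0))
print(loop.retrain_count)  # → 2 (retrains triggered after tasks 3 and 6)
```

The design point the sketch illustrates is that the model improves as a side effect of doing real work, the way an employee would, rather than through periodic offline evaluation cycles.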

Two days before this conversation, Applied Compute published collaborative research with Mistral benchmarking against a new agentic evaluation called Apex, which covers professional services domains including law, investment banking, consulting, and management. Applied Compute ranked number one on the corporate law subdomain and around fourth or fifth overall.

Target verticals are financial services, insurance, healthcare, and biotech. These sectors have accumulated institutional knowledge over decades and treat data quality as a genuine differentiator. Applied Compute is also working with Cognition on custom model training in the coding domain.

Patil describes his RL-based approach as his current best tool, not his final answer. The technique is still nascent. Andrej Karpathy and others have flagged how simplistic current RL methods remain.