AWS's Anthony Liguori explains how Bedrock Managed Agents gives AI agents their own identity, compute, and governance inside enterprise clouds
Key Points
- AWS launches Amazon Bedrock Managed Agents with OpenAI, giving AI agents dedicated compute environments and identity-based governance to solve the enterprise problem of agents running only on developer laptops.
- The product requires two API calls to set up and exposes endpoints compatible with existing tools like the OpenAI SDK, enabling quick integration into applications.
- Broad availability is expected within weeks after the current limited preview.
Summary
Amazon Bedrock Managed Agents
AWS VP Anthony Liguori frames the evolution of AI systems as a three-stage arc: token completion, then tool-calling and reasoning, and now agentic workflows that execute tools, manage memory, and operate with genuine autonomy. The bottleneck he wants to solve is that most of this activity still runs locally on developers' laptops, which creates problems enterprises can't live with — no dedicated identity, no agent-specific policies, no governance layer.
Amazon Bedrock Managed Agents, built in partnership with OpenAI and currently in limited preview, is AWS's answer.
“Environments in Bedrock Managed Agents really solves this. It allows you to give a dedicated compute environment for that agent. It allows you to create specific policies around governance. And most critically, it gives that agent a unique identity within AWS... a lot of very senior ICs, folks that have a deep understanding of software architecture, are now able to do really amazing things.”
The product has three components:
- Runtime — the agent's definition, including skills, tool configurations (such as MCP servers), and memory policy covering short- and long-term retention.
- Environment — a dedicated compute environment that gives each agent its own identity within AWS, letting security teams and administrators set specific governance policies around what the agent can and cannot do. Liguori calls this the "special bit."
- Inference API — the interface through which applications talk to the agent, modeled closely on OpenAI's Responses API so existing code can integrate with minimal changes.
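To make the runtime component concrete, a runtime definition along these lines might look like the sketch below. Every field name and value is an illustrative assumption; the talk does not spell out the actual Bedrock Managed Agents schema.

```python
# Hypothetical runtime definition. All field names here are illustrative
# assumptions, not the real Bedrock Managed Agents schema.
runtime_definition = {
    "name": "support-agent",
    "skills": ["lookup-order", "draft-reply"],
    "toolConfigurations": {
        # MCP servers the agent can call as tools
        "mcpServers": [
            {"name": "crm", "endpoint": "https://crm.example.com/mcp"},
        ],
    },
    "memoryPolicy": {
        "shortTerm": {"scope": "session"},   # per-conversation working memory
        "longTerm": {"retentionDays": 30},   # retained across sessions
    },
}
```

The key idea is that skills, tool configurations, and the short- and long-term memory policy all live in one declarative definition, separate from the compute environment that later gives the agent its identity.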
Setup requires two API calls: one to create a runtime, one to attach a compute environment. After that, the agent exposes an endpoint compatible with tools like Codex, web chat systems, or the OpenAI SDK. Broad availability is expected within weeks.
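The two-call flow can be sketched end to end. Everything below — the client class, operation names, and response fields — is a hypothetical stand-in written to illustrate the shape of the flow, not the real AWS SDK surface:

```python
# Sketch of the two-call setup: create a runtime, then attach a compute
# environment. The client and all names are hypothetical stand-ins.
import uuid


class FakeBedrockAgentsClient:
    """Stand-in mimicking the two setup calls described in the talk."""

    def create_runtime(self, name, skills, memory_policy):
        # Call 1: register the agent's definition (skills, tools, memory).
        return {"runtimeId": f"rt-{uuid.uuid4().hex[:8]}", "name": name}

    def attach_environment(self, runtime_id):
        # Call 2: attach a dedicated compute environment, which yields the
        # agent's unique AWS identity and an OpenAI-compatible endpoint.
        return {
            "agentIdentity": f"arn:aws:bedrock:agent/{runtime_id}",
            "endpoint": f"https://agents.example.aws/{runtime_id}/v1",
        }


client = FakeBedrockAgentsClient()
runtime = client.create_runtime(
    name="support-agent",
    skills=["lookup-order"],
    memory_policy={"shortTerm": "session", "longTerm": "30d"},
)
env = client.attach_environment(runtime["runtimeId"])
print(env["endpoint"])
```

In practice, the returned endpoint is what an application would hand to an OpenAI-compatible client (such as the OpenAI SDK's base URL) to start talking to the agent.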
On engineering leverage, Liguori says he codes almost every day and describes this as the best period of his career. His argument is that once code generation becomes fast, what differentiates engineers is their understanding of algorithms, data structures, and architecture — not typing speed. Problems that would have taken weeks to implement can now be resolved in a day through careful prompting. Senior engineers who understand software architecture deeply are the ones getting the most out of these tools, in his view.
TBPN Digest delivers summaries of the latest fundraises, interviews and tech news from TBPN, every weekday.