AWS's Anthony Liguori explains how Bedrock Managed Agents gives AI agents their own identity, compute, and governance inside enterprise clouds
Apr 28, 2026 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Anthony Liguori
Speaker 11: What's going on? I'm doing great. Thanks for having me.
Speaker 2: Thank you so much for taking the time to join. There's been a lot of discussion back and forth, and I think it would be helpful to start with just a 101 of what is a stateful runtime. How are you thinking about agentic systems? I think everyone knows LLMs are next-token predictors. Maybe they're stochastic parrots. But how are you thinking about the capabilities that come from this?
Speaker 1: Yeah. And then how does that tie into the announcements today around some of the new models being available?
Speaker 2: That's great.
Speaker 11: Yeah. The best way I would describe it is that we've seen an evolution of how folks interact with these AI models over the last few years. It started out exactly as you mentioned, with just token prediction. The early completions API from OpenAI was exactly this: you give it a string, it gives you some more strings.
Speaker 2: Yep.
Speaker 11: These APIs have advanced over the years to where that string starts to get parsed and you get tool calls
Speaker 2: Yeah.
Speaker 11: Or you get reasoning, or you get other kinds of capabilities. And what we're seeing now with agentic development is that folks are taking those tool calls, executing tools, and performing things like compaction or extracting memory. This is a super powerful mechanism. This is really the thing that has caused my career, at least, to dramatically change in terms of productivity and what can be accomplished now. The next phase for AI, I truly believe, is taking a lot of what we've learned in the coding space, where most of what's happening is happening locally on a person's laptop or maybe their development machine, and making that deeply integrated within the cloud where real enterprise applications live, getting all those same benefits.

And so what we announced with Amazon Bedrock Managed Agents powered by OpenAI is a new API that really has three parts. The first part allows you to create what we call a runtime, and you can think of the runtime as the definition of the agent. This contains skills, which tell the model how to do new tasks; tools, like MCP servers that you want to configure or other built-in tools; and a memory policy, so you can tell it how to maintain short-term memory, long-term memory, or whatever new memory mechanisms might come over time.

Then the next part, and this part is really the special bit, is the environment. Today, most people are using tools like Codex, they're running them on their laptop, and the agent effectively lives on your laptop. That's good and bad. It means the agent can do all the things that that person can do, but it also means the agent doesn't have its own identity. You can't create agent-specific policies, and a lot of enterprises are trying to figure out how to strike the balance of enabling these agents to work while also making sure that you can set the right guardrails on them.
Environments in Bedrock Managed Agents really solve this. They let you give that agent a dedicated compute environment and create specific policies around governance. And most critically, they give that agent a unique identity within AWS, so as a security team or an administrator, you can create policies around what that agent can do. We think these things are absolutely critical to enable agents to be used in real-world enterprise applications. Then finally, there's the inference API, the thing that you actually talk to the agent through. That API is very similar to existing APIs like the responses API from OpenAI, so a lot of existing applications can just work once you've created this agent.
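To make the three parts concrete, here is a rough sketch of what the runtime and environment definitions described above might look like. The service is in preview and no public API is quoted in the conversation, so every field name, value, and ARN below is an illustrative assumption, not the real Bedrock Managed Agents schema.

```python
# Hypothetical request shapes for the three-part API described above.
# All field names and values are illustrative assumptions, not the real API.

def build_runtime_definition(name, skills, mcp_servers, memory_policy):
    """Part 1: the runtime -- the definition of the agent
    (skills, tools such as MCP servers, and a memory policy)."""
    return {
        "runtimeName": name,
        "skills": skills,                      # teach the model new tasks
        "tools": {"mcpServers": mcp_servers},  # configured MCP or built-in tools
        "memoryPolicy": memory_policy,         # short-term / long-term memory rules
    }

def build_environment(runtime_name, compute, agent_role_arn):
    """Part 2: the environment -- dedicated compute plus a unique
    agent identity that governance policies can attach to."""
    return {
        "runtimeName": runtime_name,
        "compute": compute,
        "agentIdentity": {"roleArn": agent_role_arn},
    }

runtime = build_runtime_definition(
    name="support-agent",
    skills=["triage-ticket"],
    mcp_servers=[{"url": "https://mcp.example.invalid"}],
    memory_policy={"shortTerm": "session", "longTerm": "vector-store"},
)
environment = build_environment(
    runtime_name=runtime["runtimeName"],
    compute={"type": "serverless"},
    agent_role_arn="arn:aws:iam::123456789012:role/support-agent",
)
```

Part 3, the inference API, would then be an OpenAI-compatible endpoint exposed by the environment, which is why existing responses-API clients can work unchanged.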
Speaker 2: Can you reflect a little bit, as a distinguished engineer, on
Speaker 1: Wait. Wait.
Speaker 2: Yeah. Sorry.
Speaker 1: Before we move on. If I'm an AWS customer today, what should I do to experience what you just described? Because it was a lot of corporate lingo, which I love. I'm an enthusiast. But given this new set of capabilities, what would be a recommendation for what I should try doing to start? What kind of value might be unlocked in the organization, in maybe a more straightforward way?
Speaker 11: Yeah. Today, Bedrock Managed Agents is in limited preview, and that will expand over the coming weeks, so you'll probably have to wait a couple of weeks. But in order to actually use this, it's really as simple as making two API calls. You can go into your AWS account and make an API call to create a runtime. Then once you have that runtime, you decide what type of compute you want to associate with the agent, and you make another API call to create the environment. Once you've done those two API calls, that's it. You now have an endpoint that you can point an existing tool at. It can be Codex, it can be a web-chat-based system, or you can just use the OpenAI SDK and integrate it deeply within your application. But it's really that simple: just a couple of additional API calls, and then you're off to the races.
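The two-call flow just described might look something like the sketch below. Since the real client and operation names aren't public, the flow is written against an injected client and exercised with a stub; `create_runtime`, `create_environment`, and the endpoint shape are all assumptions for illustration.

```python
# Sketch of the two-call setup flow: create a runtime, then an environment,
# and get back an endpoint an OpenAI-compatible SDK can be pointed at.
# Operation names and response shapes are hypothetical.

def provision_agent(client, runtime_definition, compute):
    """Make the two API calls the guest describes and return the
    agent's inference endpoint."""
    runtime = client.create_runtime(**runtime_definition)
    env = client.create_environment(
        runtimeName=runtime["runtimeName"],
        compute=compute,
    )
    return env["endpoint"]

class FakeAgentClient:
    """Stand-in for the (hypothetical) service client, for illustration."""

    def create_runtime(self, **kwargs):
        return {"runtimeName": kwargs["runtimeName"]}

    def create_environment(self, **kwargs):
        return {"endpoint": f"https://example.invalid/agents/{kwargs['runtimeName']}"}

endpoint = provision_agent(
    FakeAgentClient(),
    {"runtimeName": "demo-agent", "skills": [], "memoryPolicy": {}},
    compute={"type": "serverless"},
)
# An existing tool, or the OpenAI SDK configured with this endpoint as its
# base URL, would then talk to the agent through its responses-style API.
```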
Speaker 2: How are you thinking about the different roles of engineers right now? There's been this trend of managers and CTOs becoming individual contributors at labs, and then simultaneously, there's been individual contributors who are effectively becoming the manager of many agents. So we're sort of going both ways. But where is the highest leverage point for someone who is on the distinguished engineer path, the ultimate individual contributor, when the world is changing?
Speaker 11: Yeah. I can tell you that, like, in my role, I am coding almost every day.
Speaker 2: Okay.
Speaker 11: I took the day off today to come and do this event, but I'm spending almost all of my time coding. I'm having probably the best time of my career. What my experience has been with all of these coding tools is that once you make coding fast, once you make the actual act of generating the code quick, what really matters is your understanding of algorithms and data structures, architecture patterns, and things like that. I think what you're finding is that a lot of very senior ICs, folks that have a deep understanding of software architecture, are now able to do really amazing things because they understand how to prompt these models in a way that generates new things.
Speaker 2: Is it less frustrating? Are you having the most fun you've ever had?
Speaker 11: I've always felt in my career that I was limited by typing.
Speaker 2: Yeah.
Speaker 11: Like, I don't mind typing. I love writing code.
Speaker 2: Yeah.
Speaker 11: But, you know, you can think about a problem and you can think about solutions. Yeah. And then you're like, okay, this is going to take me weeks to actually implement.
Speaker 2: Yep.
Speaker 11: And that's just something I had always been resigned to: it's going to take me weeks to solve that problem. Now, I think about the solution to that problem, I do some prompting in Codex, and I have the solution by the end of the day. And that is so incredibly rewarding as an engineer.
Speaker 2: Yeah. That's amazing. Well, thank you so much for taking the time to come chat with us.
Speaker 1: Absolutely. Great to meet you.
Speaker 2: We'll talk to you soon. Thanks for having me. Have a good one. Cheers. Goodbye. Up next, we have Colin Zima from Omni. He is the cofounder and CEO, building a semantic layer powering AI-driven analytics. Do I blink? Do you not blink?
Speaker 1: Apparently not, but I was blinking at them once I saw that.
Speaker 2: Oh, yeah.
Speaker 1: I was picking up on the Diet Coke with two hands thing.
Speaker 2: They demand that you do it.
Speaker 1: Oh, I wasn't gonna do that.
Speaker 2: You stared them down and said no.
Speaker 1: I saw, but...
Speaker 2: I oblige every once in a while with the two-handed Diet Coke. If I see somebody who's like, oh, they're not reading the chat: drink the Diet Coke with two hands if you're reading. I'm gonna sip a Diet Coke with two hands. Anyway, our next guest is in