Vanta CPO Jeremy Epling launches agentic trust platform at VantaCon to automate enterprise security and compliance
Nov 18, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Jeremy Epling
next guest. Before we bring them in from the Restream waiting room, let me tell you about Vanta. [laughter] Our guest is from Vanta, and it just happened.
We'll let him tell you about it.
We'll let him tell it. We have Jeremy from Vanta. Welcome to the stream. How are you doing? [laughter]
What's happening?
I swear that wasn't intentional. But it did just line up that the Vanta ad read went right before you came on. I looked over and I was like, "Wait a minute." I'll let you do the ad read. Introduce yourself, introduce what Vanta does and what you do, and then we'll get into the news.
Yeah, happy to jump in. I'm Jeremy Epling, chief product officer at Vanta, and we help businesses earn and prove trust. One of the really cool things we're doing this week is hosting our VantaCon conference here in San Francisco. We have a ton of people showing up, a ton of engagement, to really pull that entire security GRC community together, and we have a couple of really cool announcements. One of them is how we are transforming Vanta to be the agentic trust platform. I think this is a really big turning point for the industry when we think about how GRC teams are transforming and becoming more technical. We're really redefining how these enterprises manage trust at scale, and we're able to help big customers like Snyk, Perplexity, and Synthesia — everyone from YC startups that maybe just exited a batch recently all the way to Fortune 500 companies — really earn and prove trust as a business.

It feels like AI is amazing, but it's not something people trust. [laughter] So how are you grappling with that? I mean, people trust it in their Teslas to drive them on the freeway. That's high stakes. But I'm sure you run into this all the time when you're talking to folks: "Yeah, I love it if I'm just looking for a recipe, but I don't know if I'd trust it deep in my enterprise, for whatever reason." So how do you think about setting up certain guardrails around the AI, which can still hallucinate from time to time? And then how do you articulate those guardrails to the end user and the customer?
Yeah, definitely. And that's a big problem we saw for companies today. Whenever they're adopting a new AI solution, or maybe it's a solution they already had that has just added some AI features, they're wondering: how are they using my data? What are they doing? Are they training on my data? We have a whole third-party risk management product that comes in. It leverages our Vanta AI, and when we think about how to hit the quality bar we care about — like you said, hey, is it going to hallucinate? How do you approach that? — we have a whole set of great GRC SMEs, subject matter experts, who help us tune and refine our AI so that we can give really trustworthy answers, because, as you can imagine, security customers are some of the harshest critics of AI. They really want things to be accurate and great. And so that's something we have really leaned into. One of the ways we've pushed that forward is one of the big announcements we have coming up this week: our AI Agent 2.0. We've redefined our agent to be this built-in GRC engineer that understands all the compliance across your entire organization. So, like you said, it knows when you've added a new AI tool. It knows what data you're putting into that tool and how you should think about risks and mitigating them. It also has context and memory. So when you're asking it questions, it understands what you're talking about. If you're on a policy, it'll pull in that context. It has the memory of understanding what your business is: maybe you sell to consumers, maybe you sell to other businesses. It can pull all that context in across everything in your program as well. Like, hey, we know these are your vendors, these are your risks, these are your different customers, you've received these questionnaires and feedback — it can synthesize all of that into intelligent guidance for you.
So, one of the cool things that I love about it is that it really helps security teams work against attackers, because in this AI world, obviously, you have the bad guys and attackers using AI to come in. We also help everyone defend and understand, because we know the whole program. We can find gaps in your security program. The AI automatically surfaces those gaps and suggests proactive things to go do to address and remediate them. It gives personalized guidance and really helps automate a lot of that process, so you can respond to attackers and threats a lot more quickly.

How are you thinking about the UI around agents? Because there's been this explosion of companies creating agents, and they mean something totally different depending on the company. Sometimes it's a chat interface. Other times it looks more like SaaS, and that's totally fine. But how are you thinking about the actual evolving UI paradigm?
Yeah, I think it's going to be both. There are a lot of times I don't want to have just a chat conversation with my AI; I want it to bring the answers to me automatically. So we look at it as a blend of both. While there might be agents working in the background, you don't always have to interact through a chat interface. For us, if you show up on our policies experience, we'll say, "Hey, we found these three inconsistencies across the 40 policies you have. Do you want us to go fix those for you?" And you didn't have to ask, "Is there a problem here?" and guess your way through the list of problems. Instead, we have our agent already looking for those. Or maybe your SLA in one document says it's 24 hours to notify customers of a critical vulnerability, and another says 72. We'll automatically give you the change, show you the diff — the redline for that — and let you click a button and execute it automatically. When I think about when chat's great, it's really when you have follow-up questions, where maybe a one-shot answer isn't going to give you what you need: you want to dig in more, you want to learn more, you're trying to explore data. This is a big case for us in reporting, where people want to learn about their controls and how well they've been performing over time. They can have that interactive conversation with the agent, ask it to pull those statistics, leverage our MCP server through Claude or ChatGPT, and have it automatically generate graphs, charts, and reports they can use for their board or anyone else to show the progress of their program.
How are bad actors using AI today to, you know, abuse companies in different ways?

Yeah, I think it was yesterday, or maybe the day before, that Anthropic posted a really good article about an attack they had seen their software used for. I think AI is just giving attackers a whole new set of tools to write more sophisticated attacks and find vulnerabilities even more quickly, because they have these agents always running, always looking. And I think that's where Vanta comes in and provides that next level of defense, because if you think of an attacker coming in from the outside, they can only see what's on the outside. With Vanta, we already know your entire program. We know all the different pieces of it. And so we can really help you build stronger defenses and be proactive — like I mentioned, bringing those inconsistencies to the forefront and giving you automatic remediation on specific issues we might find. We still think it's important to have humans in the loop for a lot of those big decisions, but you can then work with the agent to have it take actions on your behalf automatically.
On the other areas of the risk surface, I imagine you're trying to build products. Are you also starting to act as a funnel and do partnerships with other security firms? Because the surface area is probably pretty broad. Do you have a vision to be a one-stop shop, or do you want to be part of an ecosystem and suite of products that enterprises implement?
Yeah, I think for us, we definitely want to solve the broader trust problem, but we know there are lots of different pieces where we aren't going to be the full solution, right? So if I think of a GRC team or customer trust — hey, you get security questionnaires and questions coming in from customers, how can we go do all that? There are certain areas, like vulnerability scanning, where we're not going to go deep; instead, we're going to partner with all the great scanners to go do that.

Got it.

I think the notion, though, like you said, of bringing that visibility across the entire enterprise is a really big thing for us. We have a feature called adaptive scoping: when you think of a whole security program, there are little pieces of it. You may say that, hey, to get compliant with PCI for credit cards, I need to have these assets in scope or these things to go do, and that's different from another framework I might be pursuing. So we allow companies to see their progress on compliance in those different ways. We have a new organization center so they can break things down by business unit or product line. These are brand-new ways, which customers have never had before, to understand their program at all levels of depth. So when you think about that really large enterprise customer, they're able to break down their program and see that. And I think that's where Vanta really pulls it all together. We call it the risk graph — it's one of our big announcements — where we pull together internal risk and external risk. So you think about the risk you have from your different vendors as well as things you're identifying internally within your business, and we provide a full visual for that. You can get this connection between: hey, there was a breach. Okay, the breach happened. Which vendor was it? Who has access to that vendor?
Vanta can lean in and cut off that access, or change the controls there, and show what data was going into that vendor. It really helps you understand and prioritize all the things that are happening in your security program, because I think security leaders are just drowning in alerts and they want to know what's most important. So having the AI intelligence to dissect your program in these different ways, and then see a visualized risk graph, is really important to help them quickly act on a threat landscape that's just always changing.
Yeah, that [clears throat] makes a ton of sense.
You guys have got to do Spotify Wrapped for internal risk. [laughter]
That would be good. Something shareable. Something shareable internally at companies, of course. It'd be like:

"You know, yo Tyler, you got to... you're our biggest risk vector over here." [laughter]

Tyler?

Tyler's our intern over here. He's very secure.

He's very secure. He's probably the best.
Anyways, super exciting few launches, and have fun at the event. Thanks for joining.
Yeah, have a great rest of your day.
Cheers.
Bye. Let me also tell you about Figma. Think bigger, build faster. Figma helps design and development teams build great products together. There's this article in the Financial Times. It's very spicy. It says Oracle is already underwater on its astonishing $300 billion OpenAI deal; the AI circular economy may have a reverse Midas at the center.
Okay. So they're saying this is underwater because the market cap has dipped below where it was. So it's, like, not very honest.
Yeah,
it's not, it's not, uh...

Yeah, the Financial Times says Oracle's astonishing $300 billion OpenAI deal is now valued at minus $74 billion. I don't like that at all. This is really, really bad framing, in my opinion. It's not dishonest, exactly.
I thought so too. I thought so too. And I love the Financial Times. We have the Financial Times printed out here. Normally, very great reporting. But this one feels odd. It just feels like an odd frame.
It's saying Oracle is already underwater on a partnership. This is a hot take that you've been pumping for the last week or so. But the way you've said it is that the stock has round-tripped even though they had that amazing deal.
What you're claiming is the market is no longer giving them credit.
Yes. Yes. That's right. But they say that they're underwater. So when I saw this headline, I read into it earlier, and I was expecting to see something.
Okay. Well, we might have gotten rage-baited, because right here the Financial Times addresses our concern and says, "Okay, yes, it's a gross simplification to just look at market cap, but equivalents to Oracle shares are little changed over the same period" — the Nasdaq Composite, Microsoft, the Dow Jones software index. So those three... there's the $60 billion...
Calling those "equivalents" is, like... again, look at...
You could also comp it to CoreWeave, and you could say that on a relative-to-CoreWeave basis, Oracle is outperforming by a bunch. [laughter]
amazing
It's amazing. I don't know, there are a bunch of different ways to comp it, and if you pick your weird comp, it does seem a little odd. It says the $60 billion loss figure is not entirely wrong: an astonishing quarter really has cost it nearly as much as one General Motors or two Kraft Heinzes. Investor unease stems from Big Red betting its debt-financed data farm on OpenAI. We've nothing much to add to that other than the charts below showing how much Oracle has in effect become OpenAI's US public market proxy, which is fascinating because Microsoft should be OpenAI's public market proxy, in my opinion. But there are some great charts in here. There's some interesting stuff. And I believe this is from Alphaville, which is their blog, and it is supposed to be, you know, something of a take factory. Anyway, we have our next guest in the Restream waiting room. Let me tell you about Julius.ai first: the AI data analyst that works for you.