Ramp launches first AI agent for corporate expense management, automating approvals using calendar and email context

Jul 10, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Karim Atiyeh

but I think in general, like, biotech's hurting right now. Uh, we want to make a world where it's actually like engineering, right, where you're actually just getting these really scalable, amazing medicines. And I think that's where we're headed. Incredible. Well, thank you for joining. Great to have you on.

Yeah. Awesome. Thanks. Talk to you soon. Have a good one, Ellie. All right. Bye. See you. And next up we have Karim from Ramp, the man himself, coming in to talk about Ramp's new agent launch. Uh, is he in the waiting room? We will bring in Karim from Ramp to chat.

Second time on the show; he hopped on at, uh, Hill and Valley. That's right. First time as a remote guest. Great to see you. How are you doing, Karim? Hello. It's great to see you guys. Can you hear me okay? Yes. Loud and clear.

I don't think you need much of an introduction, so why don't you just kick it off with, uh, the announcement and break down the launch today, and then we'll have a bunch of questions. Yeah, of course. I mean, it's been a very exciting day for us at Ramp. We, uh, finally announced our, I guess, our first agent.

We're going to be announcing a lot more agents soon, so it's hard to keep track sometimes. Um we've been playing with a lot of tech internally.

We think we're in a very, uh, interesting space. Maybe the thing about it is that a lot of people from the outside look at Ramp and visualize the card; they think about the, um, fintech aspects. But at the end of the day, what we're really trying to do is help reduce the drag on companies that happens when there's just a lot of work between teams: a lot of papers being passed around, questions being asked, the things that really get in the way of doing work.

And that first agent that we're building is really just that. It operates in the messy middle between finance teams and every other team trying to spend to move the business forward. Um, and yeah, that's basically what we launched today. So it's, um, an agent for controllers.

Um, it knows a lot more about the expense policy of a company, the rules that are in place that govern spend, than any single employee, and it knows a lot more about every single transaction than any single person on the finance team.

So it can operate in the middle and automate all the little decisions and the extra work that needs to get done to figure out, uh, what's in policy or not. And it's immediately available. Like, that's part of the power, right?

In the sense that if an employee wants to decide whether or not they can buy something, or whether something's in policy, you no longer have to be Slacking somebody. It could be in the middle of the night or off hours, where that's creating drag, that delay, right?

100%. Uh, that's certainly one part of it. Like, you can ask questions about your policy, and ask questions about specific transactions live, to figure out whether they would be in or out of policy.

But more interestingly, once you make a transaction, it's already doing work to go figure out: well, that transaction you made at that restaurant, um, it looks big, but if that was a dinner with 10 people, maybe it's not as bad as initially thought, and it's actually in policy.

Well, that information is in your calendar, it's in your email; it's, um, sometimes outside the immediate context of a transaction.

So the agent will go out on the internet in some cases, uh, contact vendors or pull data from APIs on your behalf, to really gather all that context and make better decisions, um, on behalf of the company.
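The dinner example above can be sketched as a tiny policy check. To be clear, this is a hypothetical illustration, not Ramp's actual policy engine: the $75 per-person limit, the `is_in_policy` function, and the attendee list standing in for calendar context are all assumptions.

```python
# Hypothetical sketch of the dinner example: a charge that looks large on
# its own can still be in policy once calendar context (the attendee list)
# is pulled in. The $75 limit and all names here are illustrative.
PER_PERSON_MEAL_LIMIT = 75.00

def is_in_policy(amount: float, attendees: list[str]) -> bool:
    """Return True if spend per attendee stays within the assumed limit."""
    headcount = max(len(attendees), 1)  # guard against an empty list
    return amount / headcount <= PER_PERSON_MEAL_LIMIT

# A $600 restaurant charge alone is out of policy...
print(is_in_policy(600.00, ["solo diner"]))                     # False
# ...but fine once the calendar shows a dinner with 10 people.
print(is_in_policy(600.00, [f"guest {i}" for i in range(10)]))  # True
```

The point of the sketch is only the shape of the decision: the raw transaction amount is ambiguous until outside context (here, headcount) is joined in.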

I want to talk about, like, the word agent, and how agents are fitting into the stack of a tech company these days. Because there's kind of always been an agent behind the scenes working. We used to think of these as cron jobs: there's a long-running process where, when a receipt comes in, it gets tagged, and there has been for, I think, years. I don't want you to share anything you can't, but there's been an LLM interacting with receipt data for a long time; it's been agentic in the sense that it was behind the scenes.

And so I've been thinking about this in the context of, like, Meta, and some of the value that Zuck is going to be getting from having a frontier AI model.

It's like, there are so many workloads inside a business that has billions of users that just happen behind the scenes, and these are agents, but they're almost internal agents. And so I'm wondering about your decision to... Yeah.

position an agent as a user-facing agent, versus something that's just a process running behind the scenes entirely. Well, there's a bit of a difference, right?

Because when you think about these processes running behind the scenes, for the most part, the code is pretty deterministic. The tools are the same. It's built for accuracy and auditability, and you have high confidence.

It could trace back the path that, uh, the old-school agent, let's call it that, went through. Exactly. And in this case, it's less deterministic. You give the agent a set of tools.

You can essentially give it access to, um, let's say, the ability to call and the ability to email, and you could be like: go figure out a way to get the receipt, given what you know about the restaurant.

And it will browse the web and figure out that that's the phone number of the restaurant, and then try to call the restaurant, and if that doesn't work, then it will try to email the restaurant, until it achieves that goal of getting you the receipt, or it fails and you can then interact with it.

Um, in this case, the instructions that we're giving the agent as we're building it are very high-level.

You're just giving it high-level instructions, um, and access to tools, and that's very different from the old way of building these processes, in which you had to be very specific about all these paths.
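A minimal sketch of the contrast being drawn, assuming nothing about Ramp's internals: instead of one hard-coded deterministic path, the agent gets a goal and a set of tools and escalates through them until the goal succeeds or fails. Every tool function below is a faked stand-in.

```python
# Escalation-loop sketch: the agent is handed a goal ("get the receipt")
# and tools (browse, call, email) rather than a fixed deterministic path.
# All tool behaviors below are faked for illustration.
def browse_web(vendor: str):
    return None  # pretend the receipt isn't posted online

def call_vendor(vendor: str):
    return None  # pretend nobody answers the phone

def email_vendor(vendor: str):
    return f"receipt from {vendor}"  # pretend the email request works

def fetch_receipt(vendor: str, tools):
    """Try each tool in order until one yields a receipt, else give up."""
    for tool in tools:
        result = tool(vendor)
        if result is not None:
            return result
    return None  # goal failed; a human can step in from here

print(fetch_receipt("the restaurant", [browse_web, call_vendor, email_vendor]))
# prints "receipt from the restaurant"
```

The real system is less deterministic than a fixed tool order, of course; the sketch only captures the "goal plus tools" shape versus the "explicit path for every case" shape.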

So it would take a lot longer to build these systems, to debug them, to update them, um, etc. We lost you; your Zoom background is turning you into, like, a ghost. It's very funny. It's a superintelligence. I think you just need a little bit more light on your face.

I think we actually... I actually lost power. Wait, you lost power? There we go. I'm back. There we go. That's wild. Much better.

Um, I want to talk to you about the data walls that are going up, and some of the battles that are playing out in, like, the enterprise world. Because when I read stories about, you know, companies that want to do enterprise search, you can see that, well, maybe Google doesn't want you taking that; maybe they want that for themselves.

Ramp's in a very different position, but at the same time, there are evolving policies about, you know, how friendly this is. A classic example is Amazon not sending the itemized receipts to Gmail because they just didn't want to give Google the data.

Um, but as a Ramp customer, I want the Amazon details pulled in via Gmail through the Ramp integration.

So talk to me about how the broader trend is playing out, and then how you go to big companies and say: hey, you know, work with us, our clients want to be able to pull data from your service, and we're not going to build a delivery network, Amazon, so we're not a competitor for you. Yeah.

No, for sure. I mean, most of the data that we need at the end of the day is data that is, quote-unquote, owned by our users: the businesses that are on Ramp, their employees. Yeah.

Um, I think it's a little bit easier to operate in the B2B space, because, I guess, what governs who owns the data and whose data it is is a lot clearer than in a lot of consumer applications.

So in our case, it's: what data do we really need to know, in the case of the agent that we just launched, to figure out whether something's in policy or not? It's metadata about the transaction, right?

Like, um, what's in the receipt. At the end of the day, stores owe you a receipt; that's your receipt, right? We get that information. There's information that we get through the networks, through Visa: the metadata about the transaction, the geographical location of the transaction, maybe whether it was an in-person transaction or not. There's data that's in your inbox, in your email, which, again, is also owned by the company. Um, we haven't really encountered or heard a lot of pushback and challenges.

I found most of the challenges in getting the data to be more, like, technical. How do you make sure you get it quickly, clean it up, and get it accurately? As opposed to ones where third parties are trying to make it harder and harder for us to access the data. Mhm.

You've been in that; that was kind of the story of the previous company. Yeah, of course. We had a lot of these problems.

I mean, there were lots of funny moments at, uh, Paribus, our previous company, where, I mean, we were really building an agent for consumers to help them save money on their online shopping, right? And we were trying to log in on their behalf to Amazon accounts and Walmart accounts, etc.

And of course, they'd put up blocks, they'd put up captchas. And today those captchas seem like a joke. I think, uh, any halfway-recent version of ChatGPT or Claude is able to solve those captchas very easily.

Well, that's one of the ways the internet's getting worse right now: the captchas are actually getting so hard and annoying. Yeah. When we go to the gym in the morning, Jordy has to log in. It takes him, like, two minutes to get through the captchas for this gym.

It has, like, the most military-grade security to get to a gym login, and it just gives you a barcode that you scan.

But on that, I'm actually interested. Because I can imagine, you know, Ramp has tens of thousands of customers, high-value business customers, and other people that are building agents, I'm sure, would love to be able to take actions on the Ramp platform. But at the same time, you guys are trusted to handle the finance, basically the finance back office, for these companies, and you don't want, like, an agent hallucinating and taking actions on the Ramp platform.

So I'm curious, um, how you see that dynamic playing out, because I'm sure you've been approached by a lot of companies saying, "Hey, we're building this agent to do this thing. We'd love to be able to, you know, get authorization." Of course. I mean, we're thinking through that, um, a lot right now.

I think there are good ways of exposing the right information to the right agent, as long as, um, our customers are very aware of what they're exposing. And there are lots of interesting applications for us to work on. Like, in the case of, uh, any large purchase at a company, there are multiple parties within that company that need to review it or approve it: you want to review a certain vendor and look at their, uh, data protection policies.

You want to look at the legal agreements. In some cases, you want to negotiate the price. And you can imagine a day in the future where a lot of our customers have an agent tool that they trust, or agents that they trust, for legal work, agents that they trust for IT work, etc.

And we're, uh, very interested in actually working with some of these companies, but we've got to figure out on our end how we expose the right interface, uh, so that we're really ensuring the security of our customers' data.

So, uh, it's an ongoing discussion and work stream. Yeah. Uh, last question for me. Um, the Grok 4 launch was very benchmark-heavy.

Um, it seems like, you know, the consensus is that it's a good model, and so as soon as I see that, it becomes about cost per token. And so I want to hear from your perspective what drives decision-making. How big of a line item is it, roughly, or how much time is spent thinking about LLM inference optimization at your scale? Like, roughly, how big of a deal is it? And then what is the workflow to decide: can we use a cheaper model?

Do you have internal benchmarks? Are you just checking these things? Like, how are you making decisions about which model to use for what problem? Yeah, that's a great question.

I mean, I'm a lot more paranoid, uh, about being too slow to try the newest model, and the latest and greatest tools, than I am about, uh, maybe overspending a little bit in one area. Sure.

I mean, the amount of time and money wasted at companies doing BS work is just insane, such that if we're debating whether you can make something faster by spending an extra dollar or half dollar, the value that we're able to create is so big that I don't worry about it too much.

But we do have, internally, um, a somewhat imprecise stack ranking of the different places where we need to make inference calls. And in some cases, they're very simple, high-volume, um, kind of low-risk, right?

Like, you're trying to, um, normalize or clean up some merchant data to figure out the appropriate spelling, and maybe the right photo to use. It's not the end of the world if it's not perfect. Uh, we're doing it at high volume; it better be cheap.

So we have a kind of stack ranking: this is something high-volume where we need to be cheap; this is something low-volume and high-stakes where we need to be accurate. And we'll generally try the newest and greatest models in the places where we think they'll make the biggest difference.
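That stack ranking could be sketched as a simple routing table keyed on volume and stakes. The tiers and model names below are placeholders, not Ramp's actual vendors, models, or routing logic:

```python
# Route each inference workload by (volume, stakes), per the stack ranking
# described above. All model names are placeholders.
ROUTES = {
    ("high_volume", "low_stakes"): "small-cheap-model",
    ("low_volume", "high_stakes"): "frontier-model",
}

def pick_model(volume: str, stakes: str) -> str:
    """Return a model tier for a workload; unknown combos get a default."""
    return ROUTES.get((volume, stakes), "mid-tier-default")

# Merchant-name cleanup: high volume, low stakes, so it better be cheap.
print(pick_model("high_volume", "low_stakes"))   # small-cheap-model
# Policy call on a large transaction: low volume, high stakes.
print(pick_model("low_volume", "high_stakes"))   # frontier-model
```

The table form also makes the later point concrete: as workflows get broken up, individual entries can be repointed at smaller or cheaper models without touching the callers.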

And over time, we'll break up some workflows, and some parts of them will become, uh, cheaper and more repeatable with smaller or cheaper versions of the model, and it will just evolve.

Um, I mean, I remember, like, micro-optimizing every single thing on our AWS account back in 2014, right? It was a lot harder back then. Um, I think we also pride ourselves on being the time-and-money company.

So we do care a lot about making sure that we don't waste our own money and our own time. But I would say the TL;DR is: our time, and engineering time, is the most valuable thing here, and I'm a lot more focused on that than on anything else. Yeah.

On the time issue, uh, what do you think about the various latency tradeoffs? I'm sure if an employee wants to know whether something's in policy, and you hit o3-pro and it waits 10 minutes, they're probably just going to Slack their manager and ask them.

Um, but you're going to get a really accurate answer that's really detailed. And so how do you think about those tradeoffs in latency? Yeah, I mean, it really depends on where in the workflow we're making that inference call, right?

Like, if it's live in the interface and the user expects a quick answer, we'll be using some of the faster models. Sure. But the reality is, a lot of these agentic workflows that are being kicked off at Ramp happen behind the scenes, right?

Like, you make a transaction, and you maybe get a very quick question from Ramp's AI to gather a little bit more context.

Like, that's enough, and then from there it'll kick off another task that can be a little bit slower, and that'll happen in the background. And by the time it reaches, uh, a bottleneck, or reaches a place where it needs additional feedback, it'll be in someone else's notifications or on someone else's Slack. You can take a little bit of time when the work is going from one person to another person, but less when it's the same person interacting with the interface. Um, that's generally the thinking. But, cool. Yeah, I tried some of the newest browsers today. I tried Comet today; I'd tried it a couple weeks ago. And I think what they're trying to do is incredibly cool.
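The fast-question-then-background-work split just described can be sketched with a worker thread: the user-facing follow-up is answered immediately, while the slower agentic task runs off the interactive loop and its result surfaces later as a notification. Function names and messages here are illustrative, not Ramp's code.

```python
import queue
import threading

# Fast path: a low-latency, user-facing follow-up question.
def quick_question(transaction: str) -> str:
    return f"Who attended the {transaction}?"

# Slow path: deeper context-gathering that runs outside the interactive
# loop; its result lands in someone's notifications rather than blocking.
def background_work(transaction: str, notifications: queue.Queue) -> None:
    notifications.put(f"context gathered for {transaction}")

notifications = queue.Queue()
worker = threading.Thread(
    target=background_work, args=("team dinner", notifications)
)
worker.start()
print(quick_question("team dinner"))  # answered immediately
worker.join()
print(notifications.get())  # surfaces later, outside the interactive loop
```

The design point is where latency is tolerable: at a handoff between people, the background queue can take its time; in a live interface, only the fast path runs.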

Uh, but I often find myself thinking, like, damn, I wish this was a little bit faster. And I know it's coming, but I think for some of these browser agentic calls, you want it to be really fast. Yep. Yeah. I was thinking about that in the context of the OpenAI browser.

And unless they figure out something that makes it basically 10 times as fast, I'm still going to default to Chrome if I have both of them open, just because I'm like: well, I just really need a fast answer here. That was the Chrome innovation, right? Chrome won on speed.

Like, they just went and optimized the code, and they nailed speed, and it was enough to leapfrog.

Uh, and so you could see, I mean, that's, like, the bull case for Apple coming from behind. It feels like xAI and Anthropic and OpenAI and Gemini are all roughly at the frontier.

If you can just get something that's at that frontier, not any new innovations, but hyper-optimized, and it runs locally on your phone and it's spitting out, like, tons of tokens every second, you have a product that would be very rapidly adopted. Uh, it's exciting. It matters a lot.

I mean, I think one of the, like, weirdest UX patterns on ChatGPT now is that I have to do the work to figure out whether to use, uh, o3 or 4o every time.

Do I have 10 minutes, or do I want... And 4o is always so good that I usually don't need to, but then I'm just like: well, I want the best, of course, and I'll come back to it. And it's such a weird paradigm. It's going to be something that dates us.

And I just know our kids are going to be like: what did you have to do back then? You had to rewind the VCR tape. You had to put the disc in the Xbox. You had to pick which model to use. This is insane. It's so legacy, and it's going away. But we're just in this weird spot where we don't have a model router solved.

And it feels like the easiest thing is, like: which model should we use for this? I don't know. We'll see. Yeah. I mean, I don't know if you guys... I grew up in Lebanon. I still remember the days of dial-up, where... Yeah. You would have to, uh, kick everyone else off. Select... Well, exactly.

Well, select the phone line. In our case, it's like: okay, which phone line am I going to use? Like, I don't know. Can't you tell me which one is free and pick it for me? And, like, no, I still have to, right? Yeah. It seems like the easiest thing to do.

And also, I mean, this is just, you know, complaining about the app that I use 30 minutes a day, at least, uh, ChatGPT. But, um, I almost wish I could just define it in the prompt, and just say: hey, use o3-pro, and then here's the prompt. As opposed to needing to click the UI, change it, switch it, and then pick, instead of just being able to go back and forth. And, I don't know, I mean, it's a good sign, because people are using this stuff so much that they're frustrated by these, like, niche UI things.

So, you know, it's an exciting time. I forget who it was who posted on X, I think a couple weeks ago, that every company is, like, one great UX breakthrough away from something amazing. And I think that will be true for a long time.

Like, there's a lot of alpha right now in just great UX and good patterns. Uh, we haven't figured it out. We're still in maybe the terminal phase of personal computers, right? Like, when is the mouse going to come out? When are the right GUIs going to come out?

Like, there's a lot of that happening right now, and yeah, it's a fun time to be building. One last question for me. Uh, on Monday, uh, Dwarkesh released an article and then came on the show kind of talking about his timelines around, you know, when an AI agent would be able to do his taxes, right?

Sort of, like, agentically. Basically, like, a fully agentic experience: being able to say, I want to do my 2025 taxes, and then it just sort of autonomously runs.

How are, like, you know, uh, Fortune 500 CFOs... What are their timelines around... Maybe you just tell them what the timelines are, like: okay, by 2028, you know, we're going to be able to do this for you. But how is the, uh, finance arm of the C-suite kind of anticipating, uh, the rate of advancement? Obviously, the agent today is a step towards that future, but you'll obviously need a variety of different agents, or... Well, I think in terms of capabilities of LLMs, we're there.

We have the capabilities. Like, the bottleneck on being able to do this today is having the right context, right? Uh, well, some of that context is in my head. So the AI needs to know to ask me the right questions efficiently so I can answer them.

But even when I'm working with my accountant... Like, pick the best accountant in the world for your personal taxes. If you just tell them, file my taxes, they can't do anything. Maybe you tell them, file my taxes and here's access to my email; they could do a little bit more, but they can't get it fully done.

Then tell them: "File my taxes. Here's access to my email. You can call my wife as much as you want. Uh, you can look through my drawers." And you give it more and more of these things. Like, maybe it could do it, but it's going to get lost. It's going to take forever.

And really, what we need to do, even for businesses, is figure out the right patterns for extracting context that's in people's heads, organizing it, um, and getting people comfortable with, uh, connecting different tools, like your inbox and things of that nature.

And I think in terms of tech and capabilities, we're there. We're not really missing anything. So there's a lot of UX work left, I think. Yeah, we almost need an agent that can email me a question and put it in my inbox, which is effectively my to-do list. And that's what my accountant does when tax season happens.

They email me and ask. Well, that's why this is cool. You can just take a picture of a product and ask it: if I buy this, you know... Yeah. Is it in policy? Yeah. 100%. Yeah. Uh, well, thank you so much for stopping by. This was great. We'll definitely see you soon, Karim. This was great.

To the whole team: talk soon. Talk to you soon. Bye. Um, and that is, um, the rest of our guests. We are through that. In other news, uh, Periodic Labs. There's this scoop from Natasha Mascarenhas.

Uh, the startup, being co-founded by Liam Fedus and Ekin Dogus Cubuk, great names, is in talks to raise hundreds of millions of dollars in funding at above a $1 billion valuation. And the two-month-old startup is looking to apply AI to physical science, starting with discovering novel materials. Wow.

Let's give it up for the two-month-old unicorn. We've got to have these guys on the show. That is extremely fast. Uh, I also liked this post from David Pell. We're getting him back on the show ASAP. We had a lot of fun talking to him a couple months ago.

Uh, he said: "I'm touring apartments in New York, and just about every new build has the same soulless aesthetic. Flat walls, white paint, no cornices, no ornamentation, just a room in a box.

Only one real estate agent said to me, 'If you want something with character, you're going to have to stick to pre-war buildings.'

Look, I'm all for some efficiency gains, but we've created a world where new things are soulless things, and that's not how a society as modern as ours should function.

Intuitively, you'd think that a wealthier society would build more beautiful things, but not ours." And I completely agree.

What's crazy is that this isn't... I mean, these apartments look nice, um, but this continues all the way to $20 million houses that are still bland. And I think it's mostly because of time and, uh, all the difficulties with permitting. Because even if you have the resources to build something from scratch, saying, okay, I want these ornaments, and I want this, and I want something that's really expressive of my personality...

Well, if you want that, no one else wants that. So you have to build it, and you have to, you know, underwrite it, and build it to code.

You make sure it's to code and then get it built, and then the secondary-market value is going to be less, because not everyone wants Hearst Castle.

Whereas if everyone builds the exact same thing, it's a perfectly liquid market, because every apartment is interchangeable with every other. Yeah. Good point.

It's kind of a function of just, like, uh, modernity, but it's more a function of, uh, people not, you know, ever risking it on building a disaster project when making their forever home. People learned the lesson of, uh, William Randolph Hearst too much. They should have just never learned that lesson.

Just rip it and send it, and build something that no one else will want to buy and that will take decades to build. That's always the best. Well, I have a good place to end it. Uh, Rob Petrozzo says the original Hermès Birkin bag prototype just sold for $10 million at Sotheby's. There was a two-minute standing ovation.

He says: bull market confirmed. We love a bull market. The original prototype. Fascinating. That's wild. Makes sense. It's, uh, incredible lore, and, uh, I would be excited for a bull market in alternative assets such as Birkins. It'd be great, and, uh, you should be too. But, uh, that's a great show, folks.

We will be back tomorrow. I cannot wait. We will talk to you tomorrow. Have a good day. Cheers. Bye.