Resolve AI raises $40M extension at $1.5B valuation to build agents that debug production systems

Apr 21, 2026 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Spiros Xanthos

Speaker 2: to meet you. Come back on soon.

Speaker 1: Yeah. We'll talk to you soon. Have a good rest of your day. We're running a little bit behind, but up next we have Spiros from Resolve AI, raising a massive round to build AI that runs production systems. Let's bring in Spiros. How are you doing? Hello, guys.

Speaker 4: Good to be here.

Speaker 1: Welcome to the show. Sorry we're running a little bit late. Kick us off with an introduction on yourself and the company.

Speaker 4: I'm one of the founders and the CEO of Resolve AI. We're building agents that can help you debug and run production. Think of it

Speaker 1: Yeah.

Speaker 4: as the counterpart to coding agents that produce all this code. Yeah. And our agents are there to support you.

Speaker 1: Okay. Is your customer always, like, deeply in the throes of vibe coding, having rolled out agentic coding across the organization? Like, who is the target customer? Do they have to already be deep in the agentic coding wave to really get the value here?

Speaker 4: They don't have to, but the two are correlated. Like, anybody who runs a lot of their own systems has this problem. The only solution we had so far is humans manually solving it. Right? Using the tools, being on call. Of course, now AI allows us to automate all of this. Yeah. But I would say this was true before. Now, with all the AI-generated code, it becomes a necessity. Right? So we often see a strong correlation between the two.

Speaker 1: Yeah. And what are customers coming to you asking for? Is it, I want the code that's, you know, written? We're writing way more lines of code. We want it to be more readable, or we want it to be more secure, or we want it to be more performant, or all of the above.

Speaker 4: The way to think about it is like, for anybody who's delivering their business through software

Speaker 1: Yeah.

Speaker 4: Look at some of our customers. Coinbase Sure. Salesforce, MongoDB. Right? Yeah. To them, reliability is of paramount importance. Sure. If anything goes wrong and affects customers, it's a big problem. Yeah. So Resolve necessarily becomes the first line of defense Yep. that captures any problem that happens in production before it can affect end users. Yep. It gives you a resolution and a fix, let's say, so you can accelerate that loop. Right? And it doesn't take too much human effort, but more importantly, it doesn't cause impact to customers.

Speaker 1: And, like, I mean, the company is now over $1,500,000,000 in valuation. What has been, like, the key to growth? Is it just product-led growth? Do you have a big sales team? How are you actually scaling the business as you scale the valuation?

Speaker 4: Yeah. So this is a very big problem. Right? Anybody who, like I said, delivers their business through software is facing this issue. Yeah. And whether you're a CTO, you know, who pays for, let's say, developers to focus on reliability, or whether you're an individual who has to solve this problem, you'd rather have AI do it for you.

Speaker 2: Yeah.

Speaker 4: So we've seen, like, a huge amount of demand from day one since we launched the company a bit more than a year ago. Yeah. And we've seen it coming from both big and small companies. We primarily focused on larger enterprises because we think there is a lot more complexity. Yeah. You know, given the complexity of the software. And, you know, most of the, let's say, the demand comes inbound to us Mhmm. because it's a well-understood problem. And, of course, we have, like, both a product-led approach, let's say, but also a sales-led approach as we work with large customers.

Speaker 1: Yeah. In some ways, like, the naive approach would be, okay, just point a typical AI agent at the code base and just tell me, you know, where the fault lines are. But I imagine there's some special sauce in the engineering to understand knock-on effects that can happen across a large code base. Are you actively working around context windows, or creating, like, a special harness to understand these problems that can come up before they do?

Speaker 4: Yes. So think of it like we have a production IDE, basically. Right? The same thing you have for your code, we have it for all your production systems. Yeah. Production involves code. It involves, let's say, telemetry, logs, metrics, tools like Datadog, Splunk. It involves AWS, right? So you have to deal with all of these, not just code. Yeah. And then we are also training our own models now to improve, let's say, the state of the art. Right? You can go far enough, let's say, with, you know, a good harness and a lot of work, let's say, on the agentic front. But now, and we just announced, together with our funding, that we're building a lab to focus on actually, you know, training our own models for this domain.

Speaker 1: Sure. Sure. What goes into getting, like, relevant data or actually nailing a specific model for this? Because I imagine that you have some great clients. They probably don't want you training on their data. At the same time, if you just grab some open source code, it might not be as complex as, like, the Coinbase monorepo or whatever they have going on over there. So how do you actually create enough training data to justify a special model?

Speaker 4: What is important to understand here is, like, the training doesn't happen on code per se. Right? What it happens on is actually the actions a human takes to perform a task, for the most part.

Speaker 1: Yeah. Yeah.

Speaker 4: And we're talking about very long, let's say, planning tasks here. Right? It might take, like, many, many iterations. Yeah. Looking at code, looking at Datadog, looking at infrastructure. Yep. And this, generally speaking, is not in the training set of models. Mhmm. And software, let's say, is generally both a deep and a wide domain. Right? So I think if you actually focus on building a model for the types of problems we're trying to automate and how you run and debug production, you have a lot of gains in performance, cost, but even, like, quality of outcomes. Right? And that's our goal. And I would say, you know, the big labs make it look like it's impossible for anyone else to build a model, but

Speaker 4: I don't think that's the case.

Speaker 4: Yeah. And, you know, that's what we're seeing ourselves with our investments.

Speaker 2: Yeah. That makes sense. How do you put together such a low-dilution round?

Speaker 1: Yeah. Tell us about the round. I wanna hit the gong.

Speaker 2: $40 million on one and a half billion.

Speaker 4: So it is an extension. We just did, essentially, the A

Speaker 1: We just did the A at

Speaker 4: a billion dollars, like, two months ago. Right? And I would say, Resolve

Speaker 1: There we go.

Speaker 2: Sorry. Continue.

Speaker 4: Resolve has essentially, in many ways, created this market. Right? Like, AI for production. Sure. And I think it's well understood by investors. It's also proven, given the customers we have. Yeah. And we're also a very ambitious company. Right? Like, we are obviously trying to build the agents and the models for this domain. Mhmm. And we have a lot of traction. Mhmm. So, I mean, it's as simple as that. Right? Like, there's nothing you can do to create a low-dilution round other than be very successful, in my opinion, these days.

Speaker 2: That's a great answer.

Speaker 1: That's a great answer. Step one, be successful. I love it.

Speaker 4: Step one, focus on the business. This isn't my first startup as a founder. Yeah. Yeah. I made this mistake many times before, right, of thinking that raising money is success. It's not. It follows real success on a product.

Speaker 1: Yep. No. No. That's 100% right. I love it. Well, thank you so much. Congratulations on the new round.

Speaker 2: Yeah. Great having you on. Congrats to the team. Excited to watch you guys grow.

Speaker 4: Thank you.

Speaker 1: We'll talk to you soon. Cheers. Have a good day. Goodbye. Up next, we have Carolina Aguilar from InBrain Neuro Electronics, building the first in-human study of graphene brain interfaces.