LangChain raises $125M at $1.25B valuation to build the reliability layer for enterprise AI agents
Oct 21, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Harrison Chase
Uh anyway, we have our next guest in the Restream waiting room. Harrison from LangChain. Let's bring him into the TBPN Ultradome. Harrison, how are you doing?
Welcome.
I'm doing well. Thank you guys for having me. Excited to be here.
Thanks so much for hopping on the show. Would love to get a brief update on where the company is. I think most people are loosely familiar, but I'd love for you to break it down for us, and then we can get into the news today.
Yeah, absolutely. So the company is LangChain. We started as an open-source project about three years ago; this Friday is the three-year birthday, so we'll do something fun for that. We started as a company a few months after that, right in the whole hype of the ChatGPT GenAI cycle, and we've evolved from an open-source package into a whole company. We have Python and TypeScript packages and a platform now that's used by a number of sponsors here, including Vanta and Fin. I saw the commercial for them earlier, so thank you.
And yeah, we've grown up into a company.
Awesome. And give us the news today. I want to ring the gong. Get that gong ready.
Yeah, we're excited to announce a funding round of $125 million at a $1.25 billion valuation.
Amazing. Let's get into the specifics of the core product focus today and how companies are getting value.
I'm super interested in comparing how your approach is working relative to, say, the no-code agent builders. There are all these different building blocks, and then if you go to YC you can get a pre-built agent for just about anything as a SaaS product. So walk me through some of the customer use cases. How are people actually getting value out of LangChain, and how have they evolved the way they work with LangChain?
Yeah. So, okay, core thesis: we think LLMs are great. We think they're going to transform what applications look like. We think they're way more interesting when you connect them to data and APIs and build these systems around them, and that's what we and the whole industry call agents now. And we think it's quite hard to build these agents. Karpathy did a great interview the other week where he talked about the decade of agents and how we're not super close to AGI, and I think that's totally true. Because it's quite hard to build these agents, we've really focused on going down the stack and providing more low-level tooling to build these mission-critical, reliable agents. Compared to some of the no-code solutions, which I would argue are mostly used for internal productivity things, we're much more focused on external-facing or mission-critical things. Fin's a great example of a customer support bot: it's external-facing, and you're going to have an engineering team building it. The interesting thing we've seen is that, again, we're primarily a pro-code platform.
Yeah. But a lot of the evals, a lot of the debugging, a lot of the prompting happens with product managers and designers and subject matter experts. And so that's really where LangSmith, the platform we're building, comes in, to bring all those worlds together.
LangSmith. What does your token consumption look like? I imagine I should think about you as a SaaS company with great margins. You don't have to go into exact details, but it seems like you're not in the token-reselling business: buy from the token factory, resell intelligence. It feels like you're building more of the proper shovel for the gold rush.
Yeah, that's exactly right. Some fun news, though: one of the things we announced today is an agent in our product, actually the first agent in the product. Basically, what it does is look through all of your agent's logs. One of the things we saw is that people will put these chat boxes in front of users, who ask anything under the sun, and you have no idea how folks are using it. A big part of making these agents work better is understanding how people are using them and then bringing in the right tools and the right data for the use cases people are actually trying to use them for. Those types of insights were typically done by humans; we now have an agent in the product that will help with that. But yes, generally our token consumption is very low.
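To make the insights idea concrete, here's a minimal, hypothetical sketch of mining chat logs for usage patterns. This is not LangChain's actual implementation: a keyword-rule "classifier" stands in for an LLM topic labeler, and the log lines are invented. The point is the shape of the analysis, tag each message with a topic and count frequencies to see what users actually ask for.

```python
from collections import Counter

# Hypothetical chat logs pulled from an agent's trace store.
logs = [
    "How do I reset my password?",
    "Refund for order #123 please",
    "Password reset link not working",
    "Can I get a refund?",
    "What are your support hours?",
]

# Toy classifier: keyword rules standing in for an LLM topic labeler.
TOPICS = {
    "password": "account_access",
    "refund": "billing",
    "hours": "general_info",
}

def label(message: str) -> str:
    text = message.lower()
    for keyword, topic in TOPICS.items():
        if keyword in text:
            return topic
    return "other"

usage = Counter(label(m) for m in logs)
# The insight: which capabilities (tools, data sources) to invest in next.
print(usage.most_common())  # → [('account_access', 2), ('billing', 2), ('general_info', 1)]
```

In a real deployment the labeler would be an LLM call over thousands of traces, but the output is the same kind of ranked usage breakdown a team would act on.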
Yeah. What's coming down the pipeline on the consumer side? Obviously engineers are using coding agents, and you have agents like deep research that are search-focused, but what's exciting to you on that front?
Yeah, so, you mentioned coding agents and deep research.
I'm coming from the framework of the decade of agents: what we were maybe promised last year, that people said would get delivered this year, is just taking a little bit longer. So one of the areas I'm most interested in is this idea of what we call deep agents. It builds on the idea of deep research, Claude Code, Manus, all of these general-purpose agents that run for an extended period of time. They're actually quite simple algorithms under the hood, but there's a lot of prompt engineering and context engineering that goes in, and so deep agents is this agent harness that will help power a lot of these longer-running things. And to answer your question, we see everyone building some version of deep research. It's just such a good product fit, not only because the agents are good at it, especially for some of the search things, but in terms of the UI/UX. I think it's quite a natural fit, because it runs for an extended period of time but creates a first draft of something. If you think about AI in general, it's always struggled with the last-mile problem: it's easy to get to 80 or 85%. So if you can come up with products where the AI still does a sizable amount of work, but you have a human in the loop at the end who can review it, and it's basically creating what we call a first draft, I think those types of products are really well suited to be the next gen of what we see come out.
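A hedged sketch of that first-draft pattern: an agent loop runs tool steps until it emits a draft, then hands off to a human reviewer instead of shipping directly. All names here are illustrative (a stubbed model stands in for an LLM, a canned function for a search tool); this is the general shape of a long-running agent harness, not LangChain's API.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False

def stub_model(task: str, notes: list[str]) -> str:
    # Stand-in for an LLM call: after two "research" steps, write the draft.
    if len(notes) < 2:
        return f"SEARCH:{task} part {len(notes) + 1}"
    return "DRAFT:" + " / ".join(notes)

def search_tool(query: str) -> str:
    # Stand-in for a real search tool.
    return f"findings for '{query}'"

def run_deep_agent(task: str, max_steps: int = 10) -> Draft:
    """Loop: the model decides the next action; stop when it produces a draft."""
    notes: list[str] = []
    for _ in range(max_steps):
        action = stub_model(task, notes)
        if action.startswith("SEARCH:"):
            notes.append(search_tool(action.removeprefix("SEARCH:")))
        else:
            # The agent returns a first draft; a human handles the last mile.
            return Draft(text=action.removeprefix("DRAFT:"))
    return Draft(text="(incomplete)")

draft = run_deep_agent("market report")
print(draft.text)      # the reviewable first draft
draft.approved = True  # human-in-the-loop sign-off
```

The algorithm really is this simple; the engineering effort in real systems goes into the prompts, the context passed between steps, and the tools.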
Yeah. With the raise, I imagine growth has been fantastic; Institutional Venture Partners isn't ripping checks on a whim. But what's been the secret to growth? Has it been a bunch of great customers scaling up consumption, so their LangChain bill goes up and you're making more money? Have you been moving upmarket to the Fortune 500? Have you been onboarding tons and tons of new companies? Maybe a combination, but how should I understand the way the shape of the business is changing?
Yeah, it's a combination. I mean, everyone's doing stuff with GenAI. We have GenAI-native companies that are big customers; we have big enterprises that are big customers. Everyone's doing stuff there. I'd say about 30 to 40% of our revenue comes from self-serve, to give some idea of the split, so it's a pretty even split. In the past, a lot of it has been the consumer-facing companies. There's this interesting dichotomy of use cases: consumer-facing use cases are really high volume, but they're generally shorter interactions because latency is still really high.
It matters a lot.
And so we see a lot of our usage coming from these consumer-facing companies. But on the other end you have these B2B companies where the agentic workflows are just much more valuable: they're doing way more work, so there are maybe fewer of them, but the ROI they provide is much higher. The way we charge is basically usage-based, so a lot of it comes more from the consumer side of things, but we see that there's a ton of value on the B2B side as well.
How are you thinking about the open-source strategy? What are your role-model companies there? I'm always fascinated by this. I know the story of Red Hat a little bit, GitLab, and a bunch of others like Databricks. It feels like open source is sometimes just a tool that people pull off the shelf, like marketing; other times the company would not exist without the open-source community. How do you see that playing a role and evolving over the next couple of years?
It's a huge part of our company and what we do. The way we think about the life cycle of building agents is that there's a build phase, and then there's test, run, and manage. Everything in the build phase we try to do in the open source; that's LangChain and LangGraph. I think Vercel and Databricks are two of the companies that are good analogies there. We want to build the platform for agent engineering just as Vercel built the platform for front-end engineering.
Yeah. So pitch me: say I'm using the open-source repo, I'm happy, I'm scaling, you know I'm running this at scale, and you'd love for me to be a customer. Are you saying the downtime will be less, or I'll get extra enterprise features, or there'll be some sort of forward-deployed engineer who can jump in, do a sprint, and help me level up? What's the shape of partnering with you if I'm already big and scaled on the open-source repo?
Yeah. So it goes with that story of build, test, run, manage, and we've built out our product suite in that order. People get started building; that's where they enter. When they come to us on the commercial side, it's mostly for the testing phase. The biggest blocker we see people run into is that their agents just aren't reliable enough. It's really hard to get them working well.
And so we have best-in-class debugging and evaluation solutions that work with or without our open source. Actually, that's one decision we made early on: the test, run, and manage pieces are separate from the open source.
Got it. Kind of like Vercel: you can run more than just Next.js.
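As an illustration of the evaluation loop being described, here's a generic sketch (not LangSmith's API; the agent and dataset are invented stand-ins): run the agent over a small set of labeled examples and compute a pass rate, which is the number teams then try to push up.

```python
# Generic eval-harness sketch: score an agent against labeled examples.
# `toy_agent` is a hypothetical stand-in for the agent under test.
def toy_agent(question: str) -> str:
    canned = {"2+2": "4", "capital of France": "Paris"}
    return canned.get(question, "I don't know")

dataset = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("3*3", "9"),  # the agent fails this one
]

def evaluate(agent, examples) -> float:
    """Fraction of examples where the agent's answer matches the label."""
    passed = sum(agent(q) == expected for q, expected in examples)
    return passed / len(examples)

score = evaluate(toy_agent, dataset)
print(f"pass rate: {score:.0%}")  # → pass rate: 67%
```

Real eval suites replace exact-match scoring with LLM-as-judge or rubric scoring, but the workflow is the same: a fixed dataset, a score, and iteration on prompts and tools until the score clears the bar.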
Yeah.
So that testing, evaluation, and debugging is the first thing people typically come for. Then we have the run phase, which is really about running agents at scale. We didn't do deployment for LangChain in the early days because, to be honest, a lot of the things people were building in March of 2023 were really simple compared to what they are now, so there just wasn't that much new infrastructure needed. As we go into longer-running and more stateful agents, that's really what our runtime is aimed at. And then the manage aspect is something we're just starting to do more of; this is where our insights agent comes in. Now you have millions of traces going into your agent. How do you manage that at scale? So our product development has followed this life cycle. I think we try to stay pretty grounded in where the industry is and what's needed at the time.
Do you think reliability will always be one of the key challenges for agents?
I think like making them reliably good. Yeah.
Because right now, an 80% success rate for enterprise agents isn't good enough for a lot of use cases. Then it's hard to get it to 95, and it's probably even harder to get it to 99, and then there's the last 1%. Depending on the use case, reliability may matter more or less, but it feels like the age of agents will kick off when we have highly valuable, reliable agents in multiple categories. I'm wondering if it will just be a perpetual challenge because of the nature of LLMs, constantly being in the business of predicting the next token.
I think reliability is definitely the number-one blocker we see people focused on today. We did a survey about a year ago, and twice as many people said they were worried about reliability compared to cost or latency. One of the taglines we use on our website is "the platform for building reliable agents"; we think it's by far the biggest blocker out there. As an industry, we're learning techniques and practices to get better at it, and the models are improving as well, which helps. I do think it will continue to be a challenge, and I think the best agents out there innovate a lot on UI/UX to help overcome some of those reliability issues and put the human more in the loop. Cursor, for example: their superpower, in my mind, is that they've nailed the UX of how you interact with these models. And I think a lot of the ambient coding agents are taking off because the UX is great, because it creates this first draft. That first-draft paradigm goes hand in hand with this reliability thing. So we're approaching it from different angles.
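The compounding math behind that 80-to-95-to-99 point: if each step of a multi-step agent succeeds independently with probability p (a simplifying assumption), whole-task reliability is p^n, which is why adding a nine per step matters so much. A quick illustrative calculation:

```python
# Whole-task success rate for an n-step agent when each step
# independently succeeds with probability p (a simplifying assumption).
def task_reliability(p: float, n_steps: int) -> float:
    return p ** n_steps

for p in (0.80, 0.95, 0.99, 0.999):
    print(f"per-step {p:.3f} -> 50-step task {task_reliability(p, 50):.1%}")
# Even 99% per step yields only ~60% end-to-end over 50 steps;
# it takes 99.9% per step to clear ~95%.
```

Real failures aren't independent, and human review or retries recover many of them, but the exponent is why per-step reliability dominates the conversation for long-running agents.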
Karpathy had a great take on this. He was like, yeah, you can get a self-driving car to 99% accuracy, and you'll only crash every 100 miles: 99 of the miles will be safe, and then on the last mile you get in a crash. So he was saying it's not just going from 80 to 90 to 95%; you need to keep adding nines, so it's 99.9999. He really did the meme. I was wondering about Karpathy; you mentioned him earlier, so you clearly watched Karpathy on Dwarkesh. What was your reaction to the vibe shift around AGI timelines? There's one world where you're like, oh, maybe the froth in the market will cool off, maybe it'll be harder to fundraise; that's one possible reaction. Another one is breathing a sigh of relief: I'm going to have a job for a long time because I'm building something that's really durable and useful. How did you process that? Or did it update you at all? Maybe you were already thinking all the same things.
I think I probably have a boring answer here, which is somewhere in the middle. One of the first things Karpathy said, in the first five minutes, was that right now he's really focused on what these LLMs and agents can do for us and how they can be applied, and that's what we really focus on as well: how can we take these LLMs and do things? I'm personally not an AGI maximalist. I still think LLMs and agents will transform what applications look like, and I think that's a reasonable middle ground to take. So I don't know if that interview updated a lot of my beliefs. One of the memes on Twitter recently has been the idea that we're in a bubble with a lot of this AI stuff, and for that I have kind of a middle take as well. We're probably in a bubble in the sense that there's a lot of interest and a lot of hype here, but there will also be a ton of value created, and I think both of those things can be true at the same time. We obviously aim to be one of the companies that creates a lot of that value.
Well, it looks like you're well on the way. Congratulations, and thanks so much for hopping on the show. Great to meet you.
Great to have you on.
We'll talk to you soon.
Thank you for having me.
Let me tell you about AdQuick.com: out-of-home advertising made easy and measurable. Say goodbye to the headaches of out-of-home advertising. Only AdQuick...