AIUC launches the world's first AI insurance policy with ElevenLabs, underwriting hallucination risk and data leakage for enterprise AI agents

Feb 24, 2026 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Rune Kvist

platform that powers it. And without further ado, we will begin our Lambda Lightning round with Rune.

Look at this new. Oh yeah, we're getting new effects going. Welcome to the show. Ooh, look at this.

What's happening?

That is a beautiful lighting setup. Thank you for joining. First time on the show. Please introduce yourself and the company.

Yeah, great to be here. I'm Rune Kvist. I'm co-founder and CEO of the Artificial Intelligence Underwriting Company.

Okay.

Our mission is to underwrite super intelligence and we do that by building standards and insurance products for AI agents.

Okay. Uh sounds extremely straightforward and simple. Yeah.

Yeah. Plenty of data to build this on. I mean, yeah. So the big thing: Derek Thompson was kind of summing up the whole discourse around Catrini,

and his takeaway was that everyone can agree that no one knows what's going to happen. So it's a very difficult environment to be, you know, creating insurance products for, but I'm sure you're narrowing it down to some key initial use cases. So maybe you can talk about where this starts.

Yeah.

Yeah. Maybe the first thing to say is that regardless of whether anyone buys an insurance product, someone is always underwriting it.

Mhm.

So otherwise it's just going to be, say, the head of risk at JP Morgan who has to make a go/no-go decision.

He also sits with the same problem: is this going to work or is it not going to work? So the place we start is just: what are the risks that are slowing down adoption today?

Mhm.

And can an independent third party with skin in the game and visibility across a bunch of companies underwrite that better than any particular head of risk or chief security officer might be able to? Mhm.

Um, and like any other risk, when there's no data, there's an initial R&D phase where we don't expect all of these policies to work out well. We expect to lose some money, and in the process start to collect the data that allows us to underwrite this more precisely than anyone else.

Yeah. Walk us through some of the example insurance policies, because everyone who's followed the AI story and the AI race has seen a million different varieties of impairment: from the training run didn't work, or the data center was delayed and that has a financial impact, down to we got sued because of our training data, or someone used our app and didn't like it. There are a million different ways you can have smaller or even large settlements or lawsuits. So how do you think about segmenting the market, finding a landing zone, a beachhead?

Yeah, totally. So, you start from: what are the very real concerns slowing down adoption today? Let's take one. We just announced the world's first insurance policy for AI agents last week with ElevenLabs. They are trying to be on the frontier. They're pioneers of security and safety; they're trying to be on the frontier of giving assurances. The things that hold up adoption for them are things like hallucinations that lead to financial losses. Everyone has seen the kind of Air Canada example lead to financial damage.

Data leakage continues to happen. You'll see it on a weekly basis; OpenClaw is the latest example of that.

You don't want your agents to give medical advice.

Sure. And so those are some of the kinds of things that are covered. So mostly at the application layer today, and then as insurer appetite grows, eventually our mission is to underwrite superintelligence. Eventually we think some of the risks that look a little bit more like private nuclear energy will also have to be covered by insurance, because these risks cannot sit with no one. There was a grand compromise in 1957 that allowed us to do private nuclear energy in America, which is the Price-Anderson Act. Industry went to the government saying, "Hey, we really want some private nuclear energy. That'd be awesome. But also, any particular private company cannot carry the risk if something truly goes wrong. So we're going to require an insurance scheme." That's going to be our way of putting the market to work to manage this in a way that's pro-progress, pro getting this adopted.

And the government has always effectively been the insurer of last resort in some ways, right?

Whether it's formal or not, the government is always the last resort. Take COVID: who's on the hook for that? Ultimately the government has to step in. So the question is: can you formalize that a little bit more and say at what limits of liability the government is on the hook, and up until that point, who's on the hook?

Got it. Okay. So walk us through the chain of how the insurance actually works. I understand ElevenLabs comes to you, and then are you drafting a policy with a specific risk profile and premiums, and then going out to the JP Morgans of the world and having them buy that, or invest in that? Does this float? Is this tradable? Can a retail investor get an allocation? How does that work on the long tail of the financialization?

Yeah, eventually this will end up on Robinhood. But let me walk you through how it looks today. So today there are two steps at a high level. First is certifying against a standard

as a way to unlock insurance. So

historically, the way every market has been unlocked is that the insurers want to know that the risk is well managed. The head of risk at JP Morgan doesn't just want financial coverage. He wants to make sure that there's no incident that gets him fired in the first place.

And so we've developed a standard. It looks a little bit like a Moody's framework or SOC 2. It's all open source and public: 50 requirements that any frontier AI company must meet to meet the standard. And as part of that, we run a bunch of technical tests, basically crash testing, red teaming as you might call it here, which gives us a score. We give them a pass/fail certificate, and then this score feeds into a policy that we've designed with some of the leading insurers, where a company like ElevenLabs gets to specify: hey, what are the top three, four, five risks that hold up adoption?

They buy a policy for that, and today that risk is held by traditional insurance companies. Again, this is actually all about trust, so you really want the old insurers that have it on their balance sheet. They always pay. Over time, as we move into these Chernobyl-type risks, we will run out of private capacity. We will have to at some point create catastrophe bonds. Those will be traded on the public market, probably not on Robinhood, but by more sophisticated investors. That is ultimately the way to build enough market capacity to cover the tail risk.

Yeah, that makes sense.

Very very fascinating.

Uh, what does the business look like today? This feels like high-stakes work, but is it capital intensive? Do you need a thousand insurance agents at some point? What's the team like? What's the fundraising like? What's the business like?

Yeah, totally. So, if you're really thinking about this long term, the way to unlock the insurance market is to get the standard universally adopted.

And that is kind of what allows everyone to say, "Hey, this risk is well managed. We can now start to price it." And so for the standard, we have about 100 security leaders from the Fortune 1000 who meet with us every six weeks to input into the standard, representing their interests. And now you're having some of the leading AI companies, like ElevenLabs, Intercom, UiPath, more to be announced soon, that have put themselves forward to say: hey, we're pioneers, we would like an independent audit to prove that. That's step one, and that's what most of our work is focused on today. And then on the insurance side, the way to start is to partner with existing insurers that bring that trust and credibility. They're frankly so old school, and that's what brings trust here. That's the whole point.

Yeah, it doesn't really work otherwise. It's like, okay, who's actually backing this policy? And then it's like, oh, a company created

it's like me.

Exactly.

Don't worry.

So, it's actually quite capital light to get started. We raised $15 million from Nat Friedman last year. Um, and

almost ran into the goalpost.

There you go.

Move them again.

Well, thank you so much for stopping by the show and giving us the update.

Yeah, a lot more questions to come as there are new kinds of crises around agents. Feel free to pop back on to talk about it.

Amazing. We'll talk to you soon.

Good to meet you.

Good to meet you, Rune.