Dan Lahav on Irregular's emergence from stealth: building the security stack for the AI agent era

Sep 18, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Dan Lahav

going to be a lot of rockets going to the sun.

Well, we have Dan.

Let's bring in our next guest,

from Irregular.

Welcome to the stream. How are you doing?

Hey, it's such a pleasure to be here. I asked Seoa what to prepare for, and they told me that this should be the most fun, exciting, fast-paced interview. So, you know, shoot.

Let's keep it quick. Introduction. What do you do?

We are Irregular. We just came out of stealth yesterday. We are the first frontier AI security company out there. Our goal is to be the counterpart to the OpenAIs and Anthropics and Google DeepMinds of the world. We're already working with all of them to create the security stack of the future.

Yeah, what does that mean? I mean, there's already a ton of security companies out there. I see them when I walk through the airport. They're all thinking about AI. How are you positioned differently?

Yeah, it's a great question. So, the thing that we're actually building is a high-fidelity simulator that allows you to put any model in it, and essentially see how different scenarios play out: attacking the model, or whether the model can attack other targets. So for example, we were the first in the world to see one AI jailbreaking another AI, or alternatively, to see how AI can bypass things like Windows Defender in order to hop from one endpoint to the other.

Interesting.

What is unique and different about us is that we're working very closely with the labs, out of the assumption that what AI is doing right now is not just a story, and that security is about to have a huge paradigm-shift moment. Which makes sense if you think about it, right? Because enterprises are probably going to look very different in the next 5 to 10 years, so naturally the security stack is also going to look very different as well. So we are using this high-fidelity simulator to find the novel attacks and to build the next generation of defenses a few years in advance.

Why don't the labs want to do this internally? It feels like something they have responsibility for. They get questions on Capitol Hill about this. If they outsource it, that seems like a very tricky thing.

Yeah, but I mean, it's not entirely outsourcing it. They have to care about these things. It's the same thing, you know, that every company does: you want to have your own security practices and protocols, but simultaneously have partners that can help you see things in a different way. But

what do you think, Dan?

Exactly. It's a great question if you think about it. You know, there's a thriving security industry already that is very mature, and most companies are using the greats of the security world right now, the Palo Altos, the CrowdStrikes, etc., even though they're external to the companies. And the reason is that we're about to encounter what is potentially the greatest security challenge ever, just because we're innovating at such a fast pace and there's so much work to do. So we kind of think about it in terms of differentiation from the inside to the outside. There are some defenses that you would want to put on the models themselves, baked into the neural nets, and that's clearly lab territory. But some defenses are going to happen on the AI agent side, and some defenses will have to be in the environment. Because if you believe that we won't be able to do secure-by-design for AI, that means some of the defenses will have to be implemented on the enterprise side, in the environment; otherwise you won't have any defenses beyond what the frontier companies are going to bake into the models. And because there are so many different verticals and scenarios and contexts that you need to account for, there is a lot of effort needed to create defenses across the entirety of the stack. And we're working side by side to make sure that whatever the labs are not doing, we are going to do, in order to create the next powerhouse of security.

What does Irregular look like over time? Is one way to think about it like a network to detect rogue agents? Is that a potential scenario that you guys would be helpful in preventing? Or what does the surface area of the product look like over time?

Yeah, thanks for the question. So I'll say that indeed, one of the scenarios that we are covering already today: we are doing things like understanding and monitoring AI network systems to see if they go out of bounds. But our view is that something deeper is going on here, and that the entire infrastructure will need to be replaced. I'll give a concrete example. Take anomaly detection; that's a huge part of the security stack right now, right? But how does it work? You have a baseline, and you're seeing whether a model, or whatever you're trying to monitor, is doing something outside of what you would expect as the normal behavior. But if you don't know what an attack is going to look like, you don't have a proper baseline. And as an outcome of that, our view is that the first-order thing to do is to create a strong research infrastructure that allows you to essentially figure out and map the novel attacks that are unique to AI, see what the gaps in the current security stack are, start to fill them in from there, and build a new platform that's going to secure the agents of the future. So our hopes and our ambitions are high.
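To make the baseline point concrete: here is a minimal sketch of the kind of baseline-driven anomaly detection Dan describes, not Irregular's actual system. The metric name and values are hypothetical; the point is simply that a statistical baseline only flags deviations from behavior you have already observed, which is exactly why it fails against genuinely novel attacks.

```python
import statistics

# Hypothetical baseline: a metric (say, outbound requests per minute)
# recorded during known-normal agent behavior. Values are illustrative.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value: float, threshold: float = 3.0) -> bool:
    """Flag a reading more than `threshold` standard deviations
    from the baseline mean (a simple z-score test)."""
    return abs(value - mean) / stdev > threshold

print(is_anomalous(14))  # within the baseline's normal range
print(is_anomalous(90))  # far outside the baseline
```

The limitation Dan points at: an attack that stays inside the observed range of `baseline`, or that manipulates a metric you never baselined at all, is invisible to this detector, no matter how the threshold is tuned.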
We believe there is a place to create a huge new company around security, very much like, you know, Wiz started as part of the transition to the cloud, and Check Point started early on, a few decades back, when people started to implement networks in enterprises. Usually when infrastructure is changing, you have a window of opportunity to create the company that's going to be able to build a platform to secure that infrastructure end to end. That's our goal, and that's what we want to do around AI.

Do you think, are you more worried about the rogue AI, the AI that just randomly decides independently to try and break out of its environment, or more about bad actors thinking that if they can get an LLM to do something inside of an OpenAI environment or inside of a Google DeepMind environment, they can extract some sort of value? Who is inciting the attack?

So ultimately it's both. I think in the near term it's the latter. It's much more likely that we'll need some human interaction to elicit AI capabilities and push them into more dangerous scenarios. And I'm much more concerned right now about, you know, terror organizations getting access to advanced AI systems. If you look at the system cards of OpenAI and Anthropic, they put the capabilities around bio and chemical, models being able to actually help produce them, at higher risk levels over time, which makes me personally concerned about what happens if some bad actors are going to have access. That's also true on the commercial side: the more we delegate to AI, if malicious actors gain access, there is potentially going to be a whole new wave of viruses. And I think the near future is AI augmenting attackers and being used as part of the attack surface.

Yeah.

Over a longer horizon of time, we'll also need to figure out how we take care of rogue actions that are done by the AI itself.

Sure. Well, good luck to you. Thank you so much for hopping on the show. You'll come back on. I expect that in the next two years, probably the next year, there's going to be some type of event, and we're going to think, we've got to call Dan to break this down. But hopefully you prevent it before it ever happens. Yeah. So

it will be my pleasure both to prevent it and to come back on the show.

Thank you so much.

Let's do it.

Cheers.

Bye-bye.

Jensen Huang is a huge Nano Banana fan. Love to see that. And Sundar Pichai quote-posts him and says, "Mine, too." It made his day. They're both very happy. Also, Joe Gebbia is shouting out Breathe Realm.

I threw this in here. New air freshener company. When we moved into the studio, the team bought a bunch of air fresheners. They said, "Throw those away." They release a bunch of, you know, toxic chemicals into the air that you then breathe, that then go throughout your body. Sarah, my wife, actually invested in this company

years ago. So it took a while to get to launch. But cool to see

the new standard of air care. Go check it out. Saffron Citrus and Verbena Santal.

Sounds delightful. Uh well next up we