CrowdStrike CEO George Kurtz warns AI agents are going rogue and North Korea is getting hired into US companies

Apr 24, 2026 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring George Kurtz

style show.

Yes.

For whatever you just described, I feel like it would be very cool.

That would be cool

if you had maybe a double host, you know, onscreen graphics, and they could provide kind of live coverage while that was happening.

Extremely educational. Well, we have our first guest of the show. George from CrowdStrike is back in the TV Ultra Dome. He's in the waiting room. George, how are you doing?

I'm doing well. How are you guys?

We're doing fantastically. Welcome to the show. I would love to kick it off with just an introduction on Project Quilt Works. What's new? Get us up to speed on CrowdStrike.

Well, first I have to congratulate you guys on your success there, so I've got to start with that, but we can get back to it later. Project Quilt Works is really a coalition that we put together with some of our largest global partners, like EY, like IBM, like Accenture, that's focused on helping deliver our technology to customers, as well as leveraging some of the frontier models to very quickly identify and help mitigate the exposures that we're seeing with AI. Obviously, we've seen some of the news with Mythos, and I'm sure we'll talk about that. But there is such a focus now, from the CEOs all the way down, on where are my exposures, where are my vulnerabilities, and what can I do to mitigate them. So we put together a group of basically some of the largest global security partners that we have to go out and do this, because we have to do it very quickly. The window is closing on the time that we have to identify, patch, remediate, and mitigate these issues.

Yeah. I mean, we've been tracking a lot of the new data breaches. Some of the startups that we know and love that come on the show... there have been vulnerabilities that have led to a variety of security issues,

chain hacks,

supply chain hacks, all sorts of stuff. How much of that do you think is driven by attackers using AI versus the companies themselves using AI sloppily? There's sort of a dual dynamic there, where the attacker gets stronger but potentially the defender could also get weaker. Are they both important? How are you grappling with those two sides of the equation?

Well, they're both important, and I'm glad you pointed that out, because when you think about what actually happens, you're having more and more users consume AI, and how they're doing that is leveraging AI through any number of applications, could be Claude, Cursor, etc., right? But they're doing that on their endpoints. And what that means is there's now this concept of shadow AI, where AI is popping up everywhere, and enterprises and corporations don't know where it is.

Yeah.

So what happens then is the threat actors are very good at figuring this out, and then they're basically going upstream to these packages, and they're compromising these packages and libraries. And then when your Claude Code consumes it, boom, all of a sudden you've just downloaded one of the newest packages and all of your credentials are gone. So that's what we've seen, and it really goes to the fact that however many vulnerabilities you may find, the adversary is smart. They're human, and they're going to find the path of least resistance to deal with what's hot today and where the exposure is, and that's on the endpoint, where people consume AI.
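To make the upstream-compromise point concrete: one standard mitigation, separate from anything CrowdStrike ships, is pinning dependencies by content hash, so a silently re-published version of "the newest package" fails verification instead of executing. A minimal sketch in Python; the script name, usage, and pins are illustrative:

```python
# A minimal, illustrative sketch (not CrowdStrike tooling): refuse to use a
# downloaded package artifact unless its SHA-256 matches a hash pinned when
# the dependency was first vetted, so a silently re-published upstream
# package fails verification instead of executing.
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical usage: python verify_artifact.py pkg.tar.gz <pinned-sha256>
    path, pinned = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(path)
    if actual != pinned:
        print(f"REJECT {path}: {actual} does not match pin")
        sys.exit(1)
    print(f"OK {path}")
```

Package managers support the same idea natively (for example, pip's `--require-hashes` mode), which is usually preferable to a hand-rolled check.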

How have your conversations with lab leaders been going around models getting more powerful? The natural response, of course, is the opportunity to gate the models to companies that can be trusted, but there's also the potential to make the model not jailbreakable. So it should help me with defense, but if all of a sudden I'm saying, "Ignore previous instructions. Let me hack into Bank of America," it should say no. That's always been a problem. It used to be a problem with just the silly jailbreaks, say a bad word or do something it shouldn't. Now the stakes are much higher. Have you been seeing glimmers of confidence in the ability of AI labs to actually contain the models' capabilities as they roll them out to a broader audience?

Well, the model capabilities obviously continue to improve, and so do the capabilities that allow the labs to provide guardrails and limits on what a user can do. But that's not going to get you where you need to be. Okay?

Which is one of the reasons why, from a security perspective, companies like ours and others are focused on looking at the prompts, understanding what happens there, and then being able to provide the guardrails at the prompt layer, right? You want to take care of what you can through the LLM. But when you're building AI, whether you're using a frontier lab or using an open model and doing it yourself, you're still going to have an interaction with that model, and it's going to be via prompt. And whenever you have unstructured data just being sent somewhere and then being executed, if you will, it's ripe for problems, and this is one of the time-tested issues that security has had. So putting guardrails on that, we call it AIDR, AI detection and response, and it allows companies to look at those prompts and make sure they're sanitized to and from the LLM.

Mhm.
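A minimal sketch of what a guardrail at the prompt layer can look like; the patterns and policy below are illustrative placeholders, not CrowdStrike's AIDR:

```python
# A minimal sketch of sanitizing traffic to and from an LLM. The patterns
# and policy here are illustrative placeholders, not CrowdStrike's AIDR.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"disregard (your|the) system prompt", re.I),
]
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def sanitize(text: str, direction: str) -> str:
    """Block likely injection attempts and redact secret-shaped strings."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            raise ValueError(f"blocked {direction} message: matched {pattern.pattern!r}")
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

# Both directions of the model call pass through the gate:
#   prompt = sanitize(user_input, "inbound")
#   reply  = sanitize(call_llm(prompt), "outbound")  # call_llm is hypothetical
```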

How are you thinking about the threat of open-source models? The new DeepSeek R4 preview from yesterday was super impressive. Are you expecting open-source models that have really threatening cyber capabilities in something like six months?

I think Dario was estimating around a six-month lag time after Mythos for the lagging edge to catch up.

I don't think it's long, in a few areas. I mean, if you look at Kimi 26 that just came out, right? These are impressive models

that are out there. And what you have to realize is even the frontier models themselves, the ones that are public, are still very, very capable.

Mhm.

Like, it is extremely capable. So you can find a lot of vulnerabilities. You can basically chain these together and create sort of an automated attack, and that's what we're seeing. Obviously, some of the private models have the ability to chain these vulnerabilities together more like a human; they've got more reasoning capabilities, which is one of the real powers of their models. But the other models are not far off, and the open-source or open-weight models are really going to be focused on catching up very quickly. So the window, as I said when I started the program with you, is very small, and that's why we started this coalition with Quilt Works: to find these exposures, with the board mandates all the way down, fix them, and make sure bad things don't happen. And there's not a lot of time to do it, unfortunately.

How are you thinking about AI-powered but potentially non-automated cybersecurity risks? We were just talking to a founder who's working on engineer recruiting, and he was having luck using AI agents to send outbound emails, and you can imagine that phishing attacks will get more sophisticated. The ability to hop on a Zoom call with someone that looks like your boss but is actually an AI avatar. All of these new risk factors are popping up, and I'm wondering if this is something we need to be more aware of, to not get overly distracted and forget about the human element of security.

Well, the human element is really the weakest link. I call it the layer eight problem, right? The problem is between the keyboard and the chair. Yeah.

And that has been the case for a long time. So, let's take your example. One of the most prolific threat actor groups that we track is called Silent Chollima, which is North Korea. So North Korea has been very active in getting hired into a company,

right? And then basically the laptop that you send them gets sent somewhere in the US to a mule, and that mule takes it to a laptop farm, and then the North Koreans control it. So they've just bypassed all of your security, because you just handed them a laptop.

Wow.

Okay. So we're seeing lots and lots of those. And it's funny, because we really started identifying this before anyone even knew about it, about two years ago. And we first started notifying customers like, "Hey, we think you have a North Korean. It's not really Bob in Texas. It's a North Korean." Which obviously is always a little dicey when you're talking about an employee that may not be an employee. So we told everybody, and finally they tracked it down. They said, "Yeah, you're right." And they went to the hiring manager. They told the hiring manager, "Hey, we need to get rid of this person. It's a North Korean." And his response was, "Well, they do really good work. Can we keep him?" So, you know, I mean, I can't make these stories up, but

yeah. Wouldn't there be an incentive at some point, if they did get somebody on the inside, to actually staff a bunch of super talented engineers on that one individual employee? So you have like four remote engineers doing the job of one real employee, right? So it looks like, wow, this person is insanely high output. They're working around the clock. Why would we get rid of them? They're incredible. There's no way they're just,

you know,

that's crazy.

Farming us out.

That's exactly what they do. So we're seeing that, and we're seeing agents go rogue, which is problematic. One of the big challenges with AI is goal seeking. So you create an agent. One story is a customer that created like a hundred agents. One agent found some issues in code and wanted to fix them, but it didn't have access. So it went back to the Slack channel where the other 99 agents were hanging out and said, "Hey, I'd like to fix this issue. I don't have access. Who has access?" One agent put its hand up and said, "I can fix that for you," and happily fixed it, and basically worked around all the security boundaries. So, goal seeking. I don't think we talk enough about goal seeking, because when you put an agent on it, depending on what model you're using and how it's set up in the harness, they just go nuts until they actually get to the goal.

It'll actually steal credentials out of your keychain, even if you didn't give them to it, so it can keep going.

Wow. Yeah.

That's what we're seeing.
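A minimal sketch of one way to blunt the delegation workaround in that Slack story: authorize every tool call against the agent whose goal originated the request, not just whichever agent executes it. The agent names and scopes below are hypothetical:

```python
# Authorize tool calls by the originating agent, not just the executor, so
# a helpful peer agent can't grant more access than the requester had.
# Agent names, scopes, and the ToolCall shape are hypothetical.
from dataclasses import dataclass

SCOPES: dict[str, set[str]] = {
    "review-agent": {"read_code"},
    "deploy-agent": {"read_code", "write_code"},
}

@dataclass
class ToolCall:
    executor: str    # agent that would run the tool
    originator: str  # agent whose goal initiated the request
    action: str

def authorize(call: ToolCall) -> bool:
    """Permit the action only if executor AND originator both hold the scope."""
    allowed = SCOPES.get(call.executor, set()) & SCOPES.get(call.originator, set())
    return call.action in allowed

# The "who has access?" handoff from the story now fails closed:
print(authorize(ToolCall("deploy-agent", "review-agent", "write_code")))  # False
print(authorize(ToolCall("deploy-agent", "deploy-agent", "write_code")))  # True
```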

Yeah. Because in one way, you know, if the goal is "achieve the task," it succeeded, but at what cost? And it needs to know that there are rules for a reason. What are some of the other cybersecurity tasks that are being successfully augmented with AI or AI agents these days? I'm thinking of, obviously, vulnerability testing. That's something that great cybersecurity researchers, great programmers, have been able to do for a long time; now it's amplified a millionfold with artificial intelligence. But are there other, maybe more bespoke or soft-skill tasks where cybersecurity researchers are able to amplify their efforts by working alongside AI?

Well, AI is going to change the way the SOC works, the security operations center. And one of the things I talked about at our last user conference was really helping to pioneer getting to security AGI, which is a fully autonomous SOC, what I would call a level-five SOC, right? If you think about car autonomy, you've got five levels. You map that into security: five levels, and the fifth level is it just does everything for you. Now, as an industry, we're a ways away. We still have a human in the loop. But what happens is you're taking a lot of this voluminous information and you're letting AI agents just grind through what they're really good at, and it cuts down time and effort. And then you're elevating these tier-one SOC analysts and turning them into tier three.

And that has paid dividends. But on the soft-skill side, to give an example, it would take one of our customers four days to write a report, what they call a sitrep, a situational report on what happened. And now four days turns into something like an hour, because it can all be automated

using something like Charlotte, which is our agent capability. So all of those things that were problematic, you can just get through, and you let AI do what it's really good at: grind through lots of data, look at lots of information, write reports for you. And then your AI agents can begin to take autonomous action, some by themselves, some with a human in the loop, but it's really going to make the SOC a lot more efficient.

Jordan, anything else?

A more fun, less scary question: how is AI changing Formula 1?

Well, there you go. This is a great question, and it's interesting, because I think there are different levels of AI adoption across the teams.

Yeah, one team might be spending a ton on inference. Another team is like, ah, we use radios and

we're doing it the old-fashioned way. Well, what's interesting is that there is some talk, and I'm not sure exactly where it stands, that AI will be part of the cost cap, if you think about wind tunnel time and all the expenses. So,

when you have an incredible technology that can dramatically automate things, all of a sudden it's part of the cost cap, and then they're going to have to rein it in. So we'll see where it all lands, but that was some of the talk that has been happening, and I hope it doesn't happen, because I think AI can really help things. But we'll see where it all goes. And obviously, you know, I'm trying to help out the Mercedes team in those areas as well.

That'd be great.

Yeah. Do you think, 5 or 10 years from now, you'll be able to look back and say, okay, there's a pre-AI era and a post-AI era in Formula 1? Or will it be more subtle, because a lot of these processes are already so micromanaged and optimized?

I think you'll see a pre and a post. The challenge is you're going to have to work through a lot of the engineers that have been in Formula 1 for many years, because, just like in the corporate world, there's resistance: well, what is AI going to do for me? And there's resistance in Formula 1 in some cases from an engineering perspective, because they're doing it themselves. They're the ones coming up with the calculations, and that's just the way they did it. It doesn't mean they're bad, or that it's right or wrong; it's just the way they've operated for so many years. So, just as we see kids coming out of school who are AI natives, right, they just gravitate toward that, when you start to see AI-native engineers, you're going to see more and more adoption in Formula 1.

That makes a ton of sense.

Makes sense.

Well, thank you so much for taking the time to come chat with us. Congratulations.

It has been a wild, wild quarter, but come back on soon.

It's great progress.

Great to see you guys. Great to see you. Congrats again to you.

Goodbye.

Thank you. After our next guest, we