Scott Belsky on AI safety layers, consumer AI's untapped potential, and the economics of protecting users

Jul 9, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Scott Belsky

room. We'll bring him into the studio. Really quickly, let me tell you about Wander. Find your happy place. Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24/7 concierge service. It's a vacation home, but better, folks.

And we will check in on Scott Belsky and see if he is available to catch up. How you doing, Scott? Sorry for keeping you waiting. Ben Thompson, he knows so much about history. He puts out three hours of podcasts every day. I should have expected him to bump me, man.

I mean, you know, I just feel so bad, but I'm very excited to have you. Can you give me a little update on what's going on in your world? I want to talk about the AI safety layers concept, and then we can talk about some of the current stuff that's going on in AI.

I mean, you published this post what, two, three weeks ago, and it feels extremely relevant this week. So, very excited to get the update from you. Kick us off. Yeah, gosh, where do we begin?

Well, listen, in our ongoing segment here on implications of the technology that's happening faster and faster and faster, a few things are top of mind. We can certainly start with AI safety layers.

I think it's fascinating how much discussion there is of the dangers and the perils of AI without recognizing how it can operate as a layer to protect us. Yeah.

You know, if you get some call from someone who claims to be your grandmother asking for money, that's clearly something that AI on the device, some form of local model, can detect, given it's all happening on that device, and warn you: this isn't your grandmother.

And when it comes to all sorts of the creative and crazy scam and phishing emails that we get all the time, that's a perfect use case for AI, of course, telling us that we need to be wary. But also, what about being polarized by algorithms? You know, detecting an algorithm changing based on your engagement, and an AI sort of saying, hey Scott, you're getting on the fringe here, watch out, you're now in this small 10% of society that's going down this rabbit hole of some conspiracy theory.

I just think there are so many use cases for AI as a safety layer that the device needs to unlock, and of course that means the operating systems need to figure this out.
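The on-device screening described here could, in its simplest form, look like the sketch below. The signal phrases, weights, and threshold are invented for illustration; a real safety layer would run a local model over the call or message transcript rather than a keyword list.

```python
# Toy sketch of an on-device "safety layer" pass over an incoming message.
# The heuristic scorer is a stand-in for a local model; all signals and
# thresholds here are hypothetical, not from any shipping OS feature.

SCAM_SIGNALS = {
    "wire money": 3,
    "gift card": 3,
    "urgent": 2,
    "grandma": 1,
    "account suspended": 2,
}

def scam_score(transcript: str) -> int:
    """Sum the weights of every scam signal found in the text."""
    text = transcript.lower()
    return sum(w for phrase, w in SCAM_SIGNALS.items() if phrase in text)

def screen_message(transcript: str, threshold: int = 3) -> str:
    """Return a label the OS could surface next to the call or message."""
    score = scam_score(transcript)
    if score >= threshold:
        return f"warn: likely scam (score={score})"
    return "ok"

print(screen_message("Grandma, it's urgent, please wire money now"))
```

The point of doing this locally, as the conversation notes, is that the transcript never has to leave the device.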

So I think that when most people talk about AI safety, or safety layers, or what you just described, solving the problems that will inevitably come from any new technology, they look at it through a technological lens.

They see, well, let's do more reinforcement learning, let's align the model. There's a fixation at the model layer, like we need to solve it there. I look at it almost entirely through an economic lens, and I just think about it as: if the market cap of the company that's selling you the Skinner box is bigger than the company that's helping you get healthy, if the sugar company is bigger than the health food company, you're going to be fat. And if the health food company gets bigger, then you're going to be healthy.

And so I think about it as, like, the doomscrolling. We have screen time apps. They're small; they don't monetize as well as doomscrolling, unfortunately. So there are some economic considerations there.

But at the same time, I come back to the scamming angle, and I see this as: if the economic weight behind the good guys is bigger than the economic weight behind the bad guys, you get the good outcome.

And that's why I'm not particularly worried about, like, super doom scenarios, because I think that generally governments and people will align with, hey, let's not get paperclipped, so let's build more systems to be safe in general. And then the bad guys, yeah, they might go try and build some really bad weapon, but they will be completely outnumbered.

The question is, like, on the margin, when we get into these pockets, the Skinner boxes, the doomscrolling, where the economic weight is on the wrong side. So do you think about it in that same lens? And then the question is, what business models can actually support this?

Are we talking, you know, I need to have a subscription for some kind of app that's looking at everything I'm doing and then acting as that layer on top?

How can we actually implement this? Or is this just, we're hoping that Apple runs a great ad campaign around it and it becomes an Apple feature that they hold up at every chance they can?

Well, I mean, first of all, I think that the operating systems of our life are the ultimate interface layers. For many of us it's either Android or iOS, but at work it's many other companies that are the operating systems. And those operating systems are trying to make us loyal.

They're going to do so through remembering us, you know; personalization effects are the new network effects, I like to say.

And I would imagine that protecting you from what's going to be a very comprehensive and very sophisticated set of social engineering and other sorts of long-form scams will be part of that loyalty.

I mean, you think about the most effective scams out there: it's when you have this very long experience or exposure to some entity, to the point where you trust it, and then suddenly it gets your information, and then it's too late. So that is a perfect use case for AI on the device, to kind of monitor over time and compare that data with any other scams that are reported.

So in terms of the economic incentive, I mean, goodness, I feel like consumers will have high willingness to pay for that, if they don't get it free with their operating system. Yeah.

I mean, I think you can already imagine the UI: you pick up a phone call and, you know, you can think of the traditional Apple layout of, like, the hang-up button, the hold button, etc.

And there should just be a little tag there that says, like, "AI voice detected," or something to that effect. I'm perfectly happy with talking with AI, you know, a model effectively on the other end.

But I would like to know. Everyone should sort of know that it's a model. And I think we're in this weird period right now where people are starting to talk with AI all the time and not even fully realizing that it's not a human on the other end.

There's the Content Credentials movement, right, which I was involved with back in the day at Adobe, which is an effort to have models insert cryptographic metadata into anything that's generated, including live generation, like live audio, that can be detected on the client.
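A toy illustration of the idea behind Content Credentials: the generator signs a manifest describing its output, and the client verifies it before trusting the media. Real C2PA manifests are embedded in the file and signed with certificates; this HMAC sketch only shows the shape of the check, and all names and keys here are made up.

```python
# Simplified stand-in for signed provenance metadata. Real Content
# Credentials (C2PA) use certificate-signed manifests embedded in the
# media; this toy uses a shared-secret HMAC just to show the flow.
import hashlib
import hmac
import json

GENERATOR_KEY = b"demo-key"  # in practice: the generator's signing key/cert

def attach_credentials(content: bytes, generator: str) -> dict:
    """Build a manifest for generated content and sign it."""
    manifest = {
        "generator": generator,
        "digest": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(GENERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content matches the signed digest."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(GENERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["digest"] == hashlib.sha256(content).hexdigest())

audio = b"fake generated audio bytes"
m = attach_credentials(audio, "some-model")
print(verify_credentials(audio, m))        # True: content matches the manifest
print(verify_credentials(b"tampered", m))  # False: digest no longer matches
```

The client-side detection discussed in the conversation would be the `verify_credentials` half, running on the device during a call or playback.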

So there are some ways of going about this. But your point about the economic model is interesting. My thoughts immediately went to alarm systems.

You know, we all pay for these Ring alarm systems, all these alarm systems for our home that sometimes cost $60, $120 a month, with monitoring and window motion detectors and everything else.

But we're not really paying for an alarm system for our devices in this new modern world where we're going. Maybe there is a market for an AI safety layer as a service. It's like the new antivirus or something. We're going to have viruses and stuff, but it needs to screen-record everything.

Yeah, it needs to be at the operating system level. How are you thinking about mind viruses, mind-virus detection? How are you thinking about the evolution of those, the mind viruses that come from just the accidental interaction with AI? I mean, there's such a wide swath of people.

When I talked to Jordi, and we were having dinner last night with David Senra, we were talking about how we use AI, and we're like, yeah, probably 30 minutes a day in ChatGPT. It's a lot, but these interactions are: summarize this post, do this research, pull these things. It's like talking to a computer.

I'm not saying, "Hey, how is your life?" I never have that interaction, but there are a lot of people that do. And so, what are you seeing? What anecdotes have you pulled from? How do you think that evolves? Are there any risks? Walk me through the way humans are interacting with language models broadly.

That's interesting. I mean, I think, you know, one of the topics that is on my mind a lot lately is consumer AI.

And I'm not just talking ChatGPT, which is obviously a consumer product for many of us. But, you know, it's interesting: I was at a tech conference recently where all the trends that are popular now were being discussed, and I left asking myself, what's the one thing that no one talked about? And the one thing no one talked about was new consumer-AI-era social networks. When mobile came around, there was a whole new variety of social networks. Every time there's a platform shift, a lot of consumer mainstream applications, social networks, that sort of stuff, get reimagined, right?

And so the question is, why is that not happening now? And then, the whole saying in consumer investing is always around novelty preceding utility. So I'm trying to keep an eye out now for examples of consumer AI.

I mean, there's this company called Tolan, which is sort of like a pet alien that you start having conversations with, and they're doing really well. I believe they raised a round from some of the top firms.

You know, I've been playing with a few ideas with friends. One is a simulation representing our digital twins.

So could you train a sort of AI digital twin of you, based on all your experiences in ChatGPT or any other sources of data, and then deploy that in a simulation with mine and others, and we could start to actually just watch them interact with each other? It plays with some fun ideas of plausible deniability, you know: oh my gosh, I'm so embarrassed, what my simulated twin said to yours. These are the types of things that are, you know, wingman-as-a-service. I don't know, is there an AI wingman that helps us when we're flirting with people on, you know, dating platforms?

Yeah. Yeah. What I want to see, and you're kind of getting at this, is just more weirdness, right? It's easy to go build the next, or not easy, but, you know, we were at YC and there are a lot of companies in the last batch building agentic infrastructure.

It's like that stuff needs to be built.

But in the next batch, I hope there are more people being like, yeah, a lot of people have built all this infrastructure already, basically B2B SaaS. Why don't we just take a crack at, like, some dating simulation where you create a digital twin and you just throw it into the mix and it goes on a thousand speed dates with people in your city? Not even speed dates, but simulations of dates with people in your city.

Yeah, we've talked about this before, this idea where you have a whole bunch of people that are talking to a romantic AI partner, and that feels super dystopian.

But if Steve in Los Angeles is talking to the AI girlfriend, and Sarah in Boston is talking to an AI boyfriend, and the two AIs realize on the back end that these people are super compatible, because you have so much data from them...

It's: just introduce the two humans and say, "Hey, you know, you have to pay us to introduce you. You're gonna pay a finder's fee, and we'll collect your LTV on this app for the next 10 years, because you guys are going to probably live happily ever after."

And that's kind of the white-pill scenario that I hope happens, and I hope the dating app companies break up the AI-companion thing. Yeah. Yeah. Basically, yeah, but it's the Her scenario: what if you're still together but your AI versions have broken up? What happens then?

Well, then you get a warning, or it contacts, like, a divorce lawyer or something for you, takes a fee on that. Who knows? I think there's a lot of fun stuff to explore here. And you know, one of the other random ideas I had, I called it Peanut Gallery.

And the idea was, you know, the dirty little secret about why we go back to products like Instagram and others: oftentimes the traffic goes up after we have posted content, because we want to see who else saw our content.

And so, playing off that idea, imagine a social platform called Peanut Gallery where you post your own content, but no humans are allowed to comment on it.

It's all these, like, tightly defined personas that are commenting and arguing with each other and discussing, and you go back to see how this AI is engaging with what you posted. And maybe that becomes the voyeurism of seeing how other people's content is performing.

I mean, these are the fun, crazy things that must be explored to find, you know, this edge that will become the center of social. Yeah, I've seen two things that are somewhat in that realm.

One is just general YouTube thumbnail A/B testing services, where you upload your thumbnail and it tries to predict, based on all the data it has, what the click-through rate will be. And then you can upload two, and it'll say, "Hey, you should probably go with this one."

And then the other, I saw, I think, Justine Moore at Andreessen posting some sort of app where you open your camera to the front-facing view and it gives you the sensation of live-streaming, with, like, hearts and comments and stuff, and it's all fake. It's very odd, but I don't know, it feels inevitable. In many ways, it feels like bots are a feature of X now, right? They have not been eradicated. They're still here, maybe hidden.
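The thumbnail service described here is, at bottom, a click-through-rate comparison between two variants. A minimal statistical version of that decision, with made-up impression and click counts standing in for a trained model's predictions:

```python
# Sketch of "which thumbnail should I go with?" as a Bayesian A/B test.
# Real services predict CTR from the image itself with a trained model;
# here we compare observed counts (all numbers below are invented).
import random

def beta_sample(clicks: int, impressions: int) -> float:
    """One draw from the Beta(clicks+1, misses+1) posterior over CTR,
    built from two gamma draws (Beta(a,b) = Ga/(Ga+Gb))."""
    a = random.gammavariate(clicks + 1, 1.0)
    b = random.gammavariate(impressions - clicks + 1, 1.0)
    return a / (a + b)

def prob_a_beats_b(a_stats, b_stats, n: int = 20000) -> float:
    """Monte Carlo estimate of P(CTR_A > CTR_B)."""
    wins = sum(beta_sample(*a_stats) > beta_sample(*b_stats) for _ in range(n))
    return wins / n

random.seed(0)
# thumbnail A: 120 clicks / 2000 impressions (6.0% CTR)
# thumbnail B:  90 clicks / 2000 impressions (4.5% CTR)
p = prob_a_beats_b((120, 2000), (90, 2000))
print(f"P(thumbnail A has higher CTR) ~ {p:.2f}")
```

With these counts the service would tell you to go with thumbnail A with fairly high confidence.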

And I mean, that's the story of Reddit, right? In the early Reddit days, it was all the Reddit founders posting to seed this thing.

So if you think about a social network that needs to onboard you, there was that original Facebook thing where, if they could get you to 50 friends, they'd keep you on the platform forever. You know, if you show up and there are a couple bots that are just like, "Hey, good job. Keep posting."

Stick with it. Make some real friends, but we're here for you if you need a little encouragement, a little dopamine. I mean, on that note, the war between Meta and OpenAI in the talent race, and all the trade deals, has been front-page news for the last couple weeks.

Jordi's been saying that the product Meta might wind up going after is less like a direct ChatGPT knowledge-engine app, which feels more competitive with Google, and might actually be something more like companionship and chat. That feels like the real threat: if there was an app outside of TikTok that was going to take user minutes, these sort of entertainment and social minutes, from the Meta ecosystem...

It would be this sort of AI companionship, which functions as entertainment, this sort of social experience. Meta, at its core, is effectively a social entertainment company. Yeah. And you know, you think about all the rules of successful consumer products: they make us feel good about ourselves.

You know, they are sort of social lubricants, in that they help us get connected to others in ways that we may not be able to do, and be comfortable with, in the physical world.

And you kind of go through the whole list of things, and you realize, with AI there's an opportunity to really radically attack those vectors and make people have a really fun, engaging entertainment experience, or a social experience. So it's not a surprise that Meta is going to innovate in that space. You know, I do also kind of wonder: we all remember when Facebook acquired Instagram, and then of course when Facebook acquired WhatsApp. They were acquiring network effects, in essence, right, around messaging and images.

And I wonder now, you know, now it's a talent war. I mean, maybe AI is not really a network effect per se. It's more of, like, a talent-driven differentiation.

I wonder if that's also, you know, helping us understand the strategy of buying up all these different companies and people.

Yeah, I think the framing I've been thinking about is: these are basically, like, unauthorized acqui-hires, to some degree. If a company's doing an acqui-hire in general, it's: we know this group of people is good at this thing, and we want to do this thing, so let's bring them over here. And so the premium on talent that we've seen in the last couple weeks could ultimately be that it's looked at as buying a team, which is more valuable than the individual parts.

They just happen to all get chopped up. There's been the chatter around Alex Wang and Scale AI. People haven't been saying, "Oh, well, Scale AI is going to be this juggernaut in 30 years." But Alex Wang is a generational talent. He'll be around in 30 years.

And so the nature of what Scale does might change as we get to more purely AI-generated data and reinforcement learning with verifiable rewards. Scale AI has been through a couple different things, with self-driving cars and then RLHF for large language models. That business is not the same as Instagram, where there's a network and so there's this asset value in it. It's much more talent-driven. And so that's why you see all these people coming together.

But it's fascinating. It is interesting to see whether Meta is focused more on just, let's make Llama great so we can use best-in-class AI effectively for free all over our products, or whether they're trying to aim for something that's, like, an entirely new experience that will be vended to their billions of customers.

Probably both, honestly. Why not? Yeah. And make ads more efficient while they're at it. Yeah, for sure. I mean, that's the crazy thing about this: it doesn't take that much to generate $100 million if you make the ads 1% more efficient.
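The 1%-more-efficient-ads argument is easy to sanity-check with rough numbers. The revenue figure below is an assumed round number for a Meta-scale ad business, not a quoted financial:

```python
# Back-of-the-envelope check of the "1% more efficient ads" argument.
annual_ad_revenue = 160e9  # assumed ~$160B/year for a Meta-scale ad business
efficiency_gain = 0.01     # ads become 1% more effective
hire_cost = 100e6          # a $100M talent package

extra_revenue = annual_ad_revenue * efficiency_gain   # $1.6B per year
payback_days = hire_cost / (extra_revenue / 365)      # ~23 days to break even
print(f"extra revenue/yr: ${extra_revenue / 1e9:.1f}B, "
      f"payback: {payback_days:.0f} days")
```

On these assumed numbers the hire pays for itself in under a month, which is why the big packages can still be economically rational.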

So, it's all economically rational; we just haven't seen it in tech yet.

And that's why these big numbers feel like, oh, we've got to talk about this. Little bit of a tangent, but have you thought about how LLMs now are immediately, and I'm assuming pretty aggressively, shaping actual human communication? Like, right now we're in this period of em-dash gotcha: "you wrote that with ChatGPT." And people that loved the em dash before are disappointed. But at the same time, it's not like you see people calling out the em dash other places on the internet outside of basically tpot. And I just wonder: the most prolific writers throughout history have shaped communication, and now we have LLMs, which are effectively the most prolific writers in history, producing more written word than any one human could do in a lifetime, in minutes, right?

Yeah. And it just feels like we're potentially in this interesting flywheel that's just going to keep, you know, spinning. A couple thoughts. I mean, first, I feel that LLMs are going to start fine-tuning more towards how we want them to talk to us, right?

So if you want your LLM to be straight to the point, no BS, all lowercase and short sentences, that's what your experience of any information retrieval and conversation will be, and that might be different from mine. So I do think they'll all become more personalized for us in these, like, dramatic ways.
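The per-user style tuning described here could start as nothing more than a stored preference profile rendered into a system prompt. The preference fields and prompt wording below are hypothetical, just to show the shape of it; a real product might fine-tune on the user's history instead.

```python
# Sketch of per-user style personalization via a system prompt.
# All field names and wording here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StylePrefs:
    tone: str = "straight to the point, no filler"
    case: str = "all lowercase"
    length: str = "short sentences"

def build_system_prompt(prefs: StylePrefs) -> str:
    """Render the stored preferences into a system prompt for the model."""
    return (
        "You are a personal assistant. Match the user's preferred style. "
        f"Tone: {prefs.tone}. Case: {prefs.case}. Length: {prefs.length}."
    )

prompt = build_system_prompt(StylePrefs())
print(prompt)
```

The same profile could then be sent with every request, so two users asking the same question get stylistically different answers.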

I also, though, wonder: just like when music becomes generic and then some band or some star does something entirely new and creates a new genre...

Similarly with writing: what will human writing be like, as a result of LLMs, five to ten years from now? When you pick up a novel that actually captivates your attention and keeps you engaged, what sort of writing will be necessary to do that, in this age where your LLM can spit out poetry or write a short novelette on command? It's fascinating. I mean, technology has always had this impact on us and culture. It's just never been easy to chronicle, because it's always happened over such long periods of time. And it feels like those windows of culture change are happening more quickly, so it's something I'm looking at as well.

It's an interesting question. I'm generally still long tools, like, this is a tool; creativity is still undervalued, or at least it's not going away. And I keep coming back to the idea that there should be nothing easier for an LLM than to write a great tweet. It's 280 characters.

It doesn't need to maintain some long context to get it. And yet, we haven't really seen anyone break out with an account that people are following and entertained by that's fully AI-generated. There have been some experiments, but usually you're following it because it's AI, like...

Do you remember Horse_ebooks back in the day? I don't know if you remember this account, but it was said to be just randomly, algorithmically generated from these ebooks, but it turned out that there was actually a human writing it.

And there have been a few examples of that, where the stuff that does go viral that's AI-generated is like, oh, it's hallucinations. And so the fascinating part about it is not the underlying product; it's the fact that it's generated by AI. Yeah.

It is interesting that we have this band, The Velvet Sundown, I think they're called. They have a million listeners a month on Spotify right now, and they claim to be fully AI-generated. And it's funny that we got that before a prolific poster... Yeah.

...that is fully AI, that has 100,000 followers and is, like, popular. Here's the thing.

I mean, we talk about, of course, taste being more important than skill, and I think you're tuning into the question of whether LLMs can output tweets that are compelling and therefore have taste. And I think one of the questions is: is taste not just about each tweet, but also, like, consistency of good judgment and great content?

It's just like they say: a brand is the hardest thing to build and the easiest thing to lose. I wonder if taste is similar. You know, if you have AI pumping out tweets in an account, but 20% of them are like, "Wait, what? That wasn't clever," do you just lose... is the credibility of that account gone? So I think humans with good taste are good at knowing, you know, yes and no, yes and no, like what should and shouldn't be shared or said or written, more consistently.

And I wonder if LLMs can do that. It's also a memory thing, a context thing. To be a good poster, you need to really understand the fullness of the zeitgeist, and the current thing, and all these different meta trends. Yeah. Yeah.

And it feels like even the longer context windows are still losing focus because there's some sort of fundamental limitation of the transformer.

We talked to Dwarkesh a little bit about this, and the continual-learning breakthrough is maybe still a few years away, but it certainly will be interesting to see how it develops. I'm optimistic. I'm still looking for that.

I keep coming back to that idea of the Lee Sedol match against AlphaGo, where AlphaGo dropped move 37, this very unconventional play. Everyone thought it was a hallucination, a mistake, and it turned out to be kind of a genius new move.

And I feel like we haven't had our move 37 moment for LLMs yet, but it's probably coming at some point. Well, I'll tell you, each time we have these conversations, the whole world will be different. I guess that's what we're learning these days in terms of the pace of change. But good to see you.

That's great. As we accelerate, it goes from monthly to every couple weeks, to weekly, to daily, and then every hour, and we'll be fully feeling the acceleration. But this has been fantastic. Great having you on, as always. Looking forward to the next one. Sounds good. Till next time.

We'll talk to you soon. Bye. Next up, we have Nathan Lambert coming in to talk about an American DeepSeek project. But