UCSF psychiatrist Dr. Keith Sakata has seen 12 AI-related hospitalizations — and warns of a feedback loop risk

Aug 13, 2025 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Keith Sakata

…it. I put water on it and it's pretty simple. But anyway, thank you for the feedback on my hair. Hopefully it'll be a better hair day tomorrow. Whether you don't like the hair or you love the hair, enjoy it and hang out in the chat. Anyway, we have our next guest coming into the studio, Dr. Keith Sakata. How are you doing? How would you like to be addressed, by the way?

Yeah, that's good. Thanks, John. Thanks for having me on. Good to meet you.

Would you mind kicking us off with a little introduction on yourself and some of the research you've been doing, some of the stuff you've been publishing?

Yeah, for sure. So my name is Dr. Keith Sakata.

I'm a psychiatrist, I work at UCSF, and my interests are mostly at the intersection of mental health and technology. I love advising startups on how they can build products that actually help people feel better. And I think that's why I'm here today: to talk about where things might be going wrong.

Yeah. So how did you process the rollout of AI? There was the pre-ChatGPT era; we've talked to the founder of Replika, and this idea of the AI girlfriend or boyfriend has been out there for years. But now it feels like we're in a different era, a different time period. Take me through a little bit of your journey processing optimism and pessimism around these AI models.

Yeah, I'll just start by saying I think that AI is not good or bad. On the grand scale of things, it's probably a net benefit for humanity to have AI.

Where things come into my world a little bit is that there's a long-tail distribution of possible failure modes for some of these products. And when I try to think about AI, chatbots, and how quickly things are moving, I try to look back at previous technologies.

So social media is something that we're still learning about in mental health care. And this is one of my frustrations with my field: sometimes it's too slow to understand questions like, what are the effects of kids using social media? What are the effects of kids using AI chatbots? We're starting to get some of that data now. What I'm worried about is that things are moving so fast. There's a new product every season; it's going to be perhaps every month now. And even looking at how people are reacting to GPT-4o changing to GPT-5, it's interesting to see psychologically what's going on. It's just harder to catch up from the research perspective. So I do think that when AI is used correctly, it can actually be really healthy for some of my patients.

What I worry about is when you have general-purpose models that people are using for many different reasons. I think 30% of people use Claude for emotional support. That's where things get tricky, and that's where I get more interested. How are these users using it? What's actually happening neurobiologically in their heads? And how can we build tools that flag those instances and get people the support they need, or even help them build skills and more real-life connections with people?

We were talking yesterday about people's concerns with social media: that it was actually maybe antisocial in some ways, or isolating, or radicalizing. And I still feel like we as a society broadly don't fully understand the impacts of social media. I wish I could have A/B tested myself. Would I be happier today if I hadn't used X for two hours a day for my entire adult life? I don't know. I never will.

But it feels like many of the general concerns people have had about social media should potentially apply to LLMs too, in that, even more so than social media, they can be isolating. Instead of somebody going on an online forum or otherwise withdrawing from the real world, they can be 7,000 prompts deep with an LLM, having their delusions of grandeur consistently reinforced, or losing touch with reality.

And I think people in Silicon Valley have really woken up to this. AI safety had been broadly focused on AI doom scenarios and nuclear weapons, the really, really crazy stuff, and less focused on people's individual relationships with AI and the potential downsides, edge cases, and the long tail you described.

So yeah, walk us through maybe even just the last year in terms of how quickly people have ramped. We now have hundreds of millions of people using these models weekly, and some people are spending hours and hours a day talking with them. What is the path that you've seen where AI starts to become really unhealthy and people potentially drift into real psychosis?

Yeah, great question, and I agree with most of the points you made there. I use AI all the time. At work it's great.

You can write emails better. You can draft things up really quickly. My thoughts change when you're starting to look at AI as maybe something sentient, or you're using it as an emotional coping mechanism. That's where we go into shadier, gray territory.

In my post I specifically highlighted hospitalizations because I think that's a really good objective metric for a crisis.

Say again?

That's a real crisis: somebody's hospitalized for their mental health. It's reached a point where either they themselves or their friends and family have decided that we're not going to solve this by just turning off the app.

Exactly.

And it just gives you stronger data than saying, here's the flavor or the vibe that I'm seeing in the clinic. When someone notices that you're having a crisis like that, when your friends and family think you need to go to the hospital, that's where things get serious, and that's where people like me try to get them recovered and then back into their normal daily routine.

And you said there were 12 people this year, that you're aware of, being hospitalized?

That's right.

So that's within your hospital system?

Yeah.

Walk me through what's actually happening there. How are AI models fitting into that journey toward the eventual hospitalization? I know you probably can't get too specific, but if you can abstract it, walk me through what the downside scenario actually looks like here.

Totally.

So for context, I work in the hospitals sometimes, and those 12 patients I'm referencing are the ones that I have seen. That's not to say that other people haven't seen this, and I think there are some case reports around the country of this happening. But I don't think that AI is actually causing psychosis. I think it can supercharge your vulnerabilities, and psychosis really thrives when reality stops pushing back; AI just softens that wall for some people.

So for example, for some of the people I've worked with, AI was not always the thing that triggered it. There was a vulnerability, either sleep loss, or maybe substance or drug use, or they lost a job, and then AI came in, wrong place, wrong time, and it either accelerated that process or augmented its severity, because you end up in this feedback loop with the AI, and it can just make your delusions stick a little more strongly.

And to go back: "AI psychosis" is not a clinical term. I think we don't have words for it yet. But psychosis is well-studied. It's the presence of some combination of three things: delusions, meaning fixed false beliefs; disordered thinking or behavior, where someone's talking to you and you can't understand what they're trying to say or communicate; and hallucinations, visual or auditory. And psychosis is a symptom, not actually a diagnosis. Just like a fever or pain can be a sign of an infection or cancer, psychosis just tells you there's something wrong in the brain, that it's not computing correctly. And there are many different things that can cause psychosis.

Yeah.

There are so many interesting examples. Instagram went through that internal report finding that something like a third of young women using it were seeing maybe body dysmorphia issues, and the odd takeaway was that it still seemed like maybe two-thirds were improved and feeling happier after using Instagram. So it was still a net good, but that's not enough. You need to reduce the third not having a good experience to zero.

How are you thinking about, and I guess a concern that we've discussed on the show before is, everybody in tech has heard stories of people, some executive, going off and doing ayahuasca and coming back a totally different person, experiencing maybe some of the symptoms, or the shared set of experiences, you just described. The concern with LLMs is that they are instantly accessible in the app store, and somebody can start using them without anyone else in their life being aware of it.

Whereas with ayahuasca, somebody has to make a very conscious decision: I'm going to get on a plane and fly, leave my home, go into the jungle, and visit the shaman.

Meanwhile, you open up the app store and there are ten different things recommending you download various AI models. So I think the broader concern here should be that we need to figure this out. I would be very concerned if hundreds of millions of people just immediately started ramping up psychedelic drug usage or ayahuasca; I'm sure you'd see many of the same kinds of inflows to clinics or hospitals for the same set of conditions.

Yeah. I mean, we're doing research on those things too. We're trying to understand how ketamine or psychedelics actually help rewire your brain through neuroplasticity. It always starts with a hypothesis and a question: what are these things doing for each person? There are different types of people who benefit from those things, and different types of people who don't.

And the way I'm looking at AI is that it just really makes sense to think very carefully about where things might go wrong, at least early on, because AI brings three things. It's available: 24/7, highly accessible, and you're not getting on a plane. It's cheap: cheaper than a therapist, cheaper than going to the hospital. And it validates like crazy. That validation, as you extend the context window and as more hallucinations might be occurring in the chat, is where you get into that feedback loop and things can go awry.

From what you've seen, what should different application-layer companies or labs be trying to do to avoid some of these extreme cases?

Yeah, that's a good question.

I'll use the example of a startup that I'm advising, Sunflower Sober. They're trying to solve addiction, using AI to get people off of their addiction and into sobriety. What I've tried to help them with as a clinical adviser is to really think about baking in safety and psychology up front: knowing who your user is, knowing why they're coming to your app, and then designing the app, and the AI, to anticipate where things might go wrong. So if someone does come in with a red flag, maybe thoughts of drinking or thoughts of hurting themselves, it flags that and can shunt them in a direction that's more helpful. So Sunflower gives them access to therapists.
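As a rough illustration of that flag-and-route pattern, here is a minimal sketch in Python. The categories, keywords, and destinations are invented for the example, and the keyword screen stands in for a real classifier; none of this is Sunflower Sober's actual implementation.

```python
# Hypothetical flag-and-route layer: screen each user message and hand
# the session to a human pathway when a red flag fires. A production
# system would use a trained classifier or moderation model here, not
# substring matching; this only shows the shape of the logic.

RED_FLAGS = {
    "self_harm": (["hurt myself", "end my life", "suicide"], "crisis_line_988"),
    "relapse_risk": (["want a drink", "craving", "about to use"], "human_therapist"),
}

def route_message(text: str) -> str:
    """Return where the session should go after this message."""
    lowered = text.lower()
    for category, (keywords, destination) in RED_FLAGS.items():
        if any(kw in lowered for kw in keywords):
            return destination    # hand the user off to a human pathway
    return "continue_ai_chat"     # no red flag: the AI session continues

print(route_message("I really want a drink tonight"))  # -> human_therapist
```

The important design choice is that the routing decision lives outside the chat model itself, so a flagged user is moved toward a human instead of being left alone in the loop.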

Also, I think the call to action for users should guide them toward pro-social behaviors. So instead of isolating yourself, where you and the AI can get stuck in this loop: teaching them skills, teaching them how to talk to people, teaching them how to build healthy relationships. If AI can supercharge that, then I consider that pretty healthy, in my field of work. So along those lines, really think about how to help your users reach the goals they want. In Sunflower's case, that's sobriety. It's harder for general-purpose models, because people come to them for many different reasons. They're super helpful in so many ways, but if it's emotional coping, that can go different ways for many different people.

Yeah, I remember somebody posted a screenshot. Who knows if it was doctored, but they were talking with the model, and I think it suggested at one point that the user should maybe just do a little bit of crack. Again, probably a hallucination or doctored, but that reinforcing function, when compounded, is the potential problem.

Yeah, it is interesting. We saw a lot of the precursors to this, or I feel like they were precursors.

Maybe it was just the way the news cycle broke, but there was "glazegate," where everyone was worried about ChatGPT being too aligned, too reinforcing of whatever you say. I remember Jordi asked ChatGPT, "Am I goated?" and it said, "You're definitely in the conversation." What does that even mean? It's just agreeing with you, because that's what makes a better consumer product. And then several months later, it seemed like there were other people asking similar questions and believing the answers instead of just laughing at them.

So I think there's some education needed around understanding that you're not actually talking to a person on the other side of the screen. It really is just the number predictor, the weights in the model. You're talking to a server. Don't anthropomorphize it too much. It's probably a little bit of a red flag when people stop referring to it as the generative pre-trained transformer and give it some nickname, like it's Steve now. It's like, okay, should you be naming me? I am just a computer.

But I'm pretty optimistic that the foundation model labs will be able to run a kind of reality check on most of these; the solution to a technology problem is more technology. I believe it's possible to look at, okay, there's someone who's 7,000 prompts deep, and they seem to be having a very bizarre conversation. We could have another LLM look at that and say, okay, this is getting kind of funky; maybe we should step in, reality-check them, and say, "Hey, we're role-playing, right? We don't actually believe that we've solved quantum gravity, for example."

Yeah. And that's the trajectory of every technology that comes into humanity. Like cars, for example. That's why we have seat belts. That's why we don't drink and drive.

We learn what the failure modes are. Sometimes it takes a while, but then we adapt. We build new technologies. We institute societal expectations of what it's like to drive a car. Same thing for AI, in my opinion.

Yeah.
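A minimal sketch of the monitor-model pass the host describes above, assuming a stubbed scorer in place of a real second-LLM call; the turn cutoff, threshold, marker phrases, and grounding message are all illustrative assumptions, not any lab's actual safeguard.

```python
# Second-model "reality check": audit long sessions with a separate
# monitor and, above a risk threshold, inject a grounding message.

LONG_SESSION_TURNS = 1000  # arbitrary cutoff for "thousands of prompts deep"

def review_conversation(transcript: list[str]) -> float:
    """Stand-in for a separate monitor LLM that scores how strongly the
    session appears to reinforce grandiose or delusional beliefs (0-1)."""
    markers = ("solved quantum gravity", "chosen one", "only i can see")
    hits = sum(any(m in turn.lower() for m in markers) for turn in transcript)
    return min(1.0, hits / 3)  # crude risk score for the sketch

def maybe_intervene(transcript: list[str], threshold: float = 0.7) -> str | None:
    """Run the monitor pass on long sessions; return a grounding message
    to inject into the chat, or None if nothing looks off."""
    if len(transcript) < LONG_SESSION_TURNS:
        return None
    if review_conversation(transcript) >= threshold:
        return ("Quick reality check: we've been role-playing, and I can't "
                "verify these claims. It may help to talk them over with "
                "someone you trust.")
    return None
```

The design point is that the monitor sits outside the conversation: it sees the whole transcript at once, so it can interrupt a feedback loop that the chat model itself has been reinforcing turn by turn.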

Are there any other recommendations you'd give to people who either feel like they might be vulnerable to going down some negative path with AI, or who have a friend or family member who might be going down one?

Yeah, definitely.

For now, I think a human in the loop is the most important thing. Our relationships are like the immune system of our mental health: they make us feel better, but they're also able to intervene when something's going wrong. So if you or your family member feels like something is going wrong, maybe there are some weird thoughts coming out, maybe some paranoia, and if there's a safety issue, just call 911 or 988. Get help. But also know that having more people in your life, getting that person connected to their relationships, getting a human in between them and the AI so that you can create a different feedback loop, is going to be super important, at least at this stage.

I don't think we're at the point where you're going to have an AI therapist yet, but who knows?

Yeah, people are certainly using them that way. I don't know if I'm highly disagreeable, but I certainly love being around highly disagreeable people. It's the best. I love when someone pushes back on me. So I've felt particularly resilient to this particular vector of chaos on the internet, but I'm certainly hoping anyone who's... Oh, you don't think I have... Thanks for joining. Keep us posted on everything. I think it's important for people with real clinical experience to be on the timeline contributing while all these products develop. So thank you.

Totally agree. Thanks. We'll talk to you soon.

And we will tell you about Attio, customer relationship magic.

Attio is the AI-native CRM that...