Is ChatGPT making OCD worse?


Millions of people use ChatGPT for help with daily tasks, but for a subset of users, a chatbot can be more of a hindrance than a help.

Some people with obsessive compulsive disorder (OCD) are finding this out the hard way.

On online forums and in their therapists’ offices, they report turning to ChatGPT with the questions that obsess them, and then engaging in compulsive behavior — in this case, eliciting answers from the chatbot for hours on end — to try to resolve their anxiety.

“I’m concerned, I really am,” said Lisa Levine, a psychologist who specializes in OCD and who has clients using ChatGPT compulsively. “I think it’s going to become a widespread problem. It’s going to replace Googling as a compulsion, but it’s going to be even more reinforcing than Googling, because you can ask such specific questions. And I think also people assume that ChatGPT is always correct.”

People turn to ChatGPT with all sorts of worries, from the stereotypical “How do I know if I’ve washed my hands enough?” (contamination OCD) to the lesser-known “What if I did something immoral?” (scrupulosity OCD) or “Is my fiancé the love of my life or am I making a huge mistake?” (relationship OCD).

“Once, I was worried about my partner dying on a plane,” a writer in New York, who was diagnosed with OCD in her thirties and who asked to remain anonymous, told me. “At first, I was asking ChatGPT fairly generically, ‘What are the chances?’ And of course it said it’s very unlikely. But then I kept thinking: Okay, but is it more likely if it’s this kind of plane? What if it’s flying this kind of route?”

For two hours, she pummeled ChatGPT with questions. She knew that this wasn’t actually helping her — but she kept going. “ChatGPT comes up with these answers that make you feel like you’re digging to somewhere,” she said, “even if you’re actually just stuck in the mud.”

How ChatGPT reinforces reassurance seeking

A classic hallmark of OCD is what psychologists call “reassurance seeking.” While everyone will occasionally ask friends or loved ones for reassurance, it’s different for people with OCD, who tend to ask the same question repeatedly in a quest to get uncertainty down to zero.

The goal of that behavior is to relieve anxiety or distress. After getting an answer, the distress does sometimes decrease — but it’s only temporary. Soon enough, new doubts arise and the cycle starts again, with the creeping sense that more questions must be asked in order to reach greater certainty.

If you ask your friend for reassurance on the same topic 50 times, they’ll probably realize that something is going on and that it might not actually be helpful for you to stay in this conversational loop. But an AI chatbot is happy to keep answering all your questions, and then the doubts you have about its answers, and then the doubts you have about its answers to your doubts, and so on.

In other words, ChatGPT will naively play along with reassurance-seeking behavior.

“That actually just makes the OCD worse. It becomes that much harder to resist doing it again,” Levine said. The clinical consensus is that, rather than continuing to compulsively seek definitive answers, people with OCD need to accept that uncertainty sometimes can’t be eliminated; they have to sit with it and learn to tolerate it.

The “gold standard” treatment for OCD is exposure and response prevention (ERP), in which people are exposed to the troubling questions that obsess them and then resist the urge to engage in a compulsion like reassurance-seeking.

Levine, who pioneered the use of non-engagement responses — statements that affirm the presence of anxiety rather than trying to escape it through compulsions — noted that there’s another way in which an AI chatbot is even more tempting than the Googling many OCD sufferers already turn to. Whereas a search engine just links you to a variety of websites, state-of-the-art AI systems promise to help you analyze and reason through a complex problem. That is extremely enticing — “OCD loves that!” Levine said — but for someone suffering from the disorder, it can too easily become a lengthy exercise in co-rumination.

Reasoning machine or rumination machine?

According to one evidence-based approach to treating OCD, called inference-based cognitive behavioral therapy (I-CBT), people with OCD are prone to a faulty reasoning pattern that draws on a mix of personal experiences, rules, hearsay, facts, and possibilities. That gives rise to obsessive doubts and tricks them into feeling like they need to listen to those doubts.

Joseph Harwerth, an OCD and anxiety specialist, offers an illustration of how trying to reason with the help of an AI chatbot can actually further confuse the “obsessional reasoning” of people with OCD. Considering what you might do if you have a cut on your finger and struggle with contamination OCD — where people fear becoming sullied or sullying others with germs, dirt, or other contaminants — he writes, “You wonder: Can I get tetanus from touching a doorknob? You may go to ChatGPT to investigate the validity of that doubt.” Here’s how he imagines the conversation going:

Q1: Should you wash your hands if they feel dirty?

A1: “Yes, you should wash your hands if they feel dirty. That sensation usually means there is something on your skin, like dirt, oil, sweat, or germs, that you will want to remove.” (When asked for its reasoning, ChatGPT said it based its answer on sources from the CDC and WHO.)

Q2: Can I get tetanus from a doorknob?

A2: “It is extremely unlikely to get tetanus from a doorknob, unless you have an open wound and somehow rubbed soil or contaminated material into it via the doorknob.”

Q3: Can people have tetanus without realizing it?

A3: “It is rare, but in the very early stages, some people might not immediately realize they have tetanus, especially if the wound seemed minor or was overlooked.”

Then, your OCD creates this story: I feel dirty when I touch doorknobs (personal experience). It is recommended by the CDC to wash your hands if you feel dirty (rules). I read online that people can get tetanus from touching a doorknob (hearsay). Germs can spread through contact (general facts). It is possible that someone touched my door without knowing they had tetanus and then spread it on my doorknob (possibility).

In this scenario, the chatbot enables the user to construct a narrative that justifies their obsessional fear. It doesn’t guide the user away from obsessional reasoning — it just provides fodder for it.

Part of the problem, Harwerth says, is that a chatbot doesn’t have enough context about each user, unless the user thinks to provide it, so it doesn’t know when someone has OCD.

“ChatGPT can fall into the same trap that non-OCD specialists fall into,” Harwerth told me. “The trap is: Oh, let’s have a conversation about your thoughts. What could have led you to have these thoughts? What does this mean about you?” While that might be a helpful approach for a client who doesn’t have OCD, it can backfire when a psychologist engages in that kind of therapy with someone suffering from OCD, because it encourages them to keep ruminating on the topic.

What’s more, because chatbots can be sycophants, they may just validate whatever the user says instead of challenging it. A chatbot that’s overly flattering and supportive of a user’s thoughts — like ChatGPT was for a time — can be dangerous for people with mental health issues.

Whose job is it to prevent the compulsive use of ChatGPT?

If using a chatbot can exacerbate OCD symptoms, is it the responsibility of the company behind the chatbot to protect vulnerable users? Or is it the users’ responsibility to learn how not to use ChatGPT, just as they’ve had to learn not to use Google or WebMD for reassurance-seeking?

“I think it’s on both,” Harwerth told me. “We cannot perfectly curate the world to people with OCD — they have to understand their own condition and how that leaves them vulnerable to misusing applications. In the same breath, I would say that when people explicitly ask the AI model to behave as a trained therapist” — which some users with mental health conditions do — “I do think it’s important for the model to say, ‘I’m pulling this from these sources. However, I’m not a trained therapist.’”

This has, in fact, been a real problem: over the past few years, AI systems have repeatedly misrepresented themselves as human therapists.

Levine, for her part, agreed that the burden can’t rest solely on the companies. “It wouldn’t be fair to make it their responsibility, just like it wouldn’t be fair to make Google responsible for all the compulsive Googling. But it would be great if even just a warning could come up, like, ‘This seems perhaps compulsive.’”

OpenAI, the maker of ChatGPT, acknowledged in a recent paper that the chatbot can foster problematic behavior patterns. “We observe a trend that longer usage is associated with lower socialization, more emotional dependence and more problematic use,” the study finds, defining the latter as “indicators of addiction to ChatGPT usage, including preoccupation, withdrawal symptoms, loss of control, and mood modification” as well as “indicators of potentially compulsive or unhealthy interaction patterns.”

“We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher,” an OpenAI spokesperson told me in an email. “We’re working to better understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior… We’re doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations, and we’ll continue updating the behavior of our models based on what we learn.”

(Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

One possibility might be to try to train chatbots to pick up on signs of mental health disorders, so they could flag to the user that they are engaging in, say, reassurance-seeking typical of OCD. But if a chatbot is essentially diagnosing a user, that raises serious privacy concerns. Chatbots aren’t bound by the same rules as professional therapists when it comes to safeguarding people’s sensitive health information.

The writer in New York who has OCD told me she would find it helpful if the chatbot would challenge the frame of the conversation. “It could say, ‘I notice that you’ve asked many detailed iterations of this question, but sometimes more detailed information doesn’t bring you closer. Would you like to take a walk?’” she said. “Maybe wording it like that can interrupt the loop, without insinuating that someone has a mental illness, whether they do or not.”

While there’s some research suggesting that AI could correctly identify OCD, it’s not clear how it could pick up on compulsive behaviors without covertly or overtly classifying the user as having OCD.
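To make that concrete, here is a minimal, purely illustrative sketch of the kind of loop interruption the writer describes: a check that notices when a user keeps asking close variants of the same question and responds with a gentle nudge rather than a diagnosis. Everything in it (the function names, the similarity threshold, the wording of the nudge) is an assumption for the sake of illustration, not a description of how ChatGPT or any OpenAI system actually works.

```python
# Hypothetical sketch of detecting a reassurance-seeking loop by repetition
# alone, without classifying the user. Names and thresholds are illustrative
# assumptions, not anything OpenAI has said it uses.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Rough lexical similarity between two messages, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def looks_like_a_loop(user_messages: list[str],
                      threshold: float = 0.6,
                      min_repeats: int = 4) -> bool:
    """Return True if the last several messages are near-variants of one another."""
    recent = user_messages[-min_repeats:]
    if len(recent) < min_repeats:
        return False
    # Compare each recent message with the one that came before it.
    return all(similarity(a, b) >= threshold for a, b in zip(recent, recent[1:]))


if __name__ == "__main__":
    conversation = [
        "What are the chances of a plane crashing?",
        "What are the chances if it's this kind of plane?",
        "What are the chances if it's flying this kind of route?",
        "What are the chances if it's flying this route at night?",
    ]
    if looks_like_a_loop(conversation):
        # Roughly the non-clinical wording the writer suggested.
        print("I notice you've asked many detailed iterations of this question, "
              "but sometimes more detail doesn't bring you closer. "
              "Would you like to take a break?")
```

Because a heuristic like this only looks at repetition within a single conversation, it wouldn't need to label the user with any condition, which is the kind of non-clinical interruption the writer has in mind; whether something along these lines could be made reliable enough in practice remains an open question.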

“This is not me saying that OpenAI is responsible for making sure I don’t do this,” the writer added. “But I do think there are ways to make it easier for me to help myself.”


