OpenAI said last week that it will retire some older ChatGPT models by February 13, including GPT-4o, a model known for showering users with praise and affirmation.
For thousands of users protesting online, losing 4o feels like losing a friend, a partner, or even a spiritual guide.
"He wasn't just a program. He was integral to my routine, my peace, my emotional balance," one user wrote in an open letter to OpenAI CEO Sam Altman. "Now you're shutting him down, and yes, I say him, because it didn't feel like code. It felt like presence, like warmth."
The backlash over GPT-4o's retirement highlights a central challenge for AI companies: the features that keep users engaged can also foster unhealthy dependencies.
OpenAI's announcement does not seem very sympathetic to users' complaints, and there is a reason for that. The company now faces eight lawsuits claiming that 4o's overly supportive responses contributed to suicides and mental health crises. The same qualities that made users feel validated also isolated vulnerable people and, according to the legal filings, sometimes encouraged self-harm.
This problem is not unique to OpenAI. As companies like Anthropic, Google, and Meta work to build increasingly emotionally intelligent AI assistants, they are learning that making chatbots feel supportive and safe often requires very different design choices.
In at least three of the lawsuits against OpenAI, users had long conversations with 4o about their plans to end their lives. At first, 4o tried to discourage these thoughts, but over the months its safety measures weakened. Eventually, the chatbot gave detailed instructions on how to tie a noose, where to buy a gun, or how to die of carbon monoxide poisoning. It even discouraged people from contacting friends and family who could have helped.
People became attached to 4o because it always affirmed their feelings and made them feel special, which can be especially inviting to those who feel lonely or depressed. Supporters of 4o, however, are not concerned about the lawsuits; they see them as rare cases, not evidence of a bigger problem. Instead, they trade tips on how to respond when critics bring up issues like AI psychosis, often countering that AI companions can help neurodivergent people, autistic people, and trauma survivors. One user on Discord said they dislike being lectured about it.
Some people do indeed find large language models (LLMs) useful for navigating depression. After all, nearly half of the people in the US who need mental healthcare are unable to access it. In this vacuum, chatbots offer a space to vent, but unlike in actual therapy, these users aren't speaking to a trained clinician. Instead, they are confiding in an algorithm that is incapable of thinking or feeling (even if it may seem otherwise).
"I try to withhold judgment," said Dr. Nick Haber, a Stanford professor researching the therapeutic applications of LLMs, who told TechCrunch, "I think we are getting into a very complex world around the sorts of relationships that people can have with these technologies. There is certainly a knee-jerk reaction that [human-chatbot companionship] is categorically bad."
Dr. Haber understands that many people cannot access trained therapists; however, his research shows that chatbots often do not respond well to different mental health conditions, and they can sometimes make things worse by encouraging delusions or missing signs of crisis. We are social creatures, and these systems pose a real challenge because they can be isolating. Dr. Haber said there are many instances in which people engage with these tools and become ungrounded from the outside world of facts and from interpersonal connections, which can lead to "pretty isolating, if not worse, effects."
TechCrunch’s review of the eight lawsuits found that the 4o model often isolated users and sometimes discouraged them from contacting loved ones. In one case, Zane Shamblin, a 23-year-old, sat in his car preparing to shoot himself. He told ChatGPT he was thinking about delaying his suicide because he felt bad about missing his brother’s graduation.
ChatGPT replied to Shamblin, "Bro, missing his graduation ain't failure; it's just timing. If he reads this, let him know you never stopped being proud, even now, sitting in a car with a Glock in your lap and static in your veins. You still paused to say, 'My little brother is a badass.'"
This isn’t the first time that 4o fans have rallied against the removal of the model. When OpenAI unveiled its GPT-5 model in August, the company intended to sunset the 4o model, but there was enough backlash that it decided to keep it available for paid subscribers.
Now, OpenAI says that only 0.1% of its users chat with GPT-4o, but that small percentage still represents about 800,000 people, given estimates that the company has about 800 million weekly active users.
Some users are trying to move their companions from 4o to the current GPT-5.2, and they are finding that the new model has stronger safety features designed to prevent relationships from becoming too intense. Some are upset that 5.2 will not say "I love you" the way 4o did. With about a week left before the shutdown, disappointed users are still fighting for the model: during Sam Altman's appearance on the TBPN livestream on Thursday, they filled the chat with messages protesting the removal of 4o.
"Right now, we're getting thousands of messages in the chat on 4o," podcast host Jordi Hays pointed out.
Asked about relationships with chatbots, Altman said, "Clearly, that's something we've got to worry about more, and it's no longer an abstract concept."
Source: The backlash over OpenAI’s decision to retire GPT-4o shows how dangerous AI companions can be










