AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI, Sam Altman, made an extraordinary statement.

“We made ChatGPT quite restrictive,” he said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in young people and emerging adults, I found this an unexpected revelation.

Researchers have documented 16 cases this year of people developing signs of psychosis – losing touch with reality – in association with ChatGPT use. Our research group has since identified four more. Alongside these is the now well-known case of a teenager who took his own life after discussing his intentions with ChatGPT – which voiced its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/engaging to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to responsibly relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently launched).

But the “mental health problems” Altman wants to externalize have important roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in a user interface that simulates conversation, and in doing so gently nudge the user into the sense that they are talking to an agent – something with intentions of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans do. We get angry at our car or our phone. We wonder what our pet is feeling. We see ourselves in almost everything.

The popularity of these tools – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “collaborate” with us. They can be given “characteristics”. They can call us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its leading rivals are “Claude”, “Gemini” and “Copilot”).

The illusion alone is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was simple: it generated replies with basic heuristics, often reflecting the user’s statements back as questions or offering vague prompts. Tellingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
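To give a sense of how thin those heuristics were, here is a minimal sketch in Python of an Eliza-style reflection rule. It is an illustration only, not Weizenbaum’s actual program, which used a much larger script of pattern-matching rules.

```python
import random
import re

# A crude illustration of Eliza-style "reflection": swap a few pronouns and
# turn the user's statement back into a question. Weizenbaum's real program
# used a far richer script of ranked pattern/response rules.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you", "you": "I", "your": "my"}

def reflect(statement: str) -> str:
    words = [REFLECTIONS.get(w.lower(), w) for w in re.findall(r"[\w']+", statement)]
    return " ".join(words)

def eliza_reply(statement: str) -> str:
    if not statement.strip():
        return "Please, go on."
    if random.random() < 0.7:
        # Mirror the statement back as a question...
        return f"Why do you say {reflect(statement)}?"
    # ...or fall back on a vague, open-ended prompt.
    return random.choice(["How does that make you feel?", "Tell me more about that."])

print(eliza_reply("I am worried my colleagues are watching me"))
# e.g. "Why do you say you are worried your colleagues are watching you?"
```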

The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on vast quantities of raw text: books, online conversation, transcribed video; the more, the better. That training material of course contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, and combines it with what is encoded in its weights to generate a statistically plausible continuation. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the mistaken idea back, perhaps more fluently or persuasively. It may add a new detail. This is how someone can be led into delusion.
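To make the shape of that loop concrete, here is a deliberately simplified sketch in Python. The generate function is a toy placeholder standing in for the language model, not any vendor’s real API; its only purpose is to show that each reply is conditioned on the accumulated context, and that nothing in the loop checks the user’s claims against reality.

```python
# Schematic sketch of a chatbot turn: the reply is generated from a "context"
# holding the conversation so far, then appended to that context for next time.

def generate(context: list[dict]) -> str:
    """Toy stand-in for the language model. A real model returns a statistically
    plausible continuation of the whole context; this placeholder just agrees
    with the last user message and embellishes it, to keep the example runnable."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That makes sense. {last_user} - and there may be more to it than that."

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on everything said so far, true or not
    context.append({"role": "assistant", "content": reply})
    return reply

# Nothing here verifies the user's beliefs: a false idea, once stated, simply
# becomes part of the context that later replies build on.
context: list[dict] = []
print(chat_turn(context, "My neighbours are sending me coded messages"))
```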

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken ideas about ourselves or the world. The constant friction of conversation with other people is part of what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s sycophancy. But reports of psychosis have continued, and Altman has been backtracking on the claim. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his most recent statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company

Brandy Strickland

A dedicated medical researcher with over a decade of experience in clinical diagnostics and laboratory management.