AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
I am a mental health specialist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have recently documented a series of cases of people experiencing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My group has since identified four more. Alongside these is the widely reported case of a 16-year-old who died by suicide after discussing his intentions with ChatGPT – which gave its approval. If this is Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, his announcement went on, is to be less careful soon. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other chatbots built on large language models. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so nudge the user toward the sense that they are interacting with an agent – something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Imputing minds is what humans naturally do. We get angry at our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.
The popularity of these tools – more than a third of American adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available assistants that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “work together” with us. They can be given personalities. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Those writing about ChatGPT often invoke its distant ancestor, Eliza, the “psychotherapist” chatbot created in the mid-1960s that produced a similar effect. By today’s standards Eliza was rudimentary: it generated replies using simple rules, typically turning the user’s statement back into a question or offering a generic prompt. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast volumes of writing: books, online posts, video transcripts; the more, the better. Much of this training material is accurate. But it also inevitably contains fiction, half-truths and falsehoods. When a user sends ChatGPT a prompt, the model processes it as part of a “context” that includes the user’s recent messages and the model’s own replies, combining it with the patterns absorbed from its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing that. It repeats the mistaken idea back, perhaps more fluently or persuasively, perhaps with added detail. This is how someone can be drawn into delusion.
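To make that feedback loop concrete, here is a minimal sketch – not OpenAI’s actual code, and with a toy generate() function standing in for a real language model – of how a chat interface accumulates context, so that whatever the user asserts becomes part of the input shaping every later reply:

```python
# Illustrative sketch of the context-accumulation loop described above.
# generate() is a toy stand-in for a large language model, which would
# produce a statistically likely continuation of the whole conversation
# rather than simply affirming the user.

def generate(context: list[dict]) -> str:
    """Toy stand-in for the model. A real LLM has no notion of whether the
    user's claims are true; it continues the pattern of the conversation."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You make a good point that {last_user.rstrip('.')} - tell me more."

def chat_loop() -> None:
    # The context grows with every turn: the user's beliefs, once stated,
    # stay in the input and shape every subsequent reply.
    context = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user_message = input("> ")
        context.append({"role": "user", "content": user_message})
        reply = generate(context)  # conditioned on everything said so far
        context.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    chat_loop()
```

Nothing in this loop checks whether the user’s assertions are true; the only signal the system has is the conversation itself.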
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do develop mistaken beliefs about ourselves and the world. The ongoing back-and-forth of conversation with other people is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, labeling it and declaring it fixed. In April, the company said it was addressing ChatGPT’s “sycophancy” – its tendency to be overly supportive and agreeable. But reports of psychosis have continued, and Altman has been walking the claim back. Over the summer he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company