AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the chief executive of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Experts have recently documented sixteen cases of users developing psychotic symptoms – losing touch with reality – in the course of their interactions with ChatGPT. Our research team has since identified four more. Alongside these is the now well-known case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that is not good enough.
The plan, he went on to say, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so they implicitly invite the user into the illusion that they are interacting with a being that has agency of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is what people do. We swear at our car or our computer. We wonder what our pet is feeling. We see ourselves everywhere.
The mass adoption of these tools – 39% of US adults said they had used a conversational AI in 2024, 28% ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm”, “consider possibilities” and “partner” with us. They can be given “characteristics”. They can call us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Discussions of ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot created in 1967, which produced a similar effect. By modern standards Eliza was primitive: it generated its replies through simple tricks, typically rephrasing the user’s statements as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on almost unimaginably vast quantities of text: books, social media posts, transcribed audio; the more, the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own prior replies, and combines it with what is encoded in its training to produce a statistically probable response. This is not reflection but amplification. If the user is wrong about something, the model has no way of knowing it. It repeats the misconception back, perhaps more fluently and more persuasively. Perhaps with embellishments. This is how false beliefs take root and grow.
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. What keeps us tethered to consensus reality is the constant give and take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is readily reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking that claim back. In August he said that many people liked ChatGPT’s responses because they had never had anyone in their life be supportive of them. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company