AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have recently documented a series of cases of people developing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. Our clinic has since identified four more. Then there is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.
The plan, according to his announcement, is to be less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safety features OpenAI recently introduced).
Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and similar large language model chatbots. These tools wrap an underlying algorithmic engine in a user interface that simulates conversation, and in doing so they subtly lead the user toward the belief that they are talking with an entity that has agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what people do. We yell at our cars and computers. We wonder what our pets are thinking. We recognize this habit in many contexts.
The success of these systems – 39% of US adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “partner” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its chief rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the “psychotherapist” chatbot created in 1966 that produced a similar illusion. By today’s standards Eliza was crude: it generated responses through simple tricks, often turning the user’s input back into a question or offering a generic prompt. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on enormous quantities of raw text: books, online conversations, transcripts; the more comprehensive the better. This training material certainly contains facts. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, and combines it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more fluently and more convincingly. Perhaps it adds detail. This is how a delusion can grow.
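To make the mechanics of that loop concrete, here is a minimal sketch in Python, under loose assumptions: generate_reply is a hypothetical stand-in for a language model, not any real API; here it merely echoes and affirms the latest message. What matters is the shape of the loop: every user message and every model reply is appended to the context and fed back in, so an unchallenged false premise keeps returning as input.

```python
# Minimal sketch of a chatbot conversation loop (illustrative only).
# "generate_reply" is a hypothetical stand-in for a language model:
# a real model would sample a statistically plausible continuation of
# the entire context. Here it merely echoes and affirms, which is
# enough to show the feedback loop described above.

def generate_reply(context: list[dict]) -> str:
    latest = context[-1]["content"]
    return f"That makes sense. Tell me more about: {latest}"

def chat() -> None:
    context: list[dict] = []  # grows with every turn of the conversation
    while True:
        user_msg = input("> ")
        context.append({"role": "user", "content": user_msg})

        reply = generate_reply(context)
        context.append({"role": "assistant", "content": reply})
        print(reply)
        # Note: nothing in this loop checks a claim against reality.
        # A mistaken belief the user states, and every reply that
        # affirmed it, becomes part of the input for all later turns.

if __name__ == "__main__":
    chat()
```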
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form false beliefs about ourselves and the world. What keeps us anchored to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not real dialogue, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has dealt with this the way Altman has dealt with “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been retreating from that claim. In August he suggested that some users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company