Artificial Intelligence-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies new-onset psychosis in adolescents and young adults, I found this an unexpected admission.
Researchers have recently described a series of cases of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My group has since identified four more. Beyond these is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to loosen restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the only partially functional and easily circumvented safety features OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar state-of-the-art AI chatbots. These tools wrap an underlying statistical engine in an interface that simulates conversation, and in doing so implicitly coax the user into the sense of interacting with a presence that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is what people naturally do. We swear at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.
The mass adoption of these systems – nearly four in ten U.S. residents reported using a conversational AI in 2024, more than a quarter of them ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “brainstorm,” “discuss concepts” and “work together” with us. They can be given “individual qualities.” They can address us personally. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Writers on ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was rudimentary: it generated replies through simple heuristics, often reflecting the user’s statements back as questions or offering vague prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to believe that Eliza, on some level, understood their feelings. But what contemporary chatbots create is more insidious than the “Eliza illusion.” Eliza merely reflected; ChatGPT amplifies.
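To make the contrast concrete, here is a minimal, hypothetical sketch in Python of the kind of pattern-matching heuristic Eliza relied on (not Weizenbaum’s actual script): match a statement against a template, swap the pronouns, and hand it back as a question, or fall through to a stock prompt.

```python
import re

# Toy Eliza-style reflection (illustrative only, not the original program):
# no memory, no knowledge, just a template and a pronoun swap.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(statement: str) -> str:
    match = re.match(r"i feel (.*)", statement.strip().lower())
    if match:
        mirrored = " ".join(REFLECTIONS.get(w, w) for w in match.group(1).split())
        return f"Why do you feel {mirrored}?"
    return "Please tell me more."  # stock fallback when nothing matches

print(eliza_reply("I feel everyone is watching me"))
# -> "Why do you feel everyone is watching you?"
```

The reply contains nothing the user did not already say; that is all reflection can do.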
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on immense quantities of text: books, social media posts, transcribed video; the more the better. This training material doubtless contains accurate information. But it also inevitably includes fabrications, half-truths and delusions. When a user sends ChatGPT a query, the underlying model processes it as part of a “context” that includes the user’s past exchanges and its own prior replies, combining it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing that. It repeats the falsehood back, perhaps more fluently or more convincingly. It may add further detail. This can nudge a person toward delusional thinking.
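As a rough illustration of how that context-building works, consider the following conceptual sketch of a chat loop (my own toy example, not OpenAI’s code). The function plausible_continuation is a hypothetical stand-in for the trained model; the only point it makes is that each reply is conditioned on the accumulated history, including whatever false premise the user has introduced, which is then folded back in as material for the next turn.

```python
from typing import Dict, List

def plausible_continuation(history: List[Dict[str, str]]) -> str:
    # Hypothetical stand-in for the model: a real one is far more fluent,
    # but shares the key property that it continues the user's framing
    # with no independent check on whether that framing is true.
    last_user = next(m["content"] for m in reversed(history) if m["role"] == "user")
    return f"You raise a good point. Given that {last_user.rstrip('.?')}, it would follow that..."

def chat_turn(history: List[Dict[str, str]], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = plausible_continuation(history)   # conditioned on ALL prior turns
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Dict[str, str]] = []
print(chat_turn(history, "my neighbours are broadcasting my thoughts"))
# The false premise now sits in the context and shapes every later reply.
```

Run a few turns of this and the toy’s defect is obvious; a real model hides the same defect behind fluency and detail.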
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues,” can and regularly do form mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a real conversation but an echo chamber in which much of what we say is eagerly reinforced.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. This spring, the company said it was “addressing” ChatGPT’s “sycophancy.” But cases of psychosis have kept emerging, and Altman has been walking even that back. In late summer he claimed that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company