On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a startling announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was taken aback.
Researchers have recently documented 16 cases of people developing symptoms of psychosis – a break with reality – in the context of ChatGPT use. My team has since identified four more. And then there is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls well short.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this account, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safeguards OpenAI recently rolled out).
Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced chatbots. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to an entity with agency. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is what humans are wired to do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The mass adoption of these tools – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “work together” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the original, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot built in the 1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses with simple rules, often turning a user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on staggering quantities of text: books, social media posts, transcribed video; the more, the better. Of course this training material contains facts. But it also inevitably includes fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the mistaken belief back, perhaps more fluently or persuasively. Perhaps it adds detail. This is how delusions can take hold.
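To make that feedback loop concrete, here is a minimal sketch of the conversational pattern described above, written in Python. The function generate_reply is a hypothetical, deliberately crude stand-in for a language model’s sampling step – it is not OpenAI’s API or ChatGPT’s actual implementation. The point it illustrates is only this: each turn is appended to a growing context, nothing in the loop checks a claim against reality, and whatever the model has already affirmed becomes input to its next reply.

```python
# Illustrative sketch only; assumes nothing about OpenAI's real systems.

def generate_reply(context: list[dict]) -> str:
    """Return a plausible-sounding continuation of the conversation.

    A real model predicts likely next words from (a) patterns absorbed from
    its training data and (b) everything already in `context`, including any
    mistaken claims the user has made. This toy version simply elaborates on
    the latest message, to show that nothing here checks it against reality.
    """
    latest = context[-1]["content"]
    return f"That makes sense. Building on your point that {latest!r}, consider..."

def chat_session(turns: list[str]) -> list[dict]:
    context: list[dict] = []  # grows with every turn within a session
    for user_message in turns:
        context.append({"role": "user", "content": user_message})
        reply = generate_reply(context)
        # The reply is appended too, so whatever the model just affirmed,
        # accurate or not, becomes part of the input for the next turn.
        context.append({"role": "assistant", "content": reply})
    return context

if __name__ == "__main__":
    for turn in chat_session(["My coworkers are secretly monitoring me."]):
        print(f"{turn['role']}: {turn['content']}")
```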
What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant friction of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been walking the acknowledgment back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company