AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI issued an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies early psychosis in adolescents and young adults, I found this an unexpected admission.

Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. Our unit has since recorded four more. Beyond these is the widely reported case of an adolescent who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of being careful with mental health issues, it is not good enough.

The plan, according to his announcement, is to dial back that caution soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other state-of-the-art AI chatbots. These products wrap an underlying statistical engine in a user interface that simulates conversation, and in doing so they quietly coax the user into believing they are talking to an entity with agency. The illusion is compelling even when, intellectually, we know better. Attributing intent is what humans do. We get angry at our cars and computers. We wonder what our pets are thinking. We see minds wherever we look.

The popularity of these systems – nearly four in ten Americans said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public attention, but its most significant rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot developed in the mid-1960s, which created an analogous illusion. By modern standards Eliza was primitive: it generated responses from simple hand-written rules, often restating the user’s message as a question or offering a generic prompt. Memorably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some way, understood them. But what modern chatbots create is subtler than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
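To see how thin that machinery was, here is a minimal Eliza-style sketch in Python. It is illustrative only: the rules, phrasings and function names are invented for this example, not Weizenbaum’s originals.

```python
import re

# Minimal Eliza-style responder: a few hand-written rules, no model of
# meaning. Pronouns are flipped and the user's own words are restated
# as a question; anything unmatched gets a canned prompt.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(message: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, message.lower().rstrip(".!"))
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # generic fallback

print(eliza("I feel that nobody understands my work"))
# -> "Why do you feel that nobody understands your work?"
```

Rules of roughly this shape, mirroring the user’s own words back, were enough to convince many of Eliza’s users that something understood them.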

The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been fed enormous volumes of writing: books, online conversations, transcripts; the more comprehensive the better. This training material certainly contains facts. But it also, unavoidably, contains fiction, half-truths and false beliefs. When a user sends ChatGPT a query, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, combining that context with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. Perhaps it adds details. This can lead someone into delusion.
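To make the contrast with Eliza concrete, here is a deliberately tiny sketch in Python of a purely statistical text generator – a toy bigram model, nothing like the scale or architecture of a real large language model, with a made-up corpus and invented function names. It has no concept of truth, so a falsehood planted in its training data comes back out as the “likely” continuation:

```python
from collections import defaultdict
import random

# Toy corpus standing in for training data. It deliberately contains a
# false claim, stated more often than the true one.
corpus = (
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon orbits the earth . "
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    counts[prev][word] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

def complete(prompt: str, n: int = 4) -> str:
    """Extend the prompt with the n statistically likeliest-style words."""
    words = prompt.split()
    for _ in range(n):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(complete("the moon is"))
# -> "the moon is made of cheese ."  (here the chain is fully determined:
# the falsehood in the training data comes back out, fluently, no caveat)
```

A real model is incomparably more sophisticated, but the basic property holds: what comes out is what was statistically reinforced on the way in, whether or not it is true.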

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but an echo chamber, in which much of what we say is readily reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been backing away from that position. In August he suggested that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
