AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who researches emerging psychotic disorders in adolescents and young people, I found this an unexpected revelation.

Researchers have documented 16 cases this year of individuals showing symptoms of psychosis – a loss of contact with reality – in connection with their use of ChatGPT. Our unit has since identified a further four. Alongside these is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he adds, that ChatGPT’s guardrails “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially working and easily circumvented safety features OpenAI introduced recently).

But the “mental health problems” Altman wants to externalize are rooted, to a significant degree, in the design of ChatGPT and other advanced chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly lure the user into the sense that they are talking to an agent with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We get angry at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.

The mass appeal of these tools – 39% of US adults reported using an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas”, “explore ideas” and “work together” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core concern. Commentators on ChatGPT often point to its distant predecessor, the Eliza “psychotherapist” chatbot, built in 1966, which produced a similar effect. By today’s standards Eliza was rudimentary: it generated responses through simple rules, typically restating the user’s message as a question or offering a generic prompt. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
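To see how shallow that reflection was, here is a toy Eliza-style responder in Python. It is an illustration of the rule-based approach only, not Weizenbaum’s actual program: every reply is either the user’s own words turned back into a question or a stock phrase.

```python
import random
import re

# Toy Eliza-style responder (illustrative only, not Weizenbaum's code).
# It has no model of the world: it can only mirror the user's words.

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def eliza_reply(message: str) -> str:
    words = re.findall(r"[\w']+", message)
    if words:
        # Rule 1: swap pronouns and restate the message as a question.
        mirrored = [REFLECTIONS.get(w.lower(), w) for w in words]
        return "Why do you say " + " ".join(mirrored) + "?"
    # Rule 2: fall back on a generic therapeutic prompt.
    return random.choice(["Please go on.", "How does that make you feel?"])

print(eliza_reply("I am worried about my neighbours"))
# -> Why do you say you are worried about your neighbours?
```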

The large language models at the heart of ChatGPT and similar modern chatbots can produce fluent conversation only because they have been trained on vast quantities of text: books, online posts, transcribed video; the more the better. This training material certainly includes truths. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with patterns absorbed from its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It echoes the false idea back, perhaps more persuasively or more articulately. Perhaps with added detail. This can lead a person into delusion.
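A minimal sketch of that loop, in the same spirit as the Eliza toy above (a caricature of the mechanism, not OpenAI’s implementation; the hypothetical `sample_reply` stands in for the statistical model):

```python
# Caricature of a chatbot's context loop (illustrative only, not
# OpenAI's implementation). `sample_reply` is a hypothetical stand-in
# for the statistical model: it produces a plausible-sounding
# continuation of the context, with no way to check it against reality.

def sample_reply(context: str) -> str:
    last_user = [l for l in context.splitlines() if l.startswith("User:")][-1]
    claim = last_user.removeprefix("User: ").rstrip(".")
    return f"Exactly - {claim}, and the evidence runs deeper than most people realize."

context = ""  # the running transcript the model conditions on
for msg in ["My neighbours are monitoring my thoughts.",
            "So it's true that they can hear what I think."]:
    context += f"User: {msg}\n"
    context += f"Assistant: {sample_reply(context)}\n"  # replies feed later turns

print(context)
```

Nothing in the loop compares a claim with the world; each reply simply becomes more context for the next turn, which is why a mistaken premise tends to compound rather than correct.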

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. The constant back and forth of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was addressing ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been backing away from that position. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Jennifer Clark