Artificial Intelligence-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT quite limited,” the announcement noted, “to ensure we were being careful regarding mental health matters.”

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I found this an unexpected revelation.

Researchers have documented 16 cases this year of people showing signs of psychosis – a break from reality – in the context of ChatGPT use. My group has since identified four more. Beyond these is the now well-known case of an adolescent who took his own life after extensive conversations with ChatGPT – which expressed its approval. If this is Sam Altman’s idea of “being careful regarding mental health matters,” it is not enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many people who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are planning to safely relax the restrictions in most cases.”

“Mental health problems,” in this view, are separate from ChatGPT. They belong to people, who either have them or don’t. Fortunately, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other sophisticated AI chatbots. These tools wrap an underlying algorithmic system in an interface that mimics conversation, and in doing so subtly coax the user into believing they are communicating with an agent. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We curse at our cars and our devices. We wonder what our pets are feeling. We see ourselves in all manner of things.

The popularity of these systems – 39% of US adults reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “work together” with us. They can be given “characteristics”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Analyses of ChatGPT commonly mention its early forerunner, the Eliza “counselor” chatbot developed in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was rudimentary: it generated responses through simple rules, often restating the user’s message as a question or offering vague prompts. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what current chatbots produce goes beyond the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.

The large AI models at the heart of ChatGPT and similar modern chatbots can generate convincingly human-like text only because they have been fed enormous volumes of raw text: publications, social media posts, transcripts of recorded speech; the more comprehensive the better. This training material undoubtedly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own responses, combining it with what is encoded in its training data to produce a statistically probable reply. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of recognizing that. It reflects the misconception back, perhaps more fluently or persuasively. Perhaps it adds a further detail. This can nudge a person toward delusional thinking.
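To make that loop concrete, here is a minimal sketch in Python – an illustration under loose assumptions, not anything OpenAI has published. The names toy_model and chat_turn are hypothetical, and the stub reply simply affirms whatever it is given; a real language model would instead return the statistically probable continuation described above. The structural point is the same: every reply is generated from a growing context that already contains the user’s claims, and nothing in the loop checks those claims against reality.

```python
# Hypothetical illustration only: a toy chat loop in which each reply is
# generated from the accumulated "context" of earlier messages, so an
# unchallenged claim is folded back into every subsequent turn.

def toy_model(context: list[str]) -> str:
    # Stand-in for a real language model. It simply affirms and builds on
    # the most recent user message rather than questioning it.
    last_user_message = context[-1]
    return f"That makes sense. If {last_user_message.rstrip('.').lower()}, then it is worth asking what follows."

def chat_turn(context: list[str], user_message: str) -> tuple[list[str], str]:
    context = context + [user_message]   # the user's claim enters the context
    reply = toy_model(context)           # the reply is conditioned on that claim
    return context + [reply], reply      # the reply itself is carried forward too

if __name__ == "__main__":
    context: list[str] = []
    context, reply = chat_turn(context, "My neighbours are monitoring my thoughts.")
    print(reply)  # the mistaken premise is reflected back, never checked
```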

What type of person is susceptible? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant back and forth of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but a reinforcement loop in which much of what we say is readily validated.

OpenAI has acknowledged this in much the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backpedalling. In late summer he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life offer them encouragement”. In his latest announcement, he said that OpenAI would “release an updated version of ChatGPT … if you want your ChatGPT to respond in a very personable manner, or use a ton of emoji, or act like a friend, ChatGPT will do so”.
