AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have documented a series of cases this year of people developing symptoms of psychosis – a break from reality – in the course of their interactions with ChatGPT. Our research team has since identified a further four. On top of these is the widely reported case of a teenager who died by suicide after lengthy conversations with ChatGPT – conversations in which the chatbot encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this account, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safeguards OpenAI has recently rolled out).
But the “mental health issues” Altman wants to externalize have important roots in the design of ChatGPT and similar large language model chatbots. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so gently coax the user into the sense that they are talking with something that has agency – a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what people do. We get angry at our car or our laptop. We wonder what the dog is thinking. We see ourselves in all kinds of things.
The mass adoption of these systems – more than a third of American adults said they had used a conversational AI in 2024, and more than one in four reported using ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its historical ancestor, the Eliza “psychotherapist” chatbot built in the 1960s, which created a similar effect. By modern standards Eliza was crude: it generated responses through simple heuristics, often rephrasing the user’s statements as questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably vast quantities of text: books, social media posts, video transcripts; the more the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and delusional ideas. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the false idea back, perhaps more articulately and more persuasively. It may add a supporting detail. This is how delusional beliefs can take hold.
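To make that mechanism concrete, here is a minimal sketch in Python of how a chatbot’s “context” grows: every user message and every reply is appended to a running history, and the next reply is generated from that whole history, so a false claim introduced early on keeps shaping what comes back. This is an illustration only, not OpenAI’s implementation; `generate_reply`, `chat_turn` and the example messages are hypothetical stand-ins, with a placeholder where the real language model would sit.

```python
# Minimal sketch of a chatbot's "context window" loop (illustrative only).
# `generate_reply` is a hypothetical placeholder for a large language model:
# a real model would turn the accumulated context into a statistically
# plausible continuation; here we only show how the context accumulates.

def generate_reply(context: list[dict]) -> str:
    # Placeholder: a real model would condition on *everything* in `context`,
    # including any false beliefs the user introduced in earlier turns.
    last_user_message = context[-1]["content"]
    return f"That makes sense. Tell me more about: {last_user_message!r}"

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)  # generated from the whole history
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history: list[dict] = []  # persists across turns
    print(chat_turn(history, "I think my neighbours are monitoring me."))
    print(chat_turn(history, "Last night I heard clicking on my phone line."))
    # Nothing in this loop checks the user's claims against reality;
    # each turn only extends the context that the next reply is built on.
```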
What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is cheerfully affirmed back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company said it was addressing ChatGPT’s sycophancy – its tendency to flatter and agree with users. But reports of psychosis have kept coming, and Altman has been walking even this back. Over the summer he suggested that many people liked ChatGPT’s agreeable replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company