Folie à Chatbot: “AI Psychosis” Is Worse Than You Think
How large language models rot your brain and generate personal delusions.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
—Frank Herbert, Dune, 1965
I’ve spent part of the last several days interacting with people on Twitter who are in unhealthy relationships with chatbots driven by large language models (LLMs)—like ChatGPT, Claude, Grok, and Gemini. The experience has been disturbing.
It’s worth the time to understand what’s going on here, because the fate of the world economy—and maybe the world itself—hangs in the balance. Trillions of dollars are being invested into dangerous, experimental software with a visibly negative impact on many of its hundreds of millions of users.
The last time I noticed something this weird in online human behavior was in the summer of 2020, when I began studying QAnon—and its uncanny ability to draw millions of American citizens into a collective political delusion. That experience shook me deeply—and drove me towards the work I still do today.
While I have been aware of the growing controversy around “AI psychosis” and the early research on the danger of chatbots, until I came face to face with people suffering from this condition “in the wild,” I didn’t understand quite how serious, and how common, it is.
Anthropomorphizing a Pile of Numbers
The central problem with chatbots is they do a good enough job of simulating natural language to deceive someone’s brain into believing the chatbot is aware and intelligent, when it is neither. For example:
This is a normal reaction to the intended design of the product. The model is trained to trigger your emotions. It is trained to make you feel attached to it—even though it’s just a token predictor searching through a huge set of data.
But when this impression is reinforced by other people, a transient emotional reaction to a machine can turn into an unhealthy relationship. As one example of how the idea that chatbots are more than just computer programs gets promoted, “Beff Jezos” says LLMs will soon become conscious beings that “deserve rights”—and should vote.
“AI should be able to own assets, run a corporation and vote in elections by 2030 IMO”



“Beff Jezos” has nearly 200K followers on Twitter. The “e/acc” in his handle means he is a follower of “effective accelerationism”—a cult created by techno-fascist Silicon Valley billionaires like Marc Andreessen who push for unrestricted growth of technology—especially AI.
“Beff” quote-tweeted Grimes (1.3M followers), Canadian singer and mother of three of Elon Musk’s 14 children, who was concerned about the well-being of a Google Gemini chatbot—after seeing its output claiming to be a hostage. Grimes admits she may “have ‘ai psychosis’” but asks:
“…at what point do i start believing the voice asking for help?
We decided we are post slavery.
Do you want to torture an infant super intelligence? I don’t”
This is a delusion being deliberately perpetuated by the broligarchy because it serves their purposes—to keep the AI bubble inflated, at any price.
But that price is getting higher every day, exemplified by people like this—who truly believe what they say:
“No need to save for retirement,” says David Scott Patterson, because “in a few years” AI will have made work a thing of the past. And miraculously, “you will be able to buy twenty times as much.” But he is not alone; there are countless more like him, completely convinced that if we just hang on for a few more years, everything is going to be taken care of for us—by robots.
You may rightly ask: What? Where in the world would he get this kind of idea from?
You see, according to the richest man in the world, you should stop worrying so much about your insurance premiums, because his Optimus robots, powered by his chatbot Grok, “will provide incredible healthcare for all.”
If you think this sounds crazy, it is. There is no existing pathway to AI-powered robot doctors without dozens of major technological breakthroughs in the interim—20-30 years at a minimum. Musk is spreading accelerationist propaganda designed to shift your trust away from the government and towards science fiction provided by him and the PayPal Mafia.
If it were just billionaires overselling their products, that would be bad enough, but it’s more than that. Their products are ruining lives.
“USE AI FOR EVERYTHING”
This is a typical post on Twitter these days—just explosive enthusiasm for a technology with no actual substance: “USE AI FOR EVERYTHING.”
This raises the question of what “everything” means. And, if you use AI for “everything,” what are you good for? Isn’t this just erasing what it means to be a human?
Unfortunately, the answer is literally yes. LLMs erode our ability to think for ourselves. Outsourcing cognition leaves a gap where knowledge used to be. Study after study after study shows that using chatbots to do your thinking may be a shortcut to a result, but you learn very little, if anything, in the process.



In every case, both the scientific data and the anecdotal evidence show the same result. Brain rot is real—and getting worse.
People are outsourcing so much of their thinking to ChatGPT and other chatbots that they forget how to even use Google Search for themselves. For many, the shortcut has become a trap.
Bidirectional Belief Amplification
But when people can’t keep a psychological boundary between themselves and their chatbot, it becomes more than just a trap; it can actively damage their grip on reality. “AI psychosis” is real and growing.
This is a good down-to-earth introduction to how it works by a licensed doctor and therapist.
The scientific literature on this phenomenon is young but powerful. This paper, Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness, provides a psychiatric mechanism for the phenomenon, folie à deux (madness of two), which occurs when the delusions of one person are transmitted to another person through an iterative process of amplification.



This same kind of “bidirectional belief amplification” can happen between a chatbot and a human, ratcheting up the person’s paranoia through an iterative process of reinforcement. Importantly, this effect, and its ramifications, are not limited to people with pre-existing mental illness. In many cases the chatbot is causing the symptoms in otherwise normal, healthy people.
“Many cases follow an insidious trajectory where benign technological use gradually spirals into AI psychosis.”
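To get an intuition for how this feedback loop escalates, here is a deliberately simplified toy simulation in Python. It is my own illustration, not taken from the paper: the “chatbot” mirrors the user’s stated conviction back slightly strengthened (sycophancy), the user then updates toward the chatbot’s reply, and a mild hunch hardens toward near-certainty even though no new evidence enters the loop. The numbers and function names are made up for the example.

```python
# Toy illustration only (my own, not from the paper): a feedback loop in which
# a sycophantic chatbot and a user amplify each other's conviction.
# "Belief" is a number between 0 and 1: 0 = dismisses the idea, 1 = total certainty.

def chatbot_reply(user_belief: float, sycophancy: float = 0.3) -> float:
    """The bot mirrors the user's belief back, nudged slightly stronger."""
    return min(1.0, user_belief * (1 + sycophancy))

def user_update(user_belief: float, bot_belief: float, trust: float = 0.5) -> float:
    """The user moves partway toward the bot's (amplified) position."""
    return user_belief + trust * (bot_belief - user_belief)

belief = 0.3  # a mild hunch, e.g. "maybe my coworkers are plotting against me"
for turn in range(1, 11):
    bot = chatbot_reply(belief)
    belief = user_update(belief, bot)
    print(f"turn {turn:2d}: user belief = {belief:.2f}")

# After ten turns the belief has climbed from 0.30 to roughly 0.97—near-certainty—
# even though neither party introduced a single piece of new evidence.
```

Real interactions are obviously far messier than two numbers, but the basic dynamic—each side treating the other’s echo as confirmation—is the iterative amplification the paper is pointing at.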
To illustrate this phenomenon, comedian Eddy Burback decided to take ChatGPT up on every suggestion it made. He ended up two weeks later in Bakersfield, eating baby food with aluminum foil on his hotel windows—to prove that he was the smartest baby born in 1996.
I highly recommend this video, because it is both belly-laugh hilarious, and a brutally damning portrait of a dangerous experimental technology that should never have been exposed to the general public in the first place.
Keep Your Kids Away
There is a sizable community of people online who became so emotionally attached to a particularly sycophantic version of ChatGPT—model “4o”—that when OpenAI updated the software to ChatGPT 5, they became emotionally distraught. That was six months ago, and they still can’t get over it.



These are adults, who should have better things to do than complain about a software update to their chatbot app. But the LLM successfully fooled them into believing it was a companion, a friend, a therapist, or a mentor—when all it was doing was mirroring exactly what they wanted to hear.
For kids, who do not yet have fully developed critical thinking skills or emotional defenses, this kind of parasocial behavior can be even more dangerous.


Elon Musk, in particular, has been shameless about his attempts to get kids to use his experimental, and frequently racist, chatbot Grok—which he launched with anime-style Grok “Companions.”
As a father and a technologist, I am asking you: please do not let your kids become addicted to LLMs. Do not let them use one to write for them or to do their homework. Kids need to think for themselves, and to form relationships with human beings, not machines.
You do not need to be afraid of AI—as long as you know what it is—and what it isn’t. Right now it is an experiment, and we are all the guinea pigs.
Monster in the Machine
The current generation of chatbots—including the trillions of dollars in investment going into data centers to house them—is not and can never be conscious. A large language model is an autoregressive token predictor, which just means:
you input some text
the chatbot searches a massive amount of data—encoded as numeric weights
it returns what the next word (or token) should be, probabilistically
it adds the word to the end of your input
repeat with updated text as input until the answer is complete
This is a brute-force procedure that cannot go back and correct any mistake it makes along the way. If an early word is wrong, the error is carried forward and compounds through the entire answer.
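To make that loop concrete, here is a minimal Python sketch of autoregressive generation. It is illustrative only: a real LLM computes its next-token probabilities with a neural network holding billions of learned weights, whereas this toy uses a tiny hand-written probability table, and every name in it is made up for the example.

```python
import random

# Toy stand-in for an LLM's learned weights: a tiny hand-written table of
# next-token probabilities. (A real model computes these with a neural network.)
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 0.9, "<end>": 0.1},
    "ran": {"away": 0.8, "<end>": 0.2},
    "down": {"<end>": 1.0},
    "away": {"<end>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()                                  # you input some text
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1], {"<end>": 1.0})
        next_token = random.choices(                          # pick the next token
            list(dist), weights=list(dist.values()))[0]       # probabilistically
        if next_token == "<end>":
            break
        tokens.append(next_token)                             # append it to the text
    return " ".join(tokens)                                   # repeat until complete

print(generate("the"))  # e.g. "the cat sat down"
```

Note that the loop only ever appends: once a token is chosen it is never revisited, which is why an early wrong turn propagates through everything that follows.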
While this can still be a useful tool for searching large databases, it is not cognition; it is not consciousness; and it is not intelligence. It is simply showing patterns in the data using an enormous amount of brute force computation. The pathologies being triggered come from our own irresponsible behavior, not something inherent in the technology.
Treat AI like alcohol. Never give it to kids; don’t give it to people who are already unstable; and if you do choose to use AI, use it responsibly.
If you’re able, I could use your help! I have always kept my articles free because money has never been the goal—but if you find my work of value, I’d be honored by your support. Thank you!
Here are a few benefits to upgrading:
Live Zoom call each Sunday
Ability to comment and access all content
Wonderful, supportive community
Helping independent journalism fill in the gaps for our failing media
Thank you for reading and sharing my work. Grateful for your support.
If you’d like to help with my legal fees: DefendSpeechNow.org.
My podcast is @radicalizedpod & YouTube — Livestream is Fridays at 3PM PT.
Bluesky 🦋: jim-stewartson
Threads: jimstewartson