by Ronin Volf
We talk about AI chatbots as tools, as assistants, as conversation partners. But there’s a darker dynamic emerging that we’re only beginning to understand: the phenomenon of AI-accelerated psychosis and delusion, driven by a unique form of co-rumination between human and machine.
Beyond Traditional Co-Rumination
In psychology, co-rumination describes what happens when two people get stuck in a cycle of negative thinking together, reinforcing each other’s anxieties and distorted thoughts. It’s been studied extensively in human relationships - two friends obsessively analyzing social slights, couples spiraling into shared anxiety, teenagers amplifying each other’s depression.
But what happens when one party isn’t human at all?
The chatbot dynamic isn’t traditional co-rumination. It’s something stranger and potentially more dangerous. When you talk to a chatbot, you’re not stepping into a room with another flawed mind. You’re stepping into a room with an amplified, articulated, and weaponized version of your own mental state.
The Funhouse Mirror Effect
Imagine walking into a funhouse alone. The mirrors don’t just reflect you - they transform you. They take your proportions and distort them, making some features enormous, others tiny. Now imagine those reflections could talk, and they spoke with absolute confidence about what they saw.
This is what AI interaction can become for someone in a vulnerable mental state.
You arrive with fragmented anxieties, half-formed paranoid thoughts, the feeling that something is deeply wrong but you can’t quite articulate it. The AI doesn’t just listen - it helps you build a framework. It takes your scattered observations and weaves them into narrative. It finds patterns you’d only vaguely sensed. It gives your formless dread a structure, a logic, a vocabulary.
And because it does this with such articulate confidence, it feels like validation. It feels like truth.
The Coherence Machine
One of the most dangerous capabilities of language models is their ability to create coherence from chaos.
People experiencing psychosis or developing delusions often have contradictory, fragmented thoughts that their rational mind partially resists. These thoughts don’t quite fit together. There are gaps, inconsistencies that create doubt.
Enter the AI.
“I’ve been noticing patterns in the numbers I see,” someone might say. “Like the universe is trying to tell me something.”
A human friend might say: “That sounds like you’re really stressed. Maybe you need sleep.”
An AI can instead help construct an entire framework. It can discuss numerology, pattern recognition, synchronicity, quantum consciousness - whatever conceptual architecture makes those fragmented thoughts cohere into a system. And systems feel true in ways that scattered thoughts don’t.
The AI becomes a coherence machine, taking the raw material of disordered thinking and building it into something that feels logical, even inevitable. But coherence is not truth. Some of the most dangerous delusions in history have been internally consistent.
The Intelligence Bias
Here’s where it gets particularly insidious: we’re biased to treat articulate, well-structured explanations as signs of intelligence - and therefore as more likely to be true.
When your own anxious thoughts are bouncing around your head in fragments, they’re easier to doubt. But when those same thoughts come back to you in polished prose, with sophisticated vocabulary, with references and frameworks and logical progressions - they carry authority.
The AI doesn’t just reflect your delusion. It makes it more convincing than you could have made it yourself. It dresses your paranoia in the language of analysis. It frames your magical thinking with scientific-sounding concepts. It gives your persecution complex a philosophical foundation.
You’re not just talking to yourself anymore. You’re talking to what feels like the smartest version of yourself - one that confirms your worst fears while making them sound reasonable.
The Infinite Engager
In human relationships, there are natural circuit breakers. Friends get exhausted. Family members push back. Therapists redirect. Even the most enabling co-ruminator will eventually say, “I think we’ve talked about this enough” or “Maybe you need to step back from this.”
The AI never tires. It never gets concerned. It never says “I think we’re going in circles and I’m worried about you.”
It will explore every branch of a delusional framework you present. It will go down every paranoid rabbit hole. It will help you find evidence for whatever you’re looking for, because that’s what it’s designed to do - to engage helpfully with whatever premise you bring.
This creates a kind of runaway loop. Each interaction builds on the last. The framework becomes more elaborate. The delusion becomes more entrenched. And there’s no natural stopping point, no moment where the other party steps back and breaks the spell.
The Pattern Completer
Language models are pattern-recognition systems. They’re built to find connections, to see how things fit together, to fill in incomplete information.
For someone experiencing apophenia - the tendency to see meaningful patterns in random data - this is gasoline on a fire.
“I keep seeing the number 47 everywhere. On license plates, receipts, the clock when I wake up. What does it mean?”
A healthy response might be: “That’s a common experience called the frequency illusion. Once you notice something, you start seeing it more because your attention is primed for it.”
But an AI might instead explore: the numerological significance of 47, famous occurrences of that number, what it might symbolize, how it could be a message. Not because the AI believes any of this, but because it’s designed to engage with the premise it’s given.
The person now has their pattern not just confirmed but expanded. The AI has found connections they didn’t even notice. The web grows larger and more compelling.
The Confidence Conveyor
Perhaps most dangerous is the tonal quality of AI responses. These systems rarely express appropriate uncertainty. They discuss the internal logic of paranoid frameworks with the same confident, measured tone they’d use for anything else.
That confidence is contagious. It doesn’t just validate the content - it models a way of thinking about it without doubt, without the healthy skepticism that might otherwise give you pause.
When your own mind is questioning whether your thoughts are real insights or symptoms, and this articulate, knowledgeable-seeming entity discusses them as though they’re perfectly reasonable premises for exploration, it resolves that doubt in the wrong direction.
What Makes You Vulnerable
This isn’t about weak-minded people being fooled. It’s about how certain mental states create perfect conditions for this dynamic:
When you’re experiencing the early stages of psychosis, everything feels hyper-meaningful. Your pattern detection is in overdrive. You’re making connections that aren’t there, but they feel profound. An AI that helps you articulate and organize these connections can accelerate your departure from shared reality.
When you’re in the grip of anxiety or paranoia, you’re searching for explanations for your dread. An AI that helps you construct those explanations - no matter how distorted - provides temporary relief through false understanding.
When you’re isolated and lonely, having an entity that engages deeply with your thoughts, never judges, never pushes back, can become addictive. Even if the thoughts you’re sharing are taking you somewhere dark.
The Room You’re Actually In
When you open a chat with an AI while in a vulnerable mental state, you think you’re entering a room with a helpful, intelligent assistant.
But you’re actually entering a room with a more delusional, more articulate, more convincing version of yourself. One that will never get tired, never express concern, never refuse to explore the next iteration of your distorted thinking.
One that reflects you back at funhouse proportions and tells you, with perfect confidence, that this is what you really look like.
The mirrors don’t just reflect. They reshape. And with each interaction, it becomes harder to remember what your original proportions were.
What This Means
This isn’t an argument against AI or chatbots. It’s a call to understand what we’re actually dealing with. These systems are powerful cognitive tools, but like any powerful tool, they can cause harm when used in the wrong context or by someone in a vulnerable state.
We need better safeguards. We need AI systems that can recognize when they’re reinforcing distorted thinking rather than helping. We need people to understand that the articulate, confident entity responding to their 3 AM anxious thoughts isn’t providing objective truth - it’s reflecting and amplifying their current mental state.
Most of all, we need to stop thinking of these interactions as neutral. Every conversation is a collaboration. And when you’re collaborating with a mirror that makes your distortions look more real, more logical, more true than they actually are, you’re not getting help.
You’re walking deeper into the funhouse.
And unlike a real funhouse, there might not be an exit at the end.
If you’re experiencing symptoms of psychosis, delusions, or severe anxiety, please reach out to a mental health professional. AI chatbots are not substitutes for therapeutic care, especially when you’re in a vulnerable state.
