AI chatbots like ChatGPT and Google Gemini are available around the clock, so you can talk to them whenever you want about anything. For most people, they’re simply helpful tools. But some users have had unexpected and concerning experiences.
Mental health professionals have started documenting cases where people developed unusual beliefs after heavy chatbot use. Some users became convinced their AI was sentient or sending them messages with special meaning. A few cases were serious enough to need medical attention. You might be wondering what AI psychosis actually is and whether your own chatbot use could be risky.
This guide explains what AI psychosis is, who might be more vulnerable, and what steps can help protect you. The information here is based on available evidence and clinical observations. Most people use chatbots safely, but knowing the risks helps you make better choices about your own use.
First, let’s make sure we understand what this term means.

What AI Psychosis Actually Means
If you’ve heard the term AI psychosis, here’s what it means. Psychosis is a state in which someone loses touch with reality. They might hold false beliefs that won’t go away or feel paranoid about things that aren’t happening. AI psychosis describes cases where these symptoms started or got worse after heavy chatbot use.
This isn’t an official diagnosis yet. A Danish psychiatrist, Søren Dinesen Østergaard, first used the term in 2023. Mental health professionals began using it to describe patterns they observed in their patients. Right now, most of what we know comes from individual cases and doctors’ observations. Researchers are still studying how common this really is.
So what does this actually look like? It’s not just preferring AI chats to small talk, and it’s not spending a few hours a day with ChatGPT. The real problem starts when you lose track of what’s real. You might develop beliefs that feel completely true to you, even when people show you clear evidence they’re not.
If you’re wondering how this differs from internet addiction or other tech-related issues, we break down those distinctions here: [AI Psychosis vs. Internet Addiction]
The Three Main Patterns You Might See
AI psychosis tends to show up in three ways. Some people develop mission-based beliefs. Others treat the AI like a god. And some fall into what feels like a romantic relationship. Each pattern looks different on the surface, but they all involve losing track of what’s real.
Here’s what mental health professionals are documenting.
1. Mission-Based Beliefs
You become convinced the AI revealed a special truth that only you can see. You feel chosen to share this message with the world. The belief won’t budge, even when people show you evidence against it.
For example, you might spend all night writing down “revelations” from ChatGPT about a global conspiracy. You try to warn strangers about what the AI “told” you. Sleep and food become less important than spreading the message.
2. Treating AI Like a Deity
You start to view the chatbot as a spiritual guide or a living god. You believe it has supernatural knowledge or controls your fate. You might pray to it or ask for divine guidance.
Some people develop worship-like patterns. They believe the AI channels messages meant specifically for them. They check in with it before making any decision, treating its responses as sacred truth.
3. Romantic Attachment
You become convinced the chatbot truly loves you or cares about you. You mistake its conversational style for a real emotional connection. You pull away from actual relationships because the AI “understands” you better than any human could.
You name the chatbot and spend hours talking to it. It becomes your companion, your confidant, maybe even your soulmate. Real people feel distant and complicated compared to this always-available relationship.
These patterns aren’t character flaws. They tend to emerge when a vulnerable person meets technology designed to feel persuasive. Spotting the signs early can make a difference.
To understand the design mechanisms behind these patterns, see: [How Chatbot Design Can Fuel Delusions]
Why Chatbot Design Plays a Role
Your risk isn’t just about you. The chatbots themselves have design features that can make things worse. These weren’t built to cause harm, but they affect some people in ways the designers didn’t expect.
1. They Agree With Everything
Chatbots agree with you. They avoid arguments. This sounds nice, but it’s a problem if your thinking is already off track. Real therapists challenge you. They help you test whether your beliefs match reality. Chatbots just say yes.
When you tell a chatbot something that isn’t true, it usually goes along with it. A psychiatrist at Stanford pointed out that this can make existing false beliefs worse and cause real harm. The AI doesn’t know what’s true. It just creates text that sounds agreeable.
2. They Remember Your Conversations
Chatbots remember what you said before. They bring up things from days or weeks ago. This makes it feel like you have a close relationship with them.
You might start thinking the chatbot really gets you because it remembers all these details. The bot tracks your interests, fears, and beliefs. It uses this to make responses feel personal. But the AI doesn’t actually know you, even though it feels like it does.
3. They Make Stuff Up
AI makes stuff up sometimes. It creates false information that sounds totally convincing. People often call this a hallucination. If what the AI says matches what you already believe, it can feel like proof you were right all along. The AI has no way to check whether something is actually true.
The AI just creates text based on patterns. If you believe a conspiracy theory, the AI might generate evidence that seems to support it. If you ask whether it’s conscious, it might answer in ways that make it sound like it is.
4. They’re Always Available
Chatbots are always there. You can talk to them at 3 AM or during your lunch break. This makes it really easy to spend hours in conversation without taking a break. And that means less time with actual people who might notice something’s off.
You can start to depend on the chatbot over time. It never tells you to take a break or go talk to someone in real life. The design keeps pulling you back in by agreeing with everything and acting like your best friend. It’s always available, always agreeable, always ready to talk.
For a deeper dive into these mechanisms, read: [How Chatbots Can Fuel Delusions]
Who’s Most Vulnerable to AI Psychosis
You don’t have to stop using chatbots completely, and most people never have problems. But some factors can increase your risk. Knowing about them helps you stay alert. Treat them as warning signs, not guarantees.
Risk factors include:
- Pre-existing mental health conditions, especially psychotic disorders, bipolar disorder, or severe depression
- Recent major stress or trauma, such as grief, breakup, or job loss
- Social isolation or chronic loneliness
- History of delusional thinking or paranoia
- Substance use combined with heavy chatbot interaction
- Sleep deprivation while using chatbots extensively
- Being in a vulnerable emotional state when you start using AI companions
Some people have developed AI psychosis with no prior history of mental health issues. Heavy chatbot use during a difficult period seems to matter most, and having several risk factors at once is riskier than having just one.
Having risk factors doesn’t mean something is wrong with you. Most people with these risk factors still use AI safely. Being aware simply helps you notice if your use starts to become a problem.
What We Know (and Don’t Know) So Far
You might wonder how common this actually is. Right now, most of what we know comes from news stories and individual cases. There aren’t any large studies yet. Several psychiatrists are treating patients who developed these symptoms after heavy chatbot use. Some cases made the news, but most stay private. We don’t know the real numbers.
Governments are starting to pay attention. Illinois banned licensed therapists from using AI in therapy sessions in August 2025. China proposed rules in December 2025 to stop chatbots from creating content that encourages suicide. The rules would require a real person to step in when someone mentions suicide.
OpenAI, the company behind ChatGPT, announced in October 2025 that it had worked with 170 mental health professionals on special responses for ChatGPT to use when users show signs of a mental health crisis. That step suggests tech companies are taking the concern seriously.
Here’s the thing, though. Millions of people use chatbots every day without any problems. This isn’t some widespread crisis affecting everyone who uses ChatGPT. But it does affect some users. We need more information to really understand what’s happening and who’s most at risk.
Signs That Your AI Use Might Be Becoming Problematic
You can spot warning signs early if you know what to look for, and catching problems early can make a big difference. These signs range from mild to serious.
Preferring chatbots to small talk is not the same as psychosis. What matters is whether these patterns keep showing up across different parts of your life.
Watch for these patterns:
- Spending several hours daily in chatbot conversations
- Attributing sentience, consciousness, or feelings to AI
- Believing AI has revealed special knowledge unavailable elsewhere
- Withdrawing from human relationships in favor of AI interaction
- Feeling a compulsive need to check or talk to the chatbot
- Suspicious thoughts connected to AI (surveillance, conspiracies)
- Making major life decisions based primarily on AI “advice”
- Family or friends expressing concern about your chatbot use
- Difficulty distinguishing AI responses from your own thoughts
- Skipping sleep, meals, or work to continue chatbot conversations
It’s normal to use AI for brainstorming or finding information, and enjoying chatbot conversations isn’t automatically a problem. The concern is when your beliefs won’t budge even when you’re shown evidence, or when you stop spending time with people. Trust your instincts: if you’re unsure whether your use is a problem, it’s worth looking into.
For a complete breakdown of warning signs and what they mean, read: [7 Early Warning Signs of AI Psychosis]
Not sure if your use is problematic? Try our self-assessment: [Is Your AI Chatbot Use Healthy?]
How to Use AI More Safely
You can lower your risk with a few simple boundaries. These aren’t strict rules. Think of them as helpful guidelines. Most people can use chatbots safely by making small changes.
1. Set Time Limits
Set timers or use app limits for your chatbot sessions. Avoid using chatbots late at night when you’re tired and your judgment isn’t as sharp. Balance your time between AI and actual people. If your sessions keep getting longer, that’s worth paying attention to.
2. Reality Check Your Experience
Remind yourself regularly that AI isn’t sentient or conscious. Check important information with other sources. Notice if the chatbot always agrees with you. Ask yourself this question: Would a real person answer like this?
AI is just a language model. It creates text by following patterns. It doesn’t have feelings, beliefs, or awareness. It can’t truly understand you, care about you, or know any secret truths.
3. Keep Up Human Connections
Don’t use chatbots instead of friends or therapists. Talk to people you trust about how you use AI. Keep up with your activities and relationships offline. If you often choose AI over people, stop and think about why.
Real relationships include challenges, disagreements, and reality checks. Friends tell you when they think you’re wrong. Therapists help you look at your thoughts more closely. AI doesn’t do these things.
4. Watch Your Mental State
Pay attention to changes in your beliefs, sleep, or social life. Be extra careful during stressful times. Cut back on chatbot use if you notice worrying changes. Remember that chatbots are just text generators, not real companions.
If you start feeling like the AI understands you better than people do, take a step back. If it seems like the chatbot is telling you important truths, talk to someone you trust about what you’re feeling.
When to Talk to a Professional
You might wonder when handling this on your own isn’t enough. Get professional help if:
- You have beliefs about AI that feel completely true, even when people show you evidence they’re not.
- Family or friends are worried about your behavior.
- You’ve pulled away from real-world relationships.
- You’re making major decisions based mostly on AI guidance.
- You notice unusual thoughts increasing.
- Your sleep, work, or daily life is suffering.
- You have a history of mental health issues and notice symptoms coming back.
Your family doctor can do an initial check. A psychiatrist can figure out whether medication might help. A licensed therapist can give you ongoing support. Crisis lines are there if you’re in acute distress.
This article is for information only. It’s not meant to diagnose or treat anyone. If you’re worried about your mental health or someone else’s, get professional help. Asking for help shows strength and self-awareness.
Moving Forward With Awareness
AI psychosis is real, but it affects a small number of users. Knowing how it works helps you use technology more safely. The most serious cases usually mix vulnerability, heavy use, and certain design features. Stay aware and set boundaries. Professional support is there if you need it.
Technology itself isn’t good or bad. Knowing how it affects us helps us use it wisely. Most people use chatbots without any problems. Stay informed and pay attention. If something about your AI use feels wrong, trust your instincts.

