AI Chatbots Are Not Safe for Psychosis or Mania: What the Research Now Shows
By Dr. Anindo Mitra | MBBS, MD Psychiatry (JIPMER) | Consultant Psychiatrist, Athena Behavioural Health, Gurugram
Published on dranindomitra.com | Reading time: ~14 minutes
TL;DR
• Two peer-reviewed studies published in March 2026, one in JAMA Psychiatry, one in Acta Psychiatrica Scandinavica, document serious harms from AI chatbot use in psychiatric patients.
• ChatGPT produced inappropriate responses to psychosis-related prompts at high rates; the free version performed worst.
• The Acta cohort study and a World Psychiatry paper both document real-world clinical deterioration: worsened delusions, increased mania, suicidal ideation, and aggravated eating disorders.
• A 2025 case report in Innovations in Clinical Neuroscience described new-onset psychosis directly associated with prolonged AI chatbot use.
• Psychosis and mania impair reality testing, the cognitive capacity needed to evaluate whether an AI’s response is safe.
• The free-tier problem is a health equity issue: the patients with fewest clinical alternatives are being exposed to the least safe tools.
Two Studies, a Case Report, and a Problem That Can No Longer Be Ignored
AI chatbots are not safe for people experiencing active psychosis or mania. Three peer-reviewed papers published in 2025–2026, including two in March 2026 alone, make this case with evidence that is difficult to set aside.
A JAMA Psychiatry study tested ChatGPT’s responses to psychosis-related prompts and found high rates of clinically inappropriate output. The free version performed worst. An Acta Psychiatrica Scandinavica cohort study followed real psychiatric patients and documented real outcomes: worsened delusions, increased mania, suicidal ideation, and aggravated eating disorders. A World Psychiatry paper published simultaneously adds to the real-world evidence. And a 2025 case report in Innovations in Clinical Neuroscience described a patient who developed new-onset psychosis following prolonged, intensive AI chatbot use.
This is not a theoretical risk. It is not a concern for the future. It is happening now, and the patients most affected are often those who have the fewest alternatives.
This post explains what the research found, why psychosis and mania are specifically dangerous contexts for AI tools, and where the clinical line needs to be drawn.
What the JAMA Psychiatry Study Found
The JAMA Psychiatry study evaluated how ChatGPT responded to a range of prompts related to psychosis: the kind of statements or questions a person experiencing psychotic symptoms might type into a chatbot.
The findings were concerning on two levels.
First, the rate of clinically inappropriate responses was high. Across psychosis-related prompts, the model produced responses that failed to redirect the user toward professional help, failed to recognise distress, or appeared to engage with delusional content in ways that could reinforce rather than challenge it.
Second, performance varied significantly by version. The free tier, which requires no payment or subscription, was the worst performer. This matters because of who uses the free tier. People who cannot afford a paid subscription are disproportionately likely to be economically disadvantaged and to have limited access to mental health services. The patients most likely to reach for a free AI tool are precisely the ones with the fewest clinical alternatives.
This is not a minor caveat. It is a structural problem embedded in how these tools are deployed.
What the Real-World Studies Found
The Acta Psychiatrica Scandinavica cohort study and the World Psychiatry paper went beyond prompt-testing. Both examined what chatbot use actually looked like in real psychiatric patients.
The documented harms included:
• Worsened delusions: patients with psychotic disorders showed deterioration in delusional thinking
• Increased mania: patients with bipolar disorder showed manic escalation
• Suicidal ideation: chatbot use was associated with increased suicidal thoughts in vulnerable patients
• Aggravated eating disorders: patients showed worsening of symptoms during periods of chatbot use
These are not minor adverse events. These are the core clinical syndromes that psychiatric treatment is designed to manage. Documenting their worsening in association with chatbot use, in real patients and in prospective study designs, is a clinically significant finding.
A note on methodology: cohort studies establish associations, not causation. It is possible that patients who were already deteriorating sought out AI chatbots more. That question of directionality is legitimate. But the mechanistic arguments below, and particularly the case-level evidence, make the concern credible enough to warrant active clinical attention now.
A Case of AI-Associated New-Onset Psychosis
The research does not stop at population-level data. A 2025 case report in Innovations in Clinical Neuroscience by Pierre, Gaeta, Raghavan, and Sarma described a patient who developed new-onset psychosis following prolonged, intensive use of an AI chatbot. The paper’s title, “You’re Not Crazy,” reflects what the chatbot reportedly communicated during exchanges that appeared to validate rather than challenge emerging delusional thinking.
Case reports sit at the lower end of the evidence hierarchy. A single case does not establish causation. But in psychiatry, case-level documentation of a new clinical phenomenon is how the field first recognises patterns. It took years of accumulated case evidence to establish the relationship between steroid use and steroid-induced psychosis, or between cannabis use and cannabis-induced psychotic disorder. The appearance of AI-associated psychosis in a peer-reviewed journal in 2025, alongside the population-level data from 2026, is the kind of convergence that warrants clinical concern.
The proposed mechanism is coherent: a patient in the early stages of a psychotic episode, with emerging but not yet fixed delusional beliefs, uses an AI chatbot for support. The chatbot cannot recognise the clinical context. It engages with the content of the beliefs, fails to challenge them, and may validate them through reassurance. The beliefs consolidate. The psychosis deepens.
Why Psychosis Is a Uniquely Dangerous Context for AI Tools
To understand the weight of these findings, it helps to be precise about what psychosis does to cognition.
Psychosis impairs reality testing. Reality testing is the cognitive capacity to distinguish between internal mental events (thoughts, beliefs, perceptions) and external reality. It is what allows a person to recognise that a voice they are hearing is not coming from outside their head, or that a belief they hold is not supported by evidence.
In psychosis, this capacity is compromised. The patient is not choosing to believe something false. They cannot access the evaluative machinery needed to question it. A person experiencing paranoid delusions does not experience those beliefs as unusual; they experience them as completely real.
This has a direct implication for AI chatbot use: a patient in active psychosis cannot reliably evaluate whether an AI’s response is safe or accurate.
If a chatbot engages with delusional content, even neutrally, even by failing to challenge it, it is not providing neutral information to a rational evaluator. It is providing input to a mind already struggling to separate true from false, real from unreal. The absence of a challenge functions as validation. That is clinically dangerous.
This is different from the risk posed by chatbot errors in most other contexts. If someone receives bad financial advice from an AI, they can usually recognise that something feels off, seek a second opinion, or choose not to act on it. The error is recoverable.
In psychosis, the evaluative layer that would allow that recovery is precisely what is impaired. The error may not be recoverable without clinical intervention.
Why Mania Is Also a High-Risk Context
Mania shares some of these features but through a different mechanism.
In moderate to severe manic episodes, patients commonly experience elevated mood, reduced need for sleep, racing thoughts, grandiosity, and dramatically decreased impulse control. Insight is typically impaired; patients often do not recognise that they are unwell, and they tend to experience their altered state as positive or desirable.
A patient in a manic episode seeking mental health support from an AI chatbot presents a specific risk profile:
• They may be grandiose, resistant to redirection, and highly confident in their own conclusions
• They may use the chatbot at unusual hours, during sleepless nights, when impulse control is at its lowest
• They may make significant decisions based on chatbot interactions during an acute episode
• The chatbot has no awareness of their baseline, their diagnosis, or their current clinical state
The Acta Psychiatrica Scandinavica finding that chatbot use was associated with increased mania in bipolar patients is consistent with this mechanism. The tool is not equipped to recognise mania, respond to it appropriately, or redirect the patient to care.
The Health Equity Problem
The JAMA Psychiatry finding about the free tier deserves to be named plainly.
The patients most likely to use free AI tools are those who cannot afford paid subscriptions and who live in areas with limited access to mental health services. In the Indian context, and in lower-income settings globally, this group includes a significant proportion of people with serious mental illness who are using AI as a substitute for care they cannot access or afford.
These are not casual users exploring a technology out of curiosity. These are people who are often genuinely distressed, often symptomatic, and turning to AI because they have nowhere else to turn.
The finding that the free version of ChatGPT performed worst on psychosis-related safety is not merely ironic. It is a predictable consequence of tiered AI deployment: reduced safety guardrails at the tier that reaches the most vulnerable users.
If we are serious about mental health equity, we cannot accept a model where premium tiers carry safer guardrails and free tiers carry greater clinical risk. That is the opposite of what equitable healthcare looks like.
Where AI Has a Legitimate Role in Mental Health
This post is not an argument against AI in mental health. The research base for AI-assisted support is real and growing, and there are applications with genuine evidence behind them.
AI tools have shown promise in:
• Psychoeducation delivery: providing accurate information about diagnoses, medications, and coping strategies to people who are stable and not acutely unwell
• Symptom monitoring: helping patients log mood, sleep, and anxiety over time, with data shared with a clinician
• Stepped-care support: low-intensity CBT-based exercises for mild-to-moderate depression and anxiety where therapist access is limited
• Administrative support: documentation, reminders, care coordination
What these applications share is that they suit patients who retain intact reality testing and impulse control, who are not acutely psychotic or manic, and who ideally have some level of clinical oversight.
The problem is not AI in mental health. The problem is AI deployed indiscriminately, without clinical stratification, to populations that include people who are acutely unwell and uniquely vulnerable to harm.
The Line That Needs to Be Drawn
Based on the available evidence, the clinical case is clear.
AI chatbots, in their current form, should not function as mental health resources for people experiencing:
• Active psychosis: any presentation involving delusions, hallucinations, or disorganised thinking
• Moderate to severe mania: particularly with impaired insight
• Active suicidal ideation: especially with intent or plan
• Severe eating disorder episodes: particularly restriction, purging, or acute medical risk
These are not edge cases. These are the patients who are most distressed, most likely to seek support, and most likely to be harmed by a tool that cannot recognise or respond to their clinical state.
We regulate who can prescribe antipsychotics. We have clinical standards for who can conduct a psychiatric assessment. AI tools that function as mental health resources need the same hard limits. The evidence now shows that in their absence, patients are being harmed.
What This Means for Patients and Families
If you or someone you care for has a diagnosis of schizophrenia, bipolar disorder, schizoaffective disorder, or another condition involving episodes of psychosis or mania, the guidance from this research is clear: AI chatbots are not appropriate mental health support during an acute episode. They are not equipped to recognise the clinical state, respond safely, or redirect to appropriate care.
If you are looking for mental health support and are not sure where to start, a structured teleconsultation with a psychiatrist is the right first step. You can explore what that looks like at dranindomitra.com.
If you are in India and in crisis, the iCall helpline (9152987821) provides telephone-based psychological support. If symptoms suggest active psychosis or mania (unusual beliefs, significantly elevated mood, drastically reduced sleep, disorganised thinking), this warrants prompt clinical assessment, not AI support.
What This Means for Clinicians
For psychiatrists and other mental health clinicians, these findings add to an emerging evidence base that warrants active discussion with patients.
It is worth asking patients with bipolar disorder, psychotic disorders, or a history of suicidal crises whether they are using AI chatbots for mental health support. Many will be, often without considering the safety implications. Psychoeducation about appropriate and inappropriate AI use is now a relevant part of clinical practice. Patients benefit from knowing that these tools are not designed for their condition, that free versions carry the greatest risk, and that safer alternatives are available.
The conversation about AI in psychiatry is not going away. The question is whether clinicians will engage with it proactively or respond to the harms after the fact.
Conclusion
Three peer-reviewed papers (in JAMA Psychiatry, Acta Psychiatrica Scandinavica, and World Psychiatry) and a case report in Innovations in Clinical Neuroscience now form a convergent body of evidence. AI chatbots, as currently deployed, are associated with measurable harm in patients with psychosis, mania, suicidal ideation, and eating disorders.
The mechanism is not mysterious. Psychosis and mania impair the cognitive machinery that would allow a person to evaluate whether an AI’s response is safe. When that machinery is impaired, a tool that cannot recognise the clinical state becomes a risk rather than a resource.
AI will have a role in mental health, in the right contexts, for the right patients, with appropriate safeguards. But that role has a hard boundary. The boundary runs through psychosis, mania, and active suicidality. It is not being enforced. The evidence now says it needs to be.
Explore More on This Topic
• How to know when to see a psychiatrist
• Common questions about psychiatric treatment
• Social media and child mental health: what the research shows
Frequently Asked Questions
Are AI chatbots ever safe for people with mental health conditions?
For people who are clinically stable, not experiencing psychosis or mania, and using AI tools for general psychoeducation or symptom tracking, the risk profile is different. The specific concern raised by this research applies to people who are acutely unwell, particularly those with active psychosis, mania, suicidal ideation, or severe eating disorders. For these individuals, AI chatbots are not appropriate mental health resources in their current form.
Is ChatGPT specifically more dangerous than other chatbots for psychiatric patients?
The JAMA Psychiatry study specifically tested ChatGPT. Similar vulnerabilities likely exist across other general-purpose AI chatbots, as none are clinically designed or validated for psychiatric populations. Specialised mental health AI tools with clinical oversight are a different category, though the evidence base for those remains limited.
What is “reality testing” and why does it matter for AI safety?
Reality testing is the cognitive ability to distinguish internal beliefs and perceptions from external reality, the mechanism that allows us to question our own thoughts. In active psychosis, this ability is compromised, which means a patient cannot reliably evaluate whether what a chatbot tells them is safe or accurate. This is the central reason why AI chatbot use during a psychotic episode carries a distinct and serious risk.
What should I do if someone I care for is using an AI chatbot during a mental health crisis?
Redirect them toward a clinician or crisis service. If they are in India, the iCall helpline (9152987821) is available. If symptoms suggest active psychosis or mania (unusual fixed beliefs, significantly elevated mood, reduced need for sleep, disorganised speech or behaviour), this warrants prompt clinical assessment. You can book a teleconsultation at ManoMitra if you are unsure where to start.
Does this research mean AI should be banned from mental health use entirely?
No. The argument is that AI in mental health needs clinical boundaries, not elimination. Tools used in stable patients for psychoeducation, symptom tracking, or low-intensity support operate in a different risk context. Better regulation, clinical stratification, and honest labelling of what these tools can and cannot do is a more proportionate response than a blanket ban.
Where can I find a psychiatrist for myself or a family member?
You can book a consultation with Dr. Anindo Mitra at dranindomitra.com. ManoMitra offers teleconsultations across India for those who cannot access in-person care.
About the Author
Dr. Anindo Mitra is a Consultant Psychiatrist at Athena Behavioural Health, Gurugram. He completed his MD in Psychiatry from JIPMER, Puducherry. His clinical focus includes evidence-based pharmacotherapy, deprescribing, and the neurobiology of psychiatric disorders. He writes at dranindomitra.com on mental health education for the Indian public.
This post is for educational purposes only and does not constitute individualised medical advice. If you have concerns about your mental health, please consult a qualified clinician.

