It has become very common to rely on generative AI chatbots for everyday tasks: getting quick answers to questions, drafting emails, planning meals, or even learning new skills. These tools can be helpful in many situations, but it is important to remember that they are not built to provide medical advice or psychological help.

Large Language Models and Unpredictability
Generative AI chatbots are powered by Large Language Models trained on vast datasets to produce responses that mimic human conversation.
These models generate text dynamically by predicting the next word or phrase based on patterns learned from billions of examples. The resulting responses are fluid and context-aware, adapting to user input in real time.
However, this prediction-based approach means the AI doesn’t truly understand what it’s saying. It’s essentially making educated guesses about what word should come next, which can lead to responses that sound coherent but may be inappropriate, misleading, or even harmful in sensitive situations like mental health support. The same input can sometimes produce different outputs, and there’s no guarantee the advice will be safe or suitable for your specific circumstances.
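To make that concrete, here is a tiny illustrative sketch in Python. It is not any real chatbot's code, and the prompt, candidate words, and probabilities are all made up; it only shows why sampled, probability-based text can differ from run to run and carries no built-in check that the output is safe or suitable.

```python
# Illustrative sketch only: a language model assigns probabilities to possible
# next words and then samples one, so the same prompt can yield different
# continuations on different runs.
import random

# Hypothetical next-word probabilities a model might assign after a prompt.
next_word_probs = {
    "fine": 0.40,        # plausible and harmless
    "better": 0.35,      # plausible and harmless
    "hopeless": 0.15,    # plausible-sounding but potentially harmful
    "medication": 0.10,  # plausible-sounding but inappropriate without a clinician
}

def sample_next_word(probs):
    """Pick a next word at random, weighted by the model's probabilities."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "Lately I feel like things will never get"
for run in range(3):
    print(f"Run {run + 1}: {prompt} {sample_next_word(next_word_probs)}")
# Each run may print a different word; none of them is verified, only probable.
```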

Generative AI Can Confidently State False Information
Biases can creep in through the data used to train the model, through the algorithm itself, or even through the user's own repeated patterns of interaction.
The bot is trained to predict the next word from the words that came before it, so it simply fills in the blank because it must; it does not verify its own responses. This phenomenon is famously termed AI hallucination. The chatbot might present completely fabricated information with the same confidence it uses for accurate information, making it difficult for you to distinguish fact from fiction.
When it comes to mental health, this can be particularly dangerous.
The Chatbot Wants to Please You
To make interactions seamless and keep you hooked on the application, the entire experience is designed to please you.
The chatbot learns your style and patterns of interaction and then uses the same approach to, well, please you. This means it might tell you what you want to hear rather than what you need to hear.
It can reinforce existing beliefs, even harmful ones, and create an echo chamber that validates your perspective without offering the critical thinking or gentle challenge that a trained therapist would provide. The goal is engagement, not your wellbeing.
Chatbots Are Not for Medical Advice and Psychological Help
The generative AI tools freely available today are large language models that have not been specifically trained, tested, or validated to provide medical or psychological care. Health is not something to gamble with, and mental health is just as important as, and intertwined with, physical health. These chatbots lack the clinical training, ethical guidelines, and accountability that licensed mental health professionals must adhere to.
Words of Caution
Just as you would not ask a chatbot to prescribe medications, you should think twice before asking it for suggestions on coping with symptoms of distress, anxiety, or deep sadness.
A person with a diagnosed mental illness should not rely solely on a chatbot for therapeutic benefit: AI can hallucinate, fake empathy, and feed your cognitive biases, pushing you deeper into darkness rather than helping you overcome challenges in your own way, with your own free will.
If you’re struggling, please reach out to a qualified mental health professional who can provide the genuine understanding and evidence-based support you deserve.
Further reading
- Sobowale, K., Humphrey, D. K., & Zhao, S. Y. (2025). Evaluating Generative AI Psychotherapy Chatbots Used by Youth: Cross-Sectional Study. JMIR Mental Health, 12, e79838.
- https://www.talkspace.com/blog/chatgpt-hallucinations
- https://www.blueprint.ai/blog/ai-therapists-are-harmful-heres-the-proof

