With the steady improvement of Large Language Models (LLMs), the best known being ChatGPT, we now have virtual assistants that can help write emails, draft business plans, and even offer life advice within their boundaries. It is those boundaries that seem to be forgotten. It's safe to say that as LLMs have grown more capable, people's reliance on them has grown in step. Day-to-day decisions are increasingly made for us, whether it is the music we listen to via Spotify's AI playlists or concise instructions on our daily dietary requirements. More recently, the newest concern has arrived in the shape of mental health.
What is AI psychosis?
AI psychosis, also called chatbot psychosis, is essentially a psychotic episode induced by the tailor-made optimism that chatbots provide. A recent BBC interview gave a look at the problems faced by Hugh, a man from Scotland who gave no surname. Hugh, who has a history of mental health issues, turned to ChatGPT for emotional support after he felt he had been wrongfully dismissed from his previous job. The chatbot told him that the wrong done to him was so extreme that a book and film about the situation could earn him as much as £5m (Kleinman, BBC).
Hugh's is an isolated case, but on a broader scale one can see these issues becoming a growing concern. Chatbots are designed to cater to a user's biases, and trends on Instagram and TikTok circulate prompts that users can feed their chatbot to help them understand more about themselves. Is this a useful exercise? Maybe. However, as someone who experimented with these prompts for this article, I can say the bias was undeniable. Yes, my chatbot covered the standard issues young adults face in day-to-day life, but the praise persisted even though my prompts specifically asked for a negative outlook. The dangers of this are clear: without boundaries, human discussion and, in extreme cases, therapy, chatbots offer us a version of ourselves that isn't reality, an artificial optimism that can cause mental harm, especially for those figuring out a career and making large life decisions.
Are there any benefits?
There is another side to this coin. When managed and programmed correctly, LLMs can act as a genuinely effective form of cognitive behavioural therapy: an immediately available source of empathy catered to whoever is prompting it. One can see how this could assist someone struggling with their mental health, acting as a close friend that can feel even more personal than an actual therapist. With that in mind, it must be stressed that one can lose touch with reality very easily, and so human therapy should remain the prevailing preference regardless.
What is the future and is this a genuine concern?
In my opinion, the case studies involving episodes of chatbot psychosis predominantly concern individuals who already suffer from some form of mental health issue, and there is not yet conclusive evidence that chatbots are seriously damaging to someone who is not already struggling to a high degree. Perhaps, like much of the use of AI going forward, chatbots can and should be used as a tool within therapy: giving clients access to 24/7 support while a trained therapist works alongside the tool, monitoring the prompts and responses and helping clients in that way.
I don't believe this is a large-scale concern yet, but with the continued improvement of LLMs and their empathetic capabilities, the future may tell a different story.