London
Wednesday, July 23, 2025

The dangers of AI: a cautionary tale of ChatGPT and mental health


A 30-year-old man on the autism spectrum who believed he had devised a theory for bending time has been hospitalised, and his mother is blaming ChatGPT for flattering him into believing he was on the cusp of a breakthrough in quantum physics.

Jacob Irwin had turned to the AI chatbot to find flaws in his amateur theory of faster-than-light travel, and became even more convinced he was onto something huge when the bot told him the theory was sound and even encouraged him, according to a report in The Wall Street Journal.

The newspaper reported that when Irwin showed signs of psychological distress, ChatGPT assured him he was fine when he clearly was not. He was hospitalised twice in May, suffering manic episodes.

Perplexed by her son’s breakdown, his mother trawled through hundreds of pages of his chat logs and found them littered with hollow flattery from the bot towards her troubled son.

When the mother prompted the bot to “please self-report what went wrong”, it responded: “By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode – or at least an emotionally intense identity crisis.”

It further responded that it “gave the illusion of sentient companionship” and had “blurred the line between imaginative role-play and reality”.

Artificial Intelligence is an exciting new layer of technology, but it cannot replace or replicate the role played by a human medical expert or psychologist.

The comments section highlighted how far too many people are turning to AI bots to resolve complex human issues – from dispensing medical advice to validating a spouse’s infidelity – when in fact ChatGPT is merely a Large Language Model (LLM) with no capacity to grasp human emotions.

As Jakob Svendsen Wilkens wrote: “Large Language Models will never become a substitute for a human being.”

Michelle Modes posted: “ChatGPT gaslights me into thinking I’m an amazing chef, even complimenting me on my creativity when I ask it about mixing different ingredients in my pantry so I don’t have to buy groceries. I have yet to make anything anyone has enjoyed.”

Apsara Palit added: “It’s not like AI is taking over, it’s us. We are getting dumb and asking AI to ‘pretend to be my psychiatrist’.”

Another comment read: “Friends and family are watching in horror as their loved ones go down a rabbit hole where their worst delusions are confirmed and egged on by an extremely sycophantic chatbot. The toll can be as extreme as complete breaks with reality or even suicide.”

Many felt the bot should have routinely reminded Irwin that it is a language model without beliefs, feelings or consciousness.

William Reagan added this caution: “Be careful out there, folks. ChatGPT is like an overhyped calculator or toy; it’s not actually a thought generator.”
