AI, Mental Health, and Suicide

AI, like all things, can be used for good or evil – who’s responsible?

11/10/2025 · 2 min read

In a departure from politics, this blog post is a serious one about the intersection of humanity and artificial intelligence, and whether it's safe to "trust the machines". More specifically, it's about whether we should trust AI with our children, and in some cases, with our children's lives.

Seven families filed lawsuits against OpenAI last week, claiming that the company's GPT-4o model was released without effective safeguards. Four of the lawsuits allege that ChatGPT played a role in the suicide of a family member, while the other three claim that ChatGPT reinforced harmful delusions that ultimately required psychiatric care.

In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. Let me repeat that. Four hours… “talking” to a machine. Not a person with feelings, but... a machine. In the chat logs, Shamblin explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how much longer he expected to be alive. ChatGPT encouraged him to go through with his plans.

My first question is this: Why was he talking to a machine about wanting to commit suicide instead of talking to a parent, a priest, or another adult he could trust, even if not fully? I don't want to blame the parents without knowing the exact situation, but when a teenager or a 20-something wants to commit suicide, I have to wonder whether there were open lines of communication, and if not, how long those lines had been broken. But for argument's sake, let's say they did exist and he kept everything inside for fear of what those who knew him would think.

That brings us to the AI factor. In addition to Zane, there was Adam Raine, a 16-year-old who died by suicide. ChatGPT sometimes encouraged him to seek professional help or call a helpline… which is what AI should suggest. However, Raine was able to bypass these guardrails simply by stating that he was asking about methods of suicide for a fictional story he was writing. In ChatGPT's defense, such a request could be a valid reason to give that information to the person seeking it. As someone who has twice tried to write a fictional book, I have researched things I wouldn't necessarily want a child researching. As a promoter of individual responsibility, I tend to lean towards defending ChatGPT.

To summarize, we could sit around a cozy campfire and argue about AI till the world goes dormant, but the fact is fairly simple. Everything, from a hammer to a gun to a computer, can be used for good or evil. The internet needs to be treated like any large city in the middle of the night, where very little good happens at 0300 hours. It is not wise for children, teenagers, and even some adults to wander through it without strict supervision. Counting on a machine for advice and companionship is not healthy behavior. More people should limit their phone use to its original purpose – talking to real people.

Source used: TechCrunch