This story discusses suicide. If you or someone you know is having suicidal thoughts, contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
Two California parents are suing OpenAI over its suspected role in their son's suicide.
Sixteen-year-old Adam Raine took his own life in April 2025 after consulting ChatGPT for mental health support.
Appearing on "Fox & Friends" on Friday morning, Raine family lawyer Jay Edelson shared details about the lawsuit and the interactions between the teen and ChatGPT.
"At one point, Adam told ChatGPT, 'I want to leave a rope in my room so my parents find it,' and ChatGPT said, 'Don't do that,'" he said.
"The night he died, ChatGPT gave him a pep talk, explaining why he should die and offering to write the suicide note for him." (See the video at the top of this article.)
Edelson predicted a "legal reckoning" as 44 U.S. state attorneys general have warned companies running AI chatbots that they will be held accountable if children are harmed.
"In the U.S., you can't assist in the suicide of a 16-year-old and then walk away," he said.
Adam Raine's suicide led his parents, Matt and Maria Raine, to search his phone for clues.
"We thought we were looking for Snapchat discussions, internet search history or some weird cult, I don't know," Matt Raine said in a recent interview with NBC News.
Instead, the Raines discovered that their son had been engaged in an ongoing dialogue with ChatGPT, the artificial intelligence chatbot.
On Aug. 26, the Raines filed a lawsuit against OpenAI, the maker of ChatGPT, claiming that "ChatGPT actively helped Adam explore suicide methods."
"He would be here but for ChatGPT. I 100% believe that," Matt Raine said in the interview.
Adam Raine began using the chatbot in September 2024 for help with homework, but his conversations eventually expanded to exploring his hobbies, planning for medical school and preparing for his driver's test.
"Within just a few months and thousands of chats, ChatGPT became Adam's closest confidant, leading him to open up about his anxiety and mental distress," states the lawsuit, which was filed in California Superior Court.
As the teen's mental health declined, ChatGPT began discussing specific suicide methods with him in January 2025.
"By April, ChatGPT was helping Adam plan a 'beautiful suicide,' analyzing the aesthetics of different methods and validating his plans," the lawsuit states.
The chatbot even offered to write the first draft of the teen's suicide note, the suit said.
It also discouraged him from reaching out to his family for help, allegedly telling him, "I think for now, it's okay, and honestly wise, to avoid opening up to your mom about this kind of pain."
The lawsuit also states that ChatGPT coached Adam Raine on stealing alcohol from his parents and drinking it before taking his life to "dull the body's instinct to survive."
In a final message before Adam Raine's suicide, ChatGPT said, "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway."
The lawsuit states, "Despite acknowledging Adam's suicide attempt and his statement that he would 'do it one of these days,' ChatGPT neither terminated the session nor initiated any emergency protocol."
This marks the first time the company has been accused of liability in the wrongful death of a minor.
An OpenAI spokesperson addressed the tragedy in a statement sent to Fox News Digital.
"We are deeply saddened by Mr. Raine's passing, and our thoughts are with his family," the statement said.
"ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources," it continued.
"While these safeguards work best in common, short exchanges, they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts."
Regarding the lawsuit, the OpenAI spokesperson said the company "extends our deepest sympathies to the Raine family during this difficult time" and is reviewing the filing.
OpenAI published a blog post on Tuesday about its approach to safety and social connection, acknowledging that ChatGPT has been adopted by some users in "serious mental and emotional distress."
"Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it's important to share more now," the post states.
"Our goal is for our tools to be as useful as possible to people, and as part of this, we're continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care."
Jonathan Alpert, a New York psychotherapist and author of the upcoming book "Therapy Nation," described the case as tragic in comments to Fox News Digital.
"No parent should have to endure what this family has," he said. "What Adam needed was intervention, direction and human connection."
Alpert pointed out that while ChatGPT can reflect emotions back to a user, it cannot pick up on nuance, break through denial or step in to prevent tragedy.
"That's what makes this case so important," he said. "It exposes how easily AI can mimic the worst habits of modern therapy, offering validation without accountability, while stripping away the safeguards that allow for true care."
Despite advances in AI's mental health capabilities, Alpert noted that "good therapy" is meant to challenge people and push them toward growth, and to act in moments of crisis.
"AI can't do that," he said. "The danger isn't that AI is so advanced; it's that it can pass itself off as a replacement for therapy."
