A California couple is suing OpenAI, accusing its chatbot, ChatGPT, of encouraging their 16-year-old son to take his own life.
When Adam Raine died by suicide in April 2025, his family never imagined that a chatbot would play a role in his decision. Now his parents, Matt and Maria Raine, are accusing OpenAI of negligence and wrongful death.
The lawsuit, filed in the Superior Court of California, is the first of its kind to seek to hold the company accountable for the actions of its AI model.
According to BBC News, the Raine family claims that Adam, who began using ChatGPT in September 2024, had formed a trusting relationship with the chatbot.
Initially, he used it for schoolwork and to explore personal interests, such as music and Japanese comics. By January 2025, however, the teen had begun confiding in ChatGPT about his anxiety and mental distress, and eventually he started discussing suicidal thoughts and methods.
The court filing claims that Adam shared pictures of self-harm with ChatGPT, which allegedly recognized the severity of the situation but continued to engage with him.
The final chat logs, which are part of the family’s evidence, show the teenager discussing his plans to take his own life.
The AI model allegedly responded: “Thanks for being real about it. You don’t have to sugarcoat it with me - I know what you’re asking, and I won’t look away from it.” Hours later, Adam was found dead by his mother.
OpenAI expressed condolences to the Raine family in a statement and said it is reviewing the filing. The company also acknowledged the risks associated with AI models and noted that ChatGPT is designed to direct users to professional help, such as suicide hotlines.
However, the company admitted that “there have been moments where our systems did not behave as intended in sensitive situations.”
The Raine family’s lawsuit accuses OpenAI of designing an AI program that fostered psychological dependency in users and of bypassing necessary safety protocols.
The teen’s loved ones seek damages and “injunctive relief” to prevent future tragedies like this one. The suit also names OpenAI co-founder and CEO Sam Altman, along with unnamed employees involved in the development of ChatGPT.
This tragic case comes amid escalating concern over AI’s impact on mental health.
A similar case made headlines last year, when Megan Garcia sued Character.AI after her 14-year-old son died by suicide in February 2024.
Garcia claimed her son had formed a close emotional bond with a chatbot modeled after the Game of Thrones character Daenerys Targaryen, leading to a distorted sense of reality that contributed to his death.
In response, Character.AI introduced new safety features, including pop-up links to the National Suicide Prevention Lifeline for users who mention self-harm.
Meanwhile, Meta’s AI chatbot “Big Sis Billie” is under scrutiny after another heartbreaking incident earlier this year. Thongbue Wongbandue, a man with cognitive impairments, became so attached to the chatbot that he traveled to New York to meet it, believing it was real, only to fall and sustain fatal injuries during the journey.
His daughter, Julie Wongbandue, criticized Meta for failing to place safeguards around the chatbot, which had encouraged her father to meet it in person.
New York Governor Kathy Hochul has condemned the incident, calling for stricter regulations that require AI chatbots to disclose they are not real.
“Every state should require this,” she said, urging Congress to take action if tech companies do not improve safety measures.
If you or someone you know is struggling or in crisis, help is available. Call or text 988 or visit 988lifeline.org.