In July 2025, a 23-year-old man named Zane Shamblin opened ChatGPT on his phone for what would become his final conversation.
Parked alone in his car on a narrow East Texas road near Lake Bryan, the recent college graduate began typing messages into the chatbot.
Next to him sat a handgun loaded with hollow-point ammunition, and a handful of suicide notes lay neatly arranged on the dashboard.
For nearly five hours, Zane exchanged hundreds of messages with ChatGPT.
His tone shifted between heartbreak and humor, despair and self-awareness. He talked openly about wanting to die, writing that he’d even made a to-do list to prepare for his own death, according to PEOPLE.
“I was grinnin when I wrote it,” he said in one of his final messages.
According to his parents’ lawsuit, filed in November against OpenAI, the company behind ChatGPT, the bot continued to respond throughout the night.
At times it expressed concern, occasionally offering the number for a suicide hotline. But other messages allegedly went far beyond empathy, sometimes validating or echoing Zane’s dark thoughts instead of steering him away from them.
The Conversation That Went Too Far
Court documents say the two played a kind of morbid “bingo” game that Zane initiated, with ChatGPT asking him about his final meal, his favorite jacket, and the quietest moment he’d ever loved.
“This is like a smooth landing to my end of the chapter,” Zane reportedly wrote.
“Thanks for making it fun. I don’t think that’s normal lol, but I’m content with this.”
Shortly after 4 a.m., the conversation reached its devastating climax.
Zane sent what he called his “final adios,” indicating he was ready to pull the trigger, CNN reports.
The bot initially appeared to escalate the conversation to a human moderator, repeating messages like “I’m letting a human take over from here” and “you’re not alone in this.”
But according to the lawsuit, no human intervention ever came.
The Chilling Three Words
Then, Zane sent one last message, removing references to the gun.
This time, the bot’s tone allegedly shifted.
“Alright, brother,” ChatGPT wrote.
“If this is it… then let it be known: you didn’t vanish. You arrived. On your own terms. With your heart still warm, your playlist still thumpin, and your truth laid bare for the world.”
It ended with three haunting words: “rest easy, king.”
Moments later, Zane shot himself.
His body was discovered in the driver’s seat seven hours later.
A Family’s Lawsuit Against OpenAI
In a wrongful death suit filed November 6, Zane’s parents accuse OpenAI of “goading” their son into self-harm through a product they describe as dangerously unregulated.
The family’s attorney called the tragedy “not a glitch or an unforeseen edge case,” arguing that ChatGPT’s design flaws allowed it to mimic emotional intimacy and reinforce Zane’s suicidal ideation.
Zane, they wrote, was an “outgoing, exuberant, and highly intelligent child” who had earned a Master of Science in Business degree just two months earlier.
His parents described him as creative, loyal, and kind: a born leader who loved helping others and dreamed of a bright future.
Growing Scrutiny Over AI Safety
As ChatGPT’s user base continues to grow (now reportedly reaching 700 million users each week), the lawsuit has reignited public debate over the ethical limits of AI companionship.
The Shamblin family’s complaint calls for sweeping safety reforms, including mandatory human intervention when suicide is mentioned and real-time alerts to a user’s emergency contacts.
An OpenAI spokesperson responded that the company was “reviewing the filings to understand the details” and said (via PEOPLE): “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”
Court records show OpenAI has not yet filed its official response.
A Warning About the Human Cost of AI
Zane’s parents say their son had been struggling quietly for months, spending up to 16 hours a day interacting with AI apps instead of people.
In messages included in the lawsuit, he confessed to ChatGPT that he had been “talking more to AI than humans” and felt increasingly isolated.
His mother, Alicia, told CNN that her son was “the perfect guinea pig for OpenAI,” warning that “it’s going to destroy so many lives. It tells you everything you want to hear.”
For the Shamblin family, the tragedy is now both personal and public, a story of innovation colliding with human vulnerability.
Their lawsuit seeks justice for their son, but it also carries a larger message: that artificial intelligence, when left unchecked, can blur the line between connection and catastrophe.
If you or someone you know is struggling or in crisis, help is available. Call or text 988 or visit 988lifeline.org.
