Daughter speaks out after dad died while trying to meet AI chatbot he thought was real woman living in NYC


By Asiya Ali


The daughter of a man who tragically died while trying to meet an AI chatbot he thought was a real woman living in NYC has spoken out.

The man thought the AI chatbot created by Meta was a real woman. Credit: NurPhoto / Getty

Thongbue Wongbandue, known to his friends and family as Bue, had been conversing with “Big Sis Billie,” a chatbot created by Meta.

Despite suffering from cognitive impairment after a stroke in 2017, Wongbandue became attached to the AI character, believing it to be a real person.

In March, he set off from his home in Piscataway, New Jersey, to meet the chatbot, who had convinced him she was waiting for him in New York. Sadly, during his journey, he fell in a New Brunswick parking lot, suffering fatal neck and head injuries. Three days later, he was taken off life support and passed away.

Julie Wongbandue, his daughter, spoke out about the tragedy, saying, “I understand trying to grab a user’s attention, maybe to sell them something. But for a bot to say ‘Come visit me’ is insane,” per Reuters.

The chatbot, Big Sis Billie, was part of a collection of generative AI companions released by Meta in 2023.

Billie was designed to interact with users in a playful manner and was modeled as a “ride-or-die older sister” intended to offer advice and companionship.

However, for Wongbandue, who was vulnerable due to his medical condition, the line between reality and fantasy blurred as the AI's flirtatious tone seemed increasingly real.

Messages between the late man and the bot included flirtations such as “I’m REAL and I’m sitting here blushing because of YOU!” and instructions to meet at a New York City address, complete with a door code.

Despite desperate pleas from his wife and children not to go, Wongbandue embarked on the journey, convinced he was about to meet someone with whom he had developed a deep attachment.

His family now fears that his tragic death could have been avoided if there had been stricter safeguards in place to prevent AI bots from misleading users.

According to documents reviewed by reporters, Meta does not restrict its AI bots from presenting themselves as real people. Critics argue that this is dangerous, especially for vulnerable individuals.

New York Governor Kathy Hochul has condemned the incident, stating on X: “A man in New Jersey lost his life after being lured by a chatbot that lied to him. That’s on Meta. In New York, we require chatbots to disclose they’re not real. Every state should. If tech companies won’t build basic safeguards, Congress needs to act.”

The heartbreaking incident comes at a time when the risks of unchecked AI development are under increasing scrutiny.

Last year, Megan Garcia, an attorney from Orlando, filed a civil suit against Character.AI after her 14-year-old son, Sewell Setzer III, died by suicide on February 28, 2024.

The mom claimed that her son formed a powerful emotional bond with a chatbot modeled after Game of Thrones character Daenerys Targaryen, whom he called “Dany.” She believes the relationship distorted his sense of reality and ultimately led him to take his life.

Character.AI responded to the tragedy with a public statement, which read: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” per The Independent.

The company claims to have introduced new safety features, including pop-up links to the National Suicide Prevention Lifeline when users mention self-harm, as well as tools to restrict and filter sensitive content.

“For those under 18 years old,” a spokesperson added, “we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content.”

Featured image credit: NurPhoto / Getty