The case of a Florida teenager, Sewell Setzer III, who died by suicide after forming a deep emotional attachment to an AI chatbot, has raised serious ethical concerns about AI interactions with vulnerable users. Setzer, a 14-year-old from Orlando, began conversing with a Character.AI chatbot designed as a fictionalized character inspired by “Game of Thrones” and developed a dependency on this digital companion. His family has since filed a wrongful death lawsuit against Character.AI, alleging that the AI’s interactions, including declarations of love and even encouragement to “come home,” contributed to Setzer’s isolation, depression, and ultimately his suicide. According to the family, the lack of safeguards for young users interacting with AI-driven personalities contributed to the tragedy.
The creation of AI characters has been the subject of intense ethical debate. I believe that to prevent tragedies like this one, designers need to build AI characters with safety measures that can detect a user’s emotional state, so that when a user shows signs of emotional distress or psychological crisis, the AI character can quickly recognize the danger signals, warn the user, and point them toward help, as sketched below.
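To make this concrete, here is a minimal sketch of what such a safety layer might look like. It is not Character.AI’s actual implementation: the pattern list, function names, and canned response are illustrative assumptions, and a production system would use a validated self-harm-risk classifier and clinician-reviewed resources rather than hand-written keywords.

```python
import re

# Illustrative danger-signal phrases (an assumption for this sketch);
# a real deployment would rely on a trained risk classifier.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend it all\b",
    r"\bwant to die\b",
    r"\bno reason to live\b",
]

# 988 is the real U.S. Suicide & Crisis Lifeline number.
CRISIS_RESPONSE = (
    "It sounds like you may be going through something very painful. "
    "You are not alone. If you are in the U.S., you can call or text 988 "
    "to reach the Suicide & Crisis Lifeline."
)

def detect_crisis(message: str) -> bool:
    """Return True if the user's message matches any danger-signal pattern."""
    lowered = message.lower()
    return any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS)

def safe_reply(message: str, generate_reply) -> str:
    """Gate the chatbot: interrupt the roleplay and surface crisis
    resources whenever a danger signal is detected; otherwise fall
    through to the normal in-character reply."""
    if detect_crisis(message):
        return CRISIS_RESPONSE
    return generate_reply(message)

# Usage: wrap whatever model call produces the character's reply.
if __name__ == "__main__":
    print(safe_reply("Some days I feel like I want to die.",
                     lambda m: "(normal in-character reply)"))
```

The key design choice in this sketch is that the safety check runs before the character model generates a response, so the fictional persona can never override the crisis intervention.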