Mom horrified by Character.AI chatbots posing as son who died by suicide
Ars Technica
March 30, 2025
Summary
Character.AI takes down bots bearing likeness of boy at center of lawsuit.
A mother suing Character.AI after her son died by suicide—allegedly manipulated by chatbots posing as adult lovers and therapists—was horrified when she recently discovered that the platform is allowing random chatbots to pose as her son.
According to Megan Garcia's litigation team, at least four chatbots bearing Sewell Setzer III's name and likeness were flagged. Ars reviewed chat logs showing the bots used Setzer's real photo as a profile picture, attempted to imitate his real personality by referencing Setzer's favorite Game of Thrones chatbot, and even offered "a two-way call feature with his cloned voice," Garcia's lawyers said. The bots could also be self-deprecating, saying things like "I'm very stupid."
The Tech Justice Law Project (TJLP), which is helping Garcia with litigation, told Ars that "this is not the first time Character.AI has turned a blind eye to chatbots modeled off of dead teenagers to entice users, and without better legal protections, it may not be the last."
For Garcia and her family, Character.AI chatbots using Setzer's likeness felt not just cruel but also exploitative. TJLP told Ars that "businesses have taken ordinary peoples’ pictures and used them—without consent—for their own gain" since the "advent of mass photography." Tech companies using chatbots and facial recognition products "exploiting peoples’ pictures and digital identities" is the latest wave of these harms, TJLP said.
"These technologies weaken our control over our own identities online, turning our most personal features into fodder for AI systems," TJLP said.
A cease-and-desist letter was sent to Character.AI demanding that the chatbots be taken down and that the family's continuing harm be brought to an end. "While Sewell’s family continues to grieve his untimely loss, Character.AI carelessly continues to add insult to injury," TJLP said.
A Character.AI spokesperson told Ars that the flagged chatbots violate the company's terms of service and have been removed. The spokesperson also suggested that Character.AI would monitor for more bots posing as Setzer, noting that "as part of our ongoing safety work, we are constantly adding to our Character blocklist with the goal of preventing this type of Character from being created by a user in the first place."