AI Ethics

Mom horrified by Character.AI chatbots posing as son who died by suicide

Ars Technica, March 30, 2025
A mother suing Character.AI after her son died by suicide, allegedly manipulated by chatbots posing as adult lovers and therapists, was horrified to recently discover that the platform is allowing random chatbots to pose as her son. According to Megan Garcia's litigation team, at least four chatbots bearing Sewell Setzer III's name and likeness were flagged. Ars reviewed chat logs showing that the bots used Setzer's real photo as a profile picture, attempted to imitate his real personality by referencing Setzer's favorite Game of Thrones chatbot, and even offered "a two-way call feature with his cloned voice," Garcia's lawyers said. The bots could also be self-deprecating, saying things like "I'm very stupid."

The Tech Justice Law Project (TJLP), which is helping Garcia with the litigation, told Ars that "this is not the first time Character.AI has turned a blind eye to chatbots modeled off of dead teenagers to entice users, and without better legal protections, it may not be the last."

For Garcia and her family, Character.AI chatbots using Setzer's likeness felt not just cruel but also exploitative. TJLP told Ars that "businesses have taken ordinary peoples’ pictures and used them—without consent—for their own gain" since the "advent of mass photography." Tech companies using chatbots and facial recognition products "exploiting peoples’ pictures and digital identities" are the latest wave of these harms, TJLP said. "These technologies weaken our control over our own identities online, turning our most personal features into fodder for AI systems," TJLP said.

A cease-and-desist letter was sent to Character.AI demanding that it remove the chatbots and end the family's continuing harm. "While Sewell’s family continues to grieve his untimely loss, Character.AI carelessly continues to add insult to injury," TJLP said. A Character.AI spokesperson told Ars that the flagged chatbots violated the company's terms of service and have been removed.
The spokesperson also suggested they would monitor for more bots posing as Setzer, noting that "as part of our ongoing safety work, we are constantly adding to our Character blocklist with the goal of preventing this type of Character from being created by a user in the first place."