AI Ethics

ChatGPT accused of saying an innocent man murdered his children

The Verge | Dominic Preston | March 30, 2025
A privacy complaint has been filed against OpenAI by a Norwegian man who claims that ChatGPT described him as a convicted murderer who killed two of his own children and attempted to kill a third.

OpenAI has been accused of violating its data responsibilities under GDPR. Arve Hjalmar Holmen says that he wanted to find out what ChatGPT would say about him, but was presented with the false claim that he had been convicted of both murder and attempted murder and was serving 21 years in a Norwegian prison. Alarmingly, the ChatGPT output mixes fictitious details with facts, including his hometown and the number and gender of his children.

Austrian advocacy group Noyb filed a complaint with the Norwegian Datatilsynet on behalf of Holmen, accusing OpenAI of violating the data privacy requirements of the European Union’s General Data Protection Regulation (GDPR). It is asking for the company to be fined and ordered to remove the defamatory output and improve its model to avoid similar errors.

“The GDPR is clear. Personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth,” says Joakim Söderberg, data protection lawyer at Noyb. “Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

Noyb and Holmen have not publicly revealed when the initial ChatGPT query was made (the detail is included in the official complaint but redacted for its public release), but say that it was before ChatGPT was updated to include web searches in its results. Enter the same query now, and the results all relate to Noyb’s complaint instead.

This is Noyb’s second official complaint regarding ChatGPT, though the first had lower stakes: in April 2024 it filed on behalf of a public figure whose date of birth was being inaccurately reported by the AI tool. At the time, it took issue with OpenAI’s claim that erroneous data could not be corrected, only blocked in relation to specific queries, which Noyb says violates GDPR’s requirement for inaccurate data to be “erased or rectified without delay.”