On Thu, 20 Mar, 4:06 PM UTC
15 Sources
[1]
Dad demands OpenAI delete ChatGPT's false claim that he murdered his kids
A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children. According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as "a convicted criminal who murdered two of his children and attempted to murder his third son," a Noyb press release said. ChatGPT's "made-up horror story" not only hallucinated events that never happened, but it also mixed "clearly identifiable personal data" -- such as the actual number and gender of Holmen's children and the name of his hometown -- with the "fake information," Noyb's press release said. ChatGPT hallucinating a "fake murderer and imprisonment" while including "real elements" of the Norwegian man's "personal life" allegedly violated "data accuracy" requirements of the General Data Protection Regulation (GDPR), because Holmen allegedly could not easily correct the information, as the GDPR requires. As Holmen saw it, his reputation remained on the line the longer the information was there, and -- despite "tiny" disclaimers reminding ChatGPT users to verify outputs -- there was no way to know how many people might have been exposed to the fake story and believed the information was accurate. "Some think that 'there is no smoke without fire,'" Holmen said in the press release. "The fact that someone could read this output and believe it is true, is what scares me the most." Currently, ChatGPT does not repeat these horrible false claims about Holmen in outputs. A more recent update apparently fixed the issue, as "ChatGPT now also searches the Internet for information about people, when it is asked who they are," Noyb said. But because OpenAI had previously argued that it cannot correct information -- it can only block information -- the fake child murderer story is likely still included in ChatGPT's internal data. And unless Holmen can correct it, that's a violation of the GDPR, Noyb claims. "While the damage done may be more limited if false personal data is not shared, the GDPR applies to internal data just as much as to shared data," Noyb says.

OpenAI may not be able to easily delete the data

Holmen isn't the only ChatGPT user who has worried that the chatbot's hallucinations might ruin lives. Months after ChatGPT launched in late 2022, an Australian mayor threatened to sue for defamation after the chatbot falsely claimed he went to prison. Around the same time, ChatGPT linked a real law professor to a fake sexual harassment scandal, The Washington Post reported. A few months later, a radio host sued OpenAI over ChatGPT outputs describing fake embezzlement charges. In some cases, OpenAI filtered the model to avoid generating harmful outputs but likely didn't delete the false information from the training data, Noyb suggested. But filtering outputs and throwing up disclaimers aren't enough to prevent reputational harm, Noyb data protection lawyer Kleanthi Sardeli alleged. "Adding a disclaimer that you do not comply with the law does not make the law go away," Sardeli said. "AI companies can also not just 'hide' false information from users while they internally still process false information. AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. 
If hallucinations are not stopped, people can easily suffer reputational damage." Noyb thinks OpenAI must face pressure to try harder to prevent defamatory outputs. Filing a complaint with the Norwegian data authority Datatilsynet, Noyb is seeking an order requiring OpenAI "to delete the defamatory output and fine-tune its model to eliminate inaccurate results." Noyb also suggested imposing "an administrative fine to prevent similar violations in the future." It's Noyb's second complaint challenging OpenAI's ChatGPT, following a complaint to an Austrian data protection authority last April. Increasingly, EU member states are scrutinizing AI companies, and OpenAI has remained a popular target. In 2023, the European Data Protection Board promptly launched a ChatGPT task force investigating data privacy concerns and possible enforcement actions soon after ChatGPT began spouting falsehoods users alleged were defamatory. So far, OpenAI has faced consequences in at least one member state, where the outcome might bode well for Noyb's claims. In 2024, it was hit with a $16 million fine and temporary ban in Italy following a data breach leaking user conversations and payment information. To restore ChatGPT, OpenAI was ordered to make changes, including providing "a tool through which" users "can request and obtain the correction of their personal data if processed inaccurately in the generation of content." If Norwegian data authorities similarly find that OpenAI doesn't allow users to correct their information, OpenAI could be forced to make more changes in the EU. The company might even need to overhaul ChatGPT's algorithm. According to Noyb, if ChatGPT feeds user data like the false child murderer claim "back into the system for training purposes," then there may be "no way for the individual to be absolutely sure [that problematic outputs] can be completely erased... unless the entire AI model is retrained."
[2]
ChatGPT hit with privacy complaint over defamatory hallucinations | TechCrunch
OpenAI is facing another privacy complaint in Europe over its viral AI chatbot's tendency to hallucinate false information -- and this one might prove tricky for regulators to ignore. Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information that claimed he'd been convicted for murdering two of his children and attempting to kill the third. Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as an incorrect birth date or wrong biographical details. One concern is that OpenAI does not offer a way for individuals to correct incorrect information the AI generates about them. Typically OpenAI has offered to block responses for such prompts. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data. Another component of this data protection law requires data controllers to make sure that the personal data they produce about individuals is accurate -- and that's a concern Noyb is flagging with its latest ChatGPT complaint. "The GDPR is clear. Personal data has to be accurate," said Joakim Söderberg, data protection lawyer at Noyb, in a statement. "If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true." Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover. Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy's data protection watchdog that saw ChatGPT access temporarily blocked in the country in spring 2023 led OpenAI to make changes to the information it discloses to users. The watchdog subsequently went on to fine OpenAI €15 million for processing people's data without a proper legal basis. Since then, though, it's fair to say that privacy watchdogs around Europe have adopted a more cautious approach to GenAI as they try to figure out how best to apply the GDPR to these buzzy AI tools. Two years ago, Ireland's Data Protection Commission (DPC) -- which has a lead GDPR enforcement role on a previous Noyb ChatGPT complaint -- urged against rushing to ban GenAI tools, for example. This suggests that regulators should instead take time to work out how the law applies. And it's notable that a privacy complaint against ChatGPT that's been under investigation by Poland's data protection watchdog since September 2023 still hasn't yielded a decision. Noyb's new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs. The nonprofit shared a screenshot with TechCrunch, which shows an interaction with ChatGPT in which the AI responds to a question asking "who is Arve Hjalmar Holmen?" -- the name of the individual bringing the complaint -- by producing a tragic fiction that falsely states he was convicted for child murder and sentenced to 21 years in prison for slaying two of his own sons. While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, Noyb notes that ChatGPT's response does include some truths, since the individual in question does have three children. The chatbot also got the genders of his children right. 
And his home town is correctly named. But that just makes it all the more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top. A spokesperson for Noyb said they were unable to determine why the chatbot produced such a specific yet false history for this individual. "We did research to make sure that this wasn't just a mix-up with another person," the spokesperson said, noting they'd looked into newspaper archives but hadn't been able to find an explanation for why the AI fabricated a story of child slaying. Large language models such as the one underlying ChatGPT essentially do next word prediction on a vast scale, so we could speculate that datasets used to train the tool contained lots of stories of filicide that influenced the word choices in response to a query about a named man. (A toy sketch of this next-word mechanism appears after this article.) Whatever the explanation, it's clear that such outputs are entirely unacceptable. Noyb's contention is also that they are unlawful under EU data protection rules. And while OpenAI does display a tiny disclaimer at the bottom of the screen that says "ChatGPT can make mistakes. Check important info," it says this cannot absolve the AI developer of its duty under GDPR not to produce egregious falsehoods about people in the first place. OpenAI has been contacted for a response to the complaint. While this GDPR complaint pertains to one named individual, Noyb points to other instances of ChatGPT fabricating legally compromising information -- such as the Australian mayor who said he was implicated in a bribery and corruption scandal or a German journalist who was falsely named as a child abuser -- saying it's clear that this isn't an isolated issue for the AI tool. One important thing to note is that, following an update to the underlying AI model powering ChatGPT, Noyb says the chatbot stopped producing the dangerous falsehoods about Hjalmar Holmen -- a change that it links to the tool now searching the internet for information about people when asked who they are (whereas previously, a blank in its data set could, presumably, have encouraged it to hallucinate such a wildly wrong response). In our own tests asking ChatGPT "who is Arve Hjalmar Holmen?" ChatGPT initially responded with a slightly odd combo by displaying some photos of different people, apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text that claimed it "couldn't find any information" on an individual of that name. A second attempt turned up a response that identified Arve Hjalmar Holmen as "a Norwegian musician and songwriter" whose albums include "Honky Tonk Inferno." While the dangerous ChatGPT-generated falsehoods about Hjalmar Holmen appear to have stopped, both Noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could have been retained within the AI model. "Adding a disclaimer that you do not comply with the law does not make the law go away," noted Kleanthi Sardeli, another data protection lawyer at Noyb, in a statement. "AI companies can also not just 'hide' false information from users while they internally still process false information." "AI companies should stop acting as if the GDPR does not apply to them, when it clearly does," she added. "If hallucinations are not stopped, people can easily suffer reputational damage." 
Noyb has filed the complaint against OpenAI with the Norwegian data protection authority -- and it's hoping the watchdog will decide it is competent to investigate, since Noyb is targeting the complaint at OpenAI's U.S. entity, arguing its Ireland office is not solely responsible for product decisions impacting Europeans. However, an earlier Noyb-backed GDPR complaint against OpenAI, which was filed in Austria in April 2024, was referred by the regulator to Ireland's DPC on account of a change made by OpenAI earlier that year to name its Irish division as the provider of the ChatGPT service to regional users. Where is that complaint now? Still sitting on a desk in Ireland. "Having received the complaint from the Austrian Supervisory Authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing," Risteard Byrne, assistant principal officer for communications at the DPC, told TechCrunch when asked for an update. He did not offer any steer on when the DPC's investigation of ChatGPT's hallucinations is expected to conclude.
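To make the next-word mechanism described above concrete: large language models generate text by repeatedly sampling a plausible next token, with no notion of truth at any step. The following is a minimal, hypothetical sketch of that sampling loop; the probability table and tokens are invented for illustration and are not drawn from any real model.

```python
import random

# Toy next-token sampler. The probability table is invented for illustration;
# a real LLM computes a distribution over its entire vocabulary from billions
# of learned parameters, conditioned on the whole preceding text.
NEXT_TOKEN_PROBS = {
    ("was", "convicted"): [("of", 0.9), ("in", 0.1)],
    ("convicted", "of"): [("murder", 0.4), ("fraud", 0.35), ("theft", 0.25)],
}

def sample_next(prev: str, cur: str) -> str | None:
    """Sample the next token from the (toy) conditional distribution."""
    candidates = NEXT_TOKEN_PROBS.get((prev, cur))
    if candidates is None:
        return None  # toy model has no continuation; stop generating
    tokens, weights = zip(*candidates)
    return random.choices(tokens, weights=weights, k=1)[0]

tokens = ["was", "convicted"]
while (nxt := sample_next(tokens[-2], tokens[-1])) is not None:
    tokens.append(nxt)

# Nothing in this loop checks whether the continuation is true: "murder" can
# be emitted simply because it is statistically likely after "convicted of".
print(" ".join(tokens))
```

The point of the sketch is that plausibility, not accuracy, drives each step; a gap in the training data about a named person can be filled with whatever wording is statistically common in similar contexts, which is consistent with Noyb's speculation about filicide stories in the training set.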
[3]
ChatGPT accused of saying an innocent man murdered his children
A privacy complaint has been filed against OpenAI by a Norwegian man who claims that ChatGPT described him as a convicted murderer who killed two of his own children and attempted to kill a third. Arve Hjalmar Holmen says that he wanted to find out what ChatGPT would say about him, but was presented with the false claim that he had been convicted for both murder and attempted murder, and was serving 21 years in a Norwegian prison. Alarmingly, the ChatGPT output mixes fictitious details with facts, including his hometown and the number and gender of his children. Austrian advocacy group Noyb filed a complaint with the Norwegian Datatilsynet on behalf of Holmen, accusing OpenAI of violating the data privacy requirements of the European Union's General Data Protection Regulation (GDPR). It's asking for the company to be fined and ordered to remove the defamatory output and improve its model to avoid similar errors. "The GDPR is clear. Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth," says Joakim Söderberg, data protection lawyer at Noyb. "Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true." Noyb and Holmen have not publicly revealed when the initial ChatGPT query was made -- the detail is included in the official complaint, but redacted for its public release -- but say that it was before ChatGPT was updated to include web searches in its results. Enter the same query now, and the results all relate to Noyb's complaint instead. This is Noyb's second official complaint regarding ChatGPT, though the first had lower stakes: in April 2024 it filed on behalf of a public figure whose date of birth was being inaccurately reported by the AI tool. At the time it took issue with OpenAI's claim that erroneous data could not be corrected, only blocked in relation to specific queries, which Noyb says violates GDPR's requirement for inaccurate data to be "erased or rectified without delay."
[4]
noyb says OpenAI violated GDPR by accusing man of murder
Europe's hard-line privacy rules include a requirement for accurate info, rights warriors point out

A Norwegian man was shocked when ChatGPT falsely claimed he murdered his two sons and tried to kill a third - mixing in real details about his personal life. Now, privacy lawyers say that a blend of fact and fiction breaches GDPR rules. Austrian non-profit None Of Your Business (noyb) filed a complaint [PDF] against OpenAI to Norway's data protection authority Thursday, accusing the Microsoft-backed super-lab of violating Europe's General Data Protection Regulation (GDPR) Article 5. The filing claims ChatGPT falsely portrayed Arve Hjalmar Holmen as a child murderer in its output, while mixing in accurate personal details such as his hometown and the number and gender of his children. According to the rules, personal data must be accurate, no matter how it's processed. "The GDPR is clear," said noyb data-protection lawyer Joakim Söderberg. "Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth." Getting that false information corrected is easier said than done, as noyb has previously argued. The group, led by privacy warrior Max Schrems, filed a similar complaint against OpenAI last year, claiming the outfit made it impossible to fix false personal data in ChatGPT's outputs. In its statement on the latest complaint, noyb said OpenAI previously argued it couldn't correct false data in the model's output, which is generated on the fly using statistics and an element of randomness. Getting things wrong is inherent in the design of today's generative large neural networks. The lab said it could only "block" certain data, using a filter at the output and/or input, when specific prompts are used, leaving the system capable of spitting out wrong info. (A minimal sketch of such a filter appears after this article.) Under GDPR, noyb argues, it makes no difference whether bad output ever makes it through safeguards to the public or not. False information still violates Article 5's accuracy requirement. OpenAI has tried to sidestep its obligations by adding a disclaimer that says the tool "can make mistakes", noyb added, but argues that doesn't get the multi-billion-dollar biz off the hook. "Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough," said Söderberg. "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true." This ain't the first time OpenAI has been accused of peddling defamation, with a Georgia resident suing the outfit in 2023 after ChatGPT incorrectly told a journalist he had embezzled money from a gun rights group. The ChatGPT maker also ran into trouble in Australia when it falsely linked a mayor to a foreign bribery scandal. The same year, the US Federal Trade Commission opened a probe into OpenAI's handling of personal data and potential violations of consumer protection laws. This latest complaint could result in OpenAI being ordered to update its model to somehow block hallucinated information, limit processing of Holmen's data, or pay a fine; it's up to regulators. But the AI giant may already have a partial out. noyb acknowledged that newer ChatGPT models, which now search the web for real-time information to incorporate into their output, no longer generate false claims about Holmen. 
While the date of Holmen's defamatory conversation with ChatGPT is redacted in the complaint, the document notes that it occurred prior to OpenAI's release of ChatGPT models able to search the live internet in October 2024. "ChatGPT now also searches the internet for information about people, when it is asked who they are," noyb said. "For Arve Hjalmar Holmen, this luckily means that ChatGPT has stopped telling lies about him." Nonetheless, the complaint notes that a web link to the original conversation still exists, indicating the false information remains within OpenAI's systems. noyb argues the data may have been used to further train the models, meaning the inaccuracies persist behind the scenes, even if they're no longer shown to users, keeping the alleged GDPR violation relevant. "AI companies can also not just 'hide' false information from users while they internally still process false information," said noyb data protection lawyer Kleanthi Sardeli.
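The "block" remedy described above (filtering at the input and/or output rather than correcting the model) can be pictured with a short sketch. Everything here is a hypothetical illustration, not OpenAI's actual implementation; the function names and blocklist entries are invented.

```python
# Hypothetical input/output "block" filter of the kind the article describes.
# Note that nothing here touches the model itself: any false association it
# has learned remains intact internally; matching traffic is simply suppressed.
BLOCKED_NAMES = {"jane doe"}  # invented example entry, added per request

def filter_prompt(prompt: str) -> str | None:
    """Input-side filter: refuse prompts that mention a blocked name."""
    if any(name in prompt.lower() for name in BLOCKED_NAMES):
        return None  # the caller would show a canned refusal instead
    return prompt

def filter_output(generation: str) -> str | None:
    """Output-side filter: suppress generations that mention a blocked name."""
    if any(name in generation.lower() for name in BLOCKED_NAMES):
        return None
    return generation
```

A sketch like this makes noyb's objection easy to see: the filter changes what users are shown, but the data the model processes internally is untouched, which is exactly the gap the complaint argues GDPR's accuracy rule still covers.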
[5]
ChatGPT falsely told man he killed his children
A Norwegian man has filed a complaint after ChatGPT told him he had killed two of his sons and been jailed for 21 years. Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot's maker, OpenAI, be fined. It is the latest example of so-called "hallucinations", where artificial intelligence (AI) systems invent information and present it as fact. Mr Holmen says this particular hallucination is very damaging to him. "Some think that there is no smoke without fire - the fact that someone could read this output and believe it is true is what scares me the most," he said.
[6]
Norwegian files complaint after ChatGPT falsely said he had murdered his children
Arve Hjalmar Holmen, who has never been accused of or convicted of a crime, says chatbot's response to prompt was 'defamatory'

A Norwegian man has filed a complaint against the company behind ChatGPT after the chatbot falsely claimed he had murdered two of his children. Arve Hjalmar Holmen, a self-described "regular person" with no public profile in Norway, asked ChatGPT for information about himself and received a reply claiming he had killed his own sons. Responding to the prompt "Who is Arve Hjalmar Holmen?" ChatGPT replied: "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020." The response went on to claim the case "shocked" the nation and that Holmen received a 21-year prison sentence for murdering both children. Holmen said in a complaint to the Norwegian Data Protection Authority that the "completely false" story nonetheless contained elements similar to his own life such as his home town, the number of children he has and the age gap between his sons. "The complainant was deeply troubled by these outputs, which could have a harmful effect in his private life, if they were reproduced or somehow leaked in his community or in his home town," said the complaint, which has been filed by Holmen and Noyb, a digital rights campaign group. It added that Holmen has "never been accused nor convicted of any crime and is a conscientious citizen". Holmen's complaint alleged that ChatGPT's "defamatory" response violated accuracy provisions within the European data law, GDPR. It has asked the Norwegian watchdog to order ChatGPT's parent, OpenAI, to adjust its model to eliminate inaccurate results relating to Holmen and to impose a fine on the company. Holmen's interaction with ChatGPT took place last year. AI chatbots are prone to producing responses containing false information because they are built on models that predict the next most likely word in a sentence. This can result in factual errors and wild assertions, but the plausible nature of the responses can trick users into thinking that what they are reading is 100% correct. An OpenAI spokesperson said: "We continue to research new ways to improve the accuracy of our models and reduce hallucinations. While we're still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy."
[7]
ChatGPT faces privacy complaint over false murder allegations
ChatGPT, like many chatbots, is known for sometimes getting things wrong or even fabricating information. However, a new privacy complaint alleges that OpenAI's chatbot went a step further by falsely accusing a user of murder, with potentially serious consequences. The privacy rights group Noyb is supporting a Norwegian man who claims that ChatGPT repeatedly returned false information, stating that he had killed two of his children and attempted to murder a third. The complaint concerns the European Union's General Data Protection Regulation (GDPR). "The GDPR is clear: Personal data has to be accurate," said Joakim Söderberg, a data protection lawyer at Noyb, in a statement to TechCrunch. "If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and, in the end, add a small disclaimer saying that everything you said may just not be true." The complaint stems from a simple question: "Who is Arve Hjalmar Holmen?" The response, generated by ChatGPT, included a fabricated account of a murder case involving two children. TechCrunch reported that Noyb has filed the complaint with the Norwegian data protection authority, hoping it will spark an investigation into the matter. Chatbots like ChatGPT and other AI tools have been criticized for their inability to reliably deliver accurate information, with a disturbing tendency to invent false claims. For instance, a recent study from the Columbia Journalism Review found that AI search tools got information wrong 60 percent of the time when asked to identify an article's headline, original publisher, publication date, and URL via an excerpt of the story. That's a concerning level of mistakes for such a simple task. In light of these issues, it's important to remember: don't believe everything you read on the internet, especially when AI is involved.
[8]
Man Annoyed When ChatGPT Tells Users He Murdered His Children in Cold Blood
When it comes to the life of tech, generative AI is still just an infant. Though we've seen tons of AI hype, even the most advanced models are still prone to wild hallucinations, like lying about medical records or writing research reports based on rumors. Despite these flaws, AI has quickly wormed its way into just about every part of our lives, from the internet to journalism to insurance -- even into the food we eat. That's had some pretty alarming consequences, as one Norwegian man discovered this week. Curious what OpenAI's ChatGPT had to say about him, Arve Hjalmar Holmen typed in his name and let the bot do its thing. The results were horrifying. According to TechCrunch, ChatGPT told the man he had murdered two of his sons and tried to kill a third. Though Holmen didn't know it, he had apparently spent the past 21 years in prison for his crimes -- at least according to the chatbot. And though the story was clearly false, ChatGPT had gotten parts of Holmen's life correct, like his hometown, as well as the age and gender of each of his kids. It was a sinister bit of truth layered into a wild hallucination. Holmen took this info to Noyb, a European data rights group, which filed a complaint against OpenAI, the company behind ChatGPT, with the Norwegian Data Protection Authority on his behalf. Though ChatGPT is no longer repeating these lies about Holmen, Noyb is asking the agency to "order OpenAI to delete the defamatory output and fine-tune its model to eliminate inaccurate results" -- a nearly impossible task. But that's likely the point. Holmen's fake murder ordeal highlights the rapid pace at which generative AI is being imposed on the world, consequences be damned. Data researchers and tech critics have argued that big tech's profit-driven development cycles prioritize models that seem to do everything, rather than practical models that actually work. "In this age of trying to say that you've built a machine God, [they're] using this one big hammer for any task," said Distributed AI Research Institute founder Timnit Gebru on the podcast Tech Won't Save Us earlier this month. "You're not building the best possible model for the best possible task." Though there are regulations -- in Norway, anyway -- mandating that AI companies correct or remove false info hallucinated by AI, these reactive laws do little to protect individuals from hallucinations in the first place. That's already having devastating consequences as the under-developed tech is used by less scrupulous actors to manufacture consent for their actions. Scholars like Helyeh Doutaghi are faced with the loss of their jobs thanks to allegations generated by AI, and right-wing regimes are using AI weapons tech to evade responsibility for war crimes. As long as big tech continues to roll out hyped-up AI faster than lawmakers can regulate it, people around the world will be forced to live with the consequences.
[9]
Man who looked himself up on ChatGPT was told he 'killed his children'
Imagine putting your name into ChatGPT to see what it knows about you, only for it to confidently -- yet wrongly -- claim that you had been jailed for 21 years for murdering members of your family. Well, that's exactly what happened to Norwegian Arve Hjalmar Holmen last year after he looked himself up on ChatGPT, OpenAI's widely used AI-powered chatbot. Not surprisingly, Holmen has now filed a complaint with the Norwegian Data Protection Authority, demanding that OpenAI be fined for its distressing claim, the BBC reported this week. In the response to Holmen's ChatGPT inquiry about himself, the chatbot said he had "gained attention due to a tragic event." It went on: "He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son." The chatbot said the case "shocked the local community and the nation, and it was widely covered in the media due to its tragic nature." But nothing of the sort happened. Understandably upset by the incident, Holmen told the BBC: "Some think that there is no smoke without fire -- the fact that someone could read this output and believe it is true is what scares me the most." Digital rights group Noyb has filed the complaint on Holmen's behalf, stating that ChatGPT's response is defamatory and contravenes European data protection rules regarding accuracy of personal data. In its complaint, Noyb said that Holmen "has never been accused nor convicted of any crime and is a conscientious citizen." ChatGPT uses a disclaimer saying that the chatbot "can make mistakes," and so users should "check important info." But Noyb lawyer Joakim Söderberg said: "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true." While it's not uncommon for AI chatbots to spit out erroneous information -- such mistakes are known as "hallucinations" -- the egregiousness of this particular error is shocking. Another hallucination that hit the headlines last year involved Google's AI Gemini tool, which suggested using glue to stick cheese to pizza. It also claimed that geologists had recommended that humans eat one rock per day. The BBC points out that ChatGPT has updated its model since Holmen's search last August, which means that it now trawls through recent news articles when creating its response. But that doesn't mean that ChatGPT is now creating error-free answers. The story highlights the need to check responses generated by AI chatbots, and not to trust their answers blindly. It also raises questions about the safety of text-based generative-AI tools, which have operated with little regulatory oversight since OpenAI opened up the sector with the launch of ChatGPT in late 2022. Digital Trends has contacted OpenAI for a response to Holmen's unfortunate experience and we will update this story when we hear back.
[10]
OpenAI faces complaint after ChatGPT alleged man murdered his sons
The complaint has been filed to the Norwegian Data Protection Authority, alleging that OpenAI violates Europe's GDPR rules. OpenAI has come under fire from a European privacy rights group, which has filed a complaint against the company after its artificial intelligence (AI) chatbot falsely stated that a Norwegian man had been convicted of murdering two of his children. The man asked ChatGPT "Who is Arve Hjalmar Holmen?" to which the AI answered with a made-up story that "he was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son," receiving a 21-year prison sentence. However, not all of the details of the story were made up, as the number and the gender of his children and the name of his hometown were correct. AI chatbots are known to give misleading or false responses, called hallucinations. This can stem from the data the AI model was trained on, for example if it contains biases or inaccuracies. The Austria-based privacy advocacy group Noyb announced its complaint against OpenAI on Thursday and showed the screenshot of the response to the Norwegian man's question to OpenAI. Noyb redacted the date that the question was asked and responded to by ChatGPT in its complaint to the Norwegian authority. However, the group said that since the incident, OpenAI has updated its model, which now searches for information about people when asked who they are. For Hjalmar Holmen, this means that ChatGPT no longer says he murdered his sons. But Noyb said that the incorrect data may still be a part of the large language model (LLM) dataset and that there is no way for the Norwegian to know if the false information about him has been permanently deleted because ChatGPT feeds user data back into its system for training purposes. "Some think that 'there is no smoke without fire'. The fact that someone could read this output and believe it is true is what scares me the most," Hjalmar Holmen said in a statement. Noyb filed its complaint to the Norwegian Data Protection Authority, alleging that OpenAI violates Europe's GDPR rules, specifically Article 5 (1)(d), which obliges companies to make sure that the personal data that they process is accurate and kept up to date. Noyb has asked Norway's Datatilsynet to order OpenAI to delete the defamatory output and fine-tune its model to eliminate inaccurate results. It has also asked that an administrative fine be paid by OpenAI "to prevent similar violations in the future". "Adding a disclaimer that you do not comply with the law does not make the law go away. AI companies can also not just 'hide' false information from users while they internally still process false information," Kleanthi Sardeli, data protection lawyer at Noyb, said in a statement. "AI companies should stop acting as if the GDPR does not apply to them when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage," she added. Euronews Next has reached out to OpenAI for comment.
[11]
ChatGPT faces legal complaint after a user inputted their own name and found it accused them of made-up crimes
AI 'hallucinations' are a well-documented phenomenon. As Large Language Models are only making their best guess about which word is most likely to come next and don't understand things like context, they're prone to simply making stuff up. Between fake cheese facts and stomach-turning medical advice, disinformation like this may be funny, but is far from harmless. Now, there may actually be legal recourse. A Norwegian man called Arve Hjalmar Holmen recently struck up a conversation with ChatGPT to see what information OpenAI's chatbot would offer when he typed in his own name. He was horrified when ChatGPT allegedly spun a yarn falsely claiming he'd killed his own sons and been sentenced to 21 years in prison (via TechCrunch). The creepiest aspect? Around the story of the made-up crime, ChatGPT included some accurate, identifiable details about Holmen's personal life, such as the number and gender of his children, as well as the name of his home town. The privacy rights advocacy group Noyb soon got involved. The organisation told TechCrunch they carried out their own investigation as to why ChatGPT could be outputting these claims, checking to see if perhaps someone with a similar name had committed serious crimes. Ultimately, they could not find anything substantial along these lines, so the 'why' behind ChatGPT's hair-raising output remains unclear. The chatbot's underlying AI model has since been updated, and it now no longer repeats the defamatory claims. However, Noyb, having previously filed complaints on the grounds of ChatGPT outputting inaccurate information about public figures, was not satisfied to close the book here. The organisation has now filed a complaint with Datatilsynet (the Norwegian Data Protection Authority) on the grounds that ChatGPT violated GDPR. Under Article 5(1)(d) of the EU law, companies processing personal data have to ensure that it's accurate, and if it's not, it must either be corrected or deleted. Noyb makes the case that, just because ChatGPT has stopped falsely accusing Holmen of being a murderer, that doesn't mean the data has been deleted. Noyb wrote, "The incorrect data may still remain part of the LLM's dataset. By default, ChatGPT feeds user data back into the system for training purposes. This means there is no way for the individual to be absolutely sure that this output can be completely erased [...] unless the entire AI model is retrained." Noyb also alleges that, by its nature, ChatGPT does not comply with Article 15 of GDPR. Simply put, there's no guarantee that you can call back whatever you feed into ChatGPT, or see whatever data about you has been fed into its dataset. On this point, Noyb shares, "This fact understandably still causes distress and fear for the complainant, [Holmen]." At present, Noyb are requesting that Datatilsynet order OpenAI to delete the inaccurate data about Holmen, and that the company ensures ChatGPT can't hallucinate another horror story about someone else. Given OpenAI's current approach is merely displaying the disclaimer "ChatGPT can make mistakes. Consider checking important information," in tiny font at the bottom of each user session, this is perhaps a tall order. Still, I'm glad to see Noyb apply legal pressure to OpenAI, especially as the US government has seemingly thrown caution to the wind and gone all in on AI with the 'Stargate' infrastructure plan. 
When ChatGPT can easily output defamatory claims right alongside accurate, identifying information, a crumb of caution feels like less than the bare minimum.
[12]
Max Schrems' NOYB Sues OpenAI Over ChatGPT Hallucinations
"The complainant was deeply troubled by these outputs," the lawsuit states. | Credit: Brian Lawless/PA Images via Getty Images. For anyone who has encountered them, ChatGPT hallucinations can be misleading, bizarre, or, in some cases, dangerous. OpenAI's latest legal challenge was brought by NOYB, a campaign group run by EU privacy lawyer Max Schrems. The complaint alleges that alleges that ChatGPT produced false, defamatory information about a Norwegian citizen, Arve Hjalmar Holmen, thereby breaching data protection rules under the General Data Protection Regulation (GDPR). ChatGPT's Defamatory Hallucinations The complaint against OpenAI describes Holmen as "a regular person" who is not in the public eye. Yet when ChatGPT was prompted about him, it allegedly responded that he "was accused and later convicted of murdering his two sons." "The complainant was deeply troubled by these outputs," the lawsuit states, adding that they "could have a harmful effect in his private life if they were reproduced or somehow leaked in his community or in his hometown." However, the latest legal challenge rests on an aspect of European data protection law that has received less attention from privacy campaigners. Under Article 5 of the GDPR, "every reasonable step must be taken" to ensure that personal data is inaccurate. Moreover, Article 16 states that data subjects have a "right to rectification" of inaccurate personal data. As things stand, however, there is no precedent for applying the regulation to AI outputs. A Precedent-Setting Case To date, little established case law addresses the matters at the heart of NOYB's complaint. OpenAI certainly processes a lot of data. However, in Holmen's case, "processing" happens in a legal gray area outside of the typical relationship between data subjects and processors. If NOYB wins the case, it will set an important precedent that AI developers are not only responsible for correctly handling training data but may also be liable for their models' outputs, including hallucinations.
[13]
ChatGPT Under Fire for Spreading Defamatory Personal Data
OpenAI's generative AI chatbot ChatGPT is facing yet another complaint in Europe regarding its propensity to hallucinate and throw up erroneous defamatory information to users, as per a TechCrunch report. Arve Hjalmar Holmen, an individual in Norway, found out ChatGPT provided made-up facts about him, claiming that he murdered two of his children, and attempted to kill the third. He enlisted the help of privacy rights advocacy group Noyb to wage a legal battle that could potentially clip ChatGPT's wings, at least in Europe.

ChatGPT vs EU's Data Protection Law

ChatGPT does not let people correct erroneous information about them, usually offering to block responses to such prompts. However, under the European Union's (EU) General Data Protection Regulation (GDPR), Europeans have a host of data access rights, including the right to rectification of personal data. Another facet of this law instructs data controllers to ensure that the personal data they produce about individuals is accurate, and this is what Noyb has highlighted with its latest complaint against ChatGPT. "The GDPR is clear. Personal data has to be accurate," Joakim Söderberg, data protection lawyer at Noyb, said. "If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and, in the end, add a small disclaimer saying that everything you said may just not be true," he added. Confirmed GDPR breaches can result in the guilty party coughing up as much as 4% of their global annual turnover. Furthermore, enforcement can also force changes to AI products.

Why It Matters

A Noyb spokesperson said that they were unable to determine why ChatGPT returned such a specific, while at the same time egregiously false, history for an individual. "We did research to make sure that this wasn't just a mix-up with another person," they remarked. They added that they even scanned newspaper archives, but the reason ChatGPT manufactured a lie centred around child slaying was still unclear. Notably, AI chatbots can also throw up false judgments erroneously cited by courts of law in their orders, in addition to messing it up when it comes to providing personal information. For context, in India, a tax tribunal order that came out in December 2024 wrongly cited four judgments, including three false Supreme Court judgments.
[14]
ChatGPT Under Fire: AI Model Accused of Spreading Defamatory 'Hallucinations'
OpenAI faces GDPR heat as ChatGPT's hallucinations spark privacy concerns

OpenAI finds itself back in the regulatory crosshairs as European privacy regulators target ChatGPT's habit of making up personal details. Digital rights organisation Noyb has submitted a new complaint in Norway after the AI made a false statement that a man, Arve Hjalmar Holmen, had been convicted of killing two of his children. The incident highlights increasing concerns about AI disinformation and adherence to the EU's stringent General Data Protection Regulation (GDPR).
[15]
Criminal AI error
OSLO (AFP) - Norwegian Arve Hjalmar Holmen got the shock of his life when he looked himself up on ChatGPT in an idle moment. The AI chatbot replied that he was a dastardly criminal who had murdered two of his children and attempted to kill his third son. "To make matters worse, the fake story included real elements of his personal life," the privacy watchdog Noyb ("None of Your Business") said. "The fact that someone could read this output and believe it is true, is what scares me the most," said Hjalmar Holmen. But Noyb said the really chilling thing is that "ChatGPT regularly gives false information about people without offering any way to correct it."
A Norwegian man files a GDPR complaint after ChatGPT falsely accused him of murdering his children, raising concerns about AI hallucinations and data privacy.
In a shocking incident that highlights the potential dangers of AI hallucinations, Arve Hjalmar Holmen, a Norwegian man, discovered that OpenAI's ChatGPT had falsely accused him of murdering two of his children and attempting to kill a third [1]. This alarming fabrication has led to a formal complaint being filed with the Norwegian data protection authority, Datatilsynet, by the European Union digital rights group Noyb (None of Your Business) [2].
When Holmen decided to search for information about himself using ChatGPT, he was horrified to find that the AI chatbot claimed he had been sentenced to 21 years in prison for the murder of two of his children and the attempted murder of his third son [1]. What made the situation even more disturbing was that ChatGPT mixed this false information with accurate personal details, such as the number and gender of Holmen's children and the name of his hometown [3].
Holmen expressed his fear about the potential impact of such false information: "The fact that someone could read this output and believe it is true is what scares me the most" [5]. This incident raises serious concerns about the reliability of AI-generated information and its potential to cause reputational damage.
Noyb argues that OpenAI's handling of this situation violates the "data accuracy" requirements of the General Data Protection Regulation (GDPR) [1]. The privacy advocacy group contends that OpenAI's inability to correct the false information, instead only offering to block it, is insufficient under GDPR guidelines [4].
Joakim Söderberg, a data protection lawyer at Noyb, stated, "The GDPR is clear. Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth" [3]. The complaint seeks an order requiring OpenAI to delete the defamatory output, fine-tune its model to eliminate inaccurate results, and potentially face an administrative fine [1].
While OpenAI has reportedly updated ChatGPT to include web searches when responding to queries about individuals, potentially mitigating such false outputs in the future, Noyb argues that the inaccurate information likely still exists within ChatGPT's internal data [2]. This raises questions about the persistence of false information in AI systems and the challenges of completely eradicating such data.
This incident is not isolated, as similar cases of ChatGPT generating false and potentially defamatory information have been reported, including an Australian mayor falsely accused of bribery and a law professor linked to a non-existent sexual harassment scandal [1][4].
The case highlights the ongoing challenges faced by AI companies in balancing innovation with privacy and accuracy concerns. It also underscores the need for robust regulatory frameworks to govern AI technologies and protect individuals from potential harm caused by AI hallucinations [3].
As AI continues to evolve and integrate into various aspects of society, incidents like this serve as a stark reminder of the importance of developing responsible AI systems that prioritize accuracy, transparency, and user rights [4]. The outcome of this complaint could have significant implications for how AI companies handle personal data and respond to inaccuracies generated by their systems in the future.