Curated by THEOUTPOST
On Sat, 14 Dec, 12:06 AM UTC
24 Sources
[1]
Apple urged to stop AI headline summaries after false claims
Press freedom advocates are urging Apple to ditch an "immature" generative AI system that incorrectly summarized a BBC news notification, falsely claiming that suspected UnitedHealthcare CEO shooter Luigi Mangione had killed himself. Reporters Without Borders (RSF) said this week that Apple's AI kerfuffle, which generated the false summary "Luigi Mangione shoots himself," is further evidence that artificial intelligence cannot reliably produce information for the public. Apple Intelligence, which launched in the UK on December 11, needed less than 48 hours to make the very public mistake. "This accident highlights the inability of AI systems to systematically publish quality information, even when it is based on journalistic sources," RSF said. "The probabilistic way in which AI systems operate automatically disqualifies them as a reliable technology for news media that can be used in solutions aimed at the general public." Because it isn't reliably accurate, RSF said AI shouldn't be permitted for such uses, and asked Apple to pull the feature from its operating systems. "Facts can't be decided by a roll of the dice," said Vincent Berthier, head of RSF's tech and journalism desk. "RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet's credibility and a danger to the public's right to reliable information on current affairs," Berthier added. It's unknown if or how Apple plans to address the issue. The BBC has filed its own complaint, but Apple declined to comment to the British broadcaster publicly on the matter. According to the BBC, this doesn't even appear to be the first time Apple's AI summaries have falsely reported news. The Beeb pointed to an Apple AI summary from November shared by a ProPublica reporter that attributed news of Israeli prime minister Benjamin Netanyahu's arrest (which hasn't happened) to the New York Times, suggesting Apple Intelligence might be a serial misreader of the daily headlines. Google's AI search results have also been tricked into surfacing scam links, and have urged users to glue cheese to pizza and eat rocks. Berthier stated, "The European AI Act - despite being the most advanced legislation in the world in this area - did not classify information-generating AIs as high-risk systems, leaving a critical legal vacuum. This gap must be filled immediately." The Register has reached out to Apple to learn what it might do to address the problem of its AI jumping to conclusions about the news, and to RSF to see if it has heard from Apple, but we haven't heard back from either. ®
[2]
Apple faces criticism after shockingly bad Apple Intelligence headline errors
Apple is under fire after a recent text notification, attributed to BBC News, falsely claimed that Luigi Mangione, the accused in the murder of a prominent healthcare insurance CEO in New York, had shot himself. The shocking and false headline was generated using Apple Intelligence, which uses AI to summarize news notifications. In reality, the event did not occur, yet soon after the summary was delivered, social media was already buzzing, spreading the false news rapidly. When it was confirmed that the AI-generated summary mistakenly issued details of the high-profile murder case, it sparked concern over the accuracy of Apple's news summary feature. The BBC has formally complained to Apple, requesting corrective measures to prevent such errors from recurring, further underscoring the importance of accountability. The media outlet's site states its editorial values: "The trust that our audience has in all our content underpins everything that we do. We are independent, impartial and honest. We are committed to achieving the highest standards of accuracy and impartiality and strive to avoid knowingly or materially misleading our audiences." Media outlets invest heavily in maintaining their credibility, and errors made by third-party platforms threaten to erode that trust. Because misinformation can spread rapidly online, it is critical that automated news notifications be accurate. Apple has yet to respond publicly to the BBC's complaint. However, this incident is not the first time Apple Intelligence has faced criticism for spreading misinformation through its AI-powered summaries. On November 21, a notification attributed to the New York Times inaccurately suggested that Israeli Prime Minister Benjamin Netanyahu had been arrested. The actual story concerned the International Criminal Court issuing an arrest warrant for Netanyahu, but the AI summary significantly distorted the facts. The New York Times has chosen not to comment on the incident. Reporters Without Borders (RSF) is calling for a ban on the generative AI feature altogether. The organization said in a statement on its site that it is concerned about the risks AI tools pose with false news alerts, and that it believes the technology is still too new to be used in reporting the news. Vincent Berthier, head of RSF's technology and journalism desk, said on the site, "AIs are probability machines, and facts can't be decided by a roll of the dice. RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet's credibility and a danger to the public's right to reliable information on current affairs. The European AI Act -- despite being the most advanced legislation in the world in this area -- did not classify information-generating AIs as high-risk systems, leaving a critical legal vacuum. This gap must be filled immediately." The issues with Apple Intelligence have raised broader concerns about the reliability of artificial intelligence in handling sensitive information. AI-driven tools, while designed to streamline and enhance user experiences, often struggle with context and nuance -- key elements in accurate reporting. When trusted news sources are misrepresented through these errors, the potential for public misunderstanding grows exponentially. The problem also highlights the broader implications of integrating AI into news delivery.
As technology companies like Apple continue to adopt AI for content curation, there is growing pressure to ensure these systems are adequately tested and monitored. News organizations, for their part, are beginning to push back against errors that could damage their reputations. This incident serves as a warning about the risks of relying too heavily on artificial intelligence for content delivery, raising the question: Does the risk of misinformation outweigh the convenience of automated news summaries? While AI holds promise to improve efficiency and accessibility, its limitations highlight the enduring need for human oversight in journalism. As Apple faces mounting pressure to address the flaws in Apple Intelligence, the debate over the role of AI in news media is likely to intensify.
[3]
Journalism group urges Apple to disable AI summaries after fake headline incident
Serving tech enthusiasts for over 25 years. TechSpot means tech analysis and advice you can trust. What just happened? Just days after Apple's AI-powered notification summary tool pushed out a false BBC headline about Luigi Mangione, a major press freedom body is urging the company to remove the feature completely. It marks the latest setback in Apple's attempts to convince its customers that its AI is worth using. On December 13, Apple Intelligence, which has a history of making significant errors when it comes to summarizing notifications, pushed out a summary of several BBC headlines that included the claim Mangione had shot himself. The incident happened just 48 hours after Apple Intelligence launched in the UK. Mangione, who has been arrested and charged with the first-degree murder of UnitedHealthcare CEO Brian Thompson in New York, has not shot himself. He remains in custody at Huntingdon State Correctional Institution in Huntingdon County, Pennsylvania. It certainly isn't the first time Apple Intelligence has gotten a summary notification wrong. It previously claimed that Israeli prime minister Benjamin Netanyahu had been arrested after the International Criminal Court issued an arrest warrant. Reporters Without Borders (RSF), an international non-profit NGO that focuses on safeguarding the right to freedom of information, is calling on Apple to disable the notification summary feature. The RSF writes that the incident illustrates how generative AI services are still too immature to produce reliable information for the public, and should not be allowed on the market for such uses. The group added that the probabilistic way in which AI systems operate automatically disqualifies them as a reliable technology for news media. "AIs are probability machines, and facts can't be decided by a roll of the dice. RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet's credibility and a danger to the public's right to reliable information on current affairs," said Vincent Berthier, Head of RSF's Technology and Journalism Desk. "The European AI Act - despite being the most advanced legislation in the world in this area - did not classify information-generating AIs as high-risk systems, leaving a critical legal vacuum. This gap must be filled immediately." The BBC contacted Apple when it learned of the false headline to raise concerns and ask the company to fix the problem. The summary notification in question showed three headlines: the fake one about Mangione, and two correct headlines, about the overthrow of Bashar al-Assad's regime in Syria and an update on South Korean President Yoon Suk Yeol. Apple says its AI's summarization ability allows users to scan long or stacked notifications with key details right on the Lock Screen, such as when a group chat is particularly active. It often gets the summaries wrong or fails to understand their context, sometimes in hilarious fashion. Spewing out false news headlines is always going to get Apple in trouble, especially at a time when companies are trying to convince people that AI is the future of pretty much everything. Apple has yet to respond to the incident, but don't expect it to permanently remove the feature. At most, Cupertino might disable it for a while.
[4]
An Apple AI Blunder Messed Up Headline Summaries So Badly Some Want the Feature Pulled
Vincent Berthier, the technology and journalism desk chief for journalism advocacy group Reporters Without Borders, called for Apple to "act responsibly by removing this feature," CNN reported. Berthier summarized the issue in a neat soundbite: "A.I.s are probability machines, and facts can't be decided by a roll of the dice." More seriously, Reporters Without Borders highlighted that it is "very concerned" about the risk AI tech poses to news reporting, alleging that the tech is "too immature" to be relied on to convey correct information to the general public. The BBC itself contacted Apple, and in a statement said that it was "essential" to the august news body that its "audiences can trust any information or journalism published in our name and that includes notifications." Apple has reportedly not responded when asked for comment. The news summarizing tech aligns with many of the time- and effort-saving promises of current-gen AI tech. As CNN notes, Apple has promoted its ability to streamline specific content into "a digestible paragraph, bulleted key points, a table, or a list," and lets users group news notifications into a single push notification. The big tech giant is going all-in on AI technology, having launched a splashy effort under the "Apple Intelligence" brand in the summer and released a clutch of new iPhone models centered around AI systems in September. Cautious about the risks that AI tech embodies, and the fact that some AI systems use users' data to train their algorithms in ways that may leak sensitive information, Apple is making privacy a key part of its AI push.
[5]
Apple Intelligence caused chaos with a false murder report
According to a BBC report, Apple's newly released AI platform, Apple Intelligence, faced backlash after incorrectly announcing the death of Luigi Mangione, a murder suspect. The notification, sent via iPhone last week, summarized a BBC report incorrectly, leading to criticism from both users and media organizations. This incident raises significant questions about the reliability and accuracy of AI-generated information. Apple has integrated Apple Intelligence into its operating systems, including iOS 18 and macOS Sequoia. Among its features, the platform offers generative AI tools designed for writing and image creation. Additionally, it provides a function to categorize and summarize notifications, aimed at reducing user distractions throughout the day. However, the recent false notification highlights potential inaccuracies and shortcomings of this new feature, leaving users perplexed and concerned about the integrity of information being shared. In a notification dated December 13, 2024, iPhone users received a message stating, "Luigi Mangione shoots himself," alongside two other breaking news summaries. This erroneous notification quickly drew attention, particularly because it misreported a crucial detail regarding Mangione, who is accused of killing UnitedHealthcare CEO Brian Thompson on December 4. The BBC, which had not published any information about Mangione allegedly shooting himself, lodged a complaint with Apple. The network has since called for Apple to reconsider its generative AI tool. The underlying issue appears to stem from the large language models (LLMs) utilized by Apple Intelligence. According to The Street, Komninos Chatzipapas, director at Orion AI Solutions, said: "LLMs like GPT-4o... don't really have any inherent understanding of what's true and what's not." These models statistically predict the next words based on vast datasets, yet this method can result in reliable-sounding content that misrepresents facts. In this case, Chatzipapas speculated that Apple may have inadvertently trained its summarization model on similar examples where individuals shot themselves, yielding the misleading headline. The implications of this incident extend beyond Apple's internal practices. Reporters Without Borders has urged the company to remove the summarization feature, stressing the severity of disseminating inaccurate information tied to reputable media outlets. Vincent Berthier from Reporters Without Borders articulated concerns about AI-generated misinformation damaging the credibility of news sources. He stated, "A.I.s are probability machines, and facts can't be decided by a roll of the dice." This reinforces the argument that AI models currently lack the maturity necessary to ensure reliable news dissemination. This incident is not isolated. Since Apple Intelligence launched in the U.S. in October, users have reported further inaccuracies, including a notification that inaccurately stated Israeli Prime Minister Benjamin Netanyahu had been arrested. Although the International Criminal Court issued an arrest warrant for Netanyahu, the notification omitted crucial context, only displaying the phrase "Netanyahu arrested." The controversy surrounding Apple Intelligence reveals challenges related to the autonomy of news publishers in the age of AI.
While some media organizations actively employ AI tools for writing and reporting, users of Apple's feature receive summaries that may misrepresent facts, all while appearing under the publisher's name. This principle could have broader implications for how information is reported and perceived in the digital age. Apple has not yet responded to inquiries regarding its review process or any steps it intends to take in relation to the BBC's concerns. The current landscape of AI technology, shaped significantly by the introduction of platforms like ChatGPT, has spurred rapid innovation, yet it also creates a fertile ground for inaccuracies that can mislead the public. As investigations continue into AI-generated content and its implications, the significance of ensuring reliability and accuracy becomes increasingly paramount.
[6]
Apple Faces Criticism Over AI-Generated News Headline Summaries
Apple is facing calls to remove its AI-powered notification summaries feature after it generated false headlines about a high-profile murder case, drawing criticism from a major journalism organization. Reporters Without Borders (RSF) has urged Apple to disable the Apple Intelligence notification feature, which rolled out globally last week as part of its iOS 18.2 software update. The request comes after the feature created a misleading headline suggesting that murder suspect Luigi Mangione had shot himself, incorrectly attributing the false information to BBC News. Mangione in fact remains under maximum security at Huntingdon State Correctional Institution in Huntingdon County, Pennsylvania, after having been charged with first-degree murder in the killing of healthcare insurance CEO Brian Thompson in New York. The BBC has confirmed that it filed a complaint with Apple regarding the headline incident. RSF has since argued that summaries of this type prove that "generative AI services are still too immature to produce reliable information for the public." Vincent Berthier, head of RSF's technology and journalism desk, said that "AIs are probability machines, and facts can't be decided by a roll of the dice." He called the automated production of false information "a danger to the public's right to reliable information." This isn't an isolated incident, either. The New York Times reportedly experienced a similar issue when Apple Intelligence incorrectly summarized an article about Israeli Prime Minister Benjamin Netanyahu, creating a notification claiming he had been arrested when the original article discussed an arrest warrant from the International Criminal Court. Apple's AI feature aims to reduce notification overload by condensing alerts into brief summaries, and is currently available on iPhone 15 Pro, iPhone 16 models, and select iPads and Macs running the latest operating system versions. The summarization feature is enabled by default, but users can manually disable it through their device settings. Apple has not yet commented on the controversy or indicated whether it plans to modify or remove the feature.
[7]
Apple Intelligence summary feature should be banned after Luigi Mangione error, says RSF
The Apple Intelligence summary feature should be banned after it falsely claimed that Luigi Mangione had shot himself, says Reporters Sans Frontières (RSF). The non-profit body advises the United Nations, Council of Europe, and other governmental agencies on issues relating to journalism and freedom of information ... The controversy began after the summary feature claimed the suspect in the killing of United Health CEO Brian Thompson had shot himself. "BBC News is the most trusted news media in the world," a BBC spokesperson said in a statement. "It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications." The BBC says it has contacted Apple "to raise this concern and fix this problem." Apple has still not commented on the problem. RSF has now issued a somewhat vague statement in which it could be calling for the outlawing of anything from the Apple Intelligence summary feature to the entirety of generative AI: "Reporters Without Borders (RSF) is very concerned about the risks posed to media outlets by new artificial intelligence (AI) tools after a new Apple product generated a false news alert and attributed it to the BBC. This accident illustrates that generative AI services are still too immature to produce reliable information for the public, and should not be allowed on the market for such uses." RSF technology lead Vincent Berthier got slightly more specific in calling for Apple to act: "AIs are probability machines, and facts can't be decided by a roll of the dice. RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet's credibility and a danger to the public's right to reliable information on current affairs." The BBC reports that Apple has not yet responded to its own complaint.
[8]
Apple urged to scrap AI feature after it creates false headline
A major journalism body has urged Apple to scrap its new generative AI feature after it created a misleading headline about a high-profile killing in the United States. The BBC made a complaint to the US tech giant after Apple Intelligence, which uses artificial intelligence (AI) to summarise and group together notifications, falsely created a headline about murder suspect Luigi Mangione. The AI-powered summary falsely made it appear that BBC News had published an article claiming Mangione, the man accused of the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not. Now, the group Reporters Without Borders has called on Apple to remove the technology. Apple has made no comment.
[9]
Apple faces BBC complaint after its AI falsely claims Luigi Mangione shot himself
Facepalm: Not for the first time, we've seen another example of why letting generative AI take over every aspect of our lives would be a mistake. On this occasion, Apple Intelligence, which has a history of making significant errors, created a false headline. It claimed Luigi Mangione, the man arrested over the killing of UnitedHealthcare CEO Brian Thompson, had shot himself. The error led to the BBC contacting Apple, requesting that the issue be corrected. One of the features of Apple Intelligence is its ability to summarize notifications, which was first introduced in iOS 18.1. Cupertino says the summaries allow users to scan long or stacked notifications with key details right on the Lock Screen, such as when a group chat is particularly active. News headlines also appear in these condensed notifications. Unfortunately for users, they don't always show up correctly. The summary notification in question, published by BBC News, shows three headlines. The ones about the overthrow of Bashar al-Assad's regime in Syria and an update on South Korean President Yoon Suk Yeol are accurate. The part about Mangione shooting himself is not - he remains in police custody. The BBC complained to Apple about the factually incorrect headline, requesting that the company fix it. "BBC News is the most trusted news media in the world," a BBC spokesperson said. "It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications." This is far from the first time that Apple's notifications have gotten it wrong. In November, it summarized a report about the International Criminal Court issuing an arrest warrant for Israeli prime minister Benjamin Netanyahu with the headline "Netanyahu arrested," which he wasn't. Former Twitter platform X has been awash with stories of Apple Intelligence getting it wrong recently. From the AI summarizing a hike "that nearly killed me" as an "attempted suicide," to the Sun expected to strike Earth later in the week (causing delays to Amtrak customers south of Baltimore), generative AI technologies still require human oversight in many instances. In addition to getting these sorts of things completely wrong, generative AI also has little appreciation for context. In October, a New York-based developer received a summary notification from his partner (on his birthday) that read: "No longer in a relationship; wants belongings from the apartment."
[10]
Apple AI Tells Users Luigi Mangione Has Shot Himself
"I am surprised that Apple put their name on such demonstrably half-baked product." Apple's generative AI should be making headlines. Instead, it's making them up. Just days after its launch in the UK, the tech company's Apple Intelligence model blurted out a totally fabricated headline about Luigi Mangione, the 26-year-old man who's been arrested for the murder of UnitedHealthcare CEO Brian Thompson earlier this month. As the BBC reports, the Apple AI feature incorrectly summarized the BBC's reporting to make it sound like the suspect had attempted suicide in an alert sent to iPhone users. "Luigi Mangione shoots himself," reads the AI's bogus BBC notification. It's yet another high-profile example of AI incorrectly reporting current events -- again raising serious questions about the technology's role as a mediator of information. A spokesperson from the BBC said the broadcaster has complained to Apple "to raise this concern and fix the problem." Apple has declined to comment publicly on the matter. "BBC News is the most trusted news media in the world," the BBC spokesperson said. "It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications." And this was no fluke. The report also identifies another fib by Apple Intelligence in its three-item news notifications. When summarizing a report from The New York Times last month about the International Criminal Court issuing an arrest warrant for Israeli prime minister Benjamin Netanyahu, the AI sent out a headline claiming: "Netanyahu arrested." Apple Intelligence, which debuted domestically in October, was finally released in the UK last Wednesday. It's safe to say that its news feature couldn't have gotten off to a worse start. Mangione is one of the most talked-about men on the planet right now. Anything he does is newsworthy. "I can see the pressure getting to the market first, but I am surprised that Apple put their name on such demonstrably half-baked product," Petros Iosifidis, a professor in media policy at City University in London, told the BBC. "Yes, potential advantages are there -- but the technology is not there yet and there is a real danger of spreading disinformation." However, this danger is one that's fundamental of generative AI, and not just Apple's flavor of it. AI models routinely hallucinate and make up facts. They have no understanding of language, but instead use statistical predictions to generate cogent-sounding text based on the human writing they've ingested. This introduces another confounding factor into reporting the news. Human journalists already make subjective decisions in how events are described. Then another decision must be made to decide how those events are to be further condensed into a concise headline. Now, tech companies want to interpose themselves into this process with a technology that only approximates the correct thing to say -- and we're already seeing the dumb consequences of it.
[11]
Reporters Without Borders calls for halt to Apple Intelligence news feature
A false Luigi Mangione news alert generated by the new Apple Intelligence feature launched in the UK has raised concerns about the tool. Reporters Without Borders (RSF) has called for a suspension of GenAI services like Apple Intelligence following a false Luigi Mangione news alert. Describing itself as "very concerned about the risks posed to media outlets" from new artificial intelligence (AI) tools like Apple Intelligence, RSF says the incident is a clear illustration that generative AI tools are still "too immature" to produce reliable information for the public, and should not be allowed on the market for this purpose. "AIs are probability machines, and facts can't be decided by a roll of the dice," said Vincent Berthier, head of RSF's technology and journalism desk. "RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet's credibility and a danger to the public's right to reliable information on current affairs." Berthier added that, despite being the most advanced legislation in the world in this area, the EU AI Act failed to classify information-generating AIs as high-risk systems, leaving a critical legal vacuum. "This gap must be filled immediately," he warned. Apple Intelligence launched in the UK on 11 December, and the BBC announced a complaint to Apple a few days later after a news summary generated by the new AI feature falsely announced the suicide of Luigi Mangione, the main suspect in the murder of the CEO of UnitedHealthcare. As errors go, this was a pretty big one and, according to RSF, it highlights the inability of AI systems to "systematically publish quality information, even when it is based on journalistic sources". "The probabilistic way in which AI systems operate automatically disqualifies them as a reliable technology for news media that can be used in solutions aimed at the general public," the RSF said in a statement. In 2023, given the risks to media caused by AI in the information space, the RSF launched the Paris Charter Initiative, which sets out 10 essential principles to guarantee the integrity of information and preserve journalism's role as a public service. "Rights holders must make the re-use of their content conditional on respect for the integrity of the information and the fundamental principles of journalistic ethics," the charter states.
[12]
BBC complains to Apple after its AI generates misleading headline about Luigi Mangione
Apple has landed in hot water after its new generative AI software produced a factually incorrect headline that was discovered by the BBC. The BBC recently published a report stating it officially complained to Apple after an Apple Intelligence-generated headline of a BBC news story that was pushed out to iPhones last week contained misinformation. The headline stated Luigi Mangione, the man who was arrested for the murder of UnitedHealthcare CEO Brian Thompson, shot himself. The AI-powered summary was discovered by the BBC, which contacted Apple to raise the concern and "fix the problem". Notably, Apple's AI that generated the summary was otherwise correct in the other information it provided, such as South Korean President Yoon Suk Yeol having his office raided by police, and the overthrow of Bashar al-Assad's regime in Syria. However, the BBC isn't the only publication to have suffered from Apple's AI spitting out headlines that are factually incorrect, as three articles on different topics from the New York Times were distributed in one notification, and one of those headlines stated the Israeli prime minister, Benjamin Netanyahu, was "arrested". The AI summarized a newspaper report stating that the International Criminal Court issued an arrest warrant for Netanyahu, not that he had been arrested. Apple has touted that the new AI feature is designed to reduce the time spent interacting with notifications, giving a user more time to interact with notifications that are meaningful. Notably, these AI summaries are only being pushed out to iPhones using the iOS 18.1 system update or later, which is available on all iPhone 16 devices, as well as the 15 Pro and 15 Pro Max.
[13]
Apple Intelligence Generates False BBC Headline About UnitedHealthcare Shooter
The BBC has launched a complaint after Apple Intelligence generated a false headline about the death of the alleged shooter of UnitedHealthcare's CEO in New York last week. An AI summary generated by Apple's tool claimed the suspect, Luigi Mangione, had shot himself, despite Mangione being safe in police custody. Apple has yet to give a formal statement on the incident. "It is essential to us that our audiences can trust any information or journalism published in our name, and that includes notifications," said a BBC spokesperson. The BBC said that Apple was contacted to "raise this concern and fix the problem." The news follows a ProPublica journalist's report that Apple's AI tool had summarized several articles from The New York Times about Israeli Prime Minister Benjamin Netanyahu into one false headline claiming he had "been arrested," when no such arrest had occurred. Apple Intelligence became available on eligible devices with iOS 18.1 in late October, offering users the option to get AI-generated summaries of trending news stories. The tool launched earlier this week for users in the United Kingdom. This isn't the first time an AI tool offered by one of tech's largest platforms has come under fire for producing false AI summaries of major news stories. Twitter's built-in AI chatbot, Grok, came under scrutiny in April after falsely claiming that Indian Prime Minister Narendra Modi lost the election before the election had even happened. Google's AI Overviews tool has also been criticized for producing false answers, including suggesting users add glue to their pizza to help it stick better. Professor Petros Iosifidis, a media policy expert at City, University of London, told the BBC that he was surprised Apple "put their name on such a demonstrably half-baked product." He added: "Yes, potential advantages are there, but the technology is not there yet, and there is a real danger of spreading disinformation." Apple has previously discussed the dangers of AI hallucinations publicly. In an interview with The Washington Post in June 2024, Apple CEO Tim Cook warned that Apple Intelligence might produce false results or inaccuracies. "It's not 100%," Cook said in June. "But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we're using it in."
[14]
Apple's AI Disastrously Rewrote a BBC Headline to Say Luigi Mangione Shot Himself
Luigi Mangione did not shoot himself, but a BBC headline rewritten by Apple iOS would make you believe so. Apple has only just begun rolling out a much-hyped suite of AI features for its devices, and we are already seeing major problems. Case in point, the BBC has complained to Apple after an AI-powered notification summary rewrote a BBC headline to say the UHC CEO's alleged killer Luigi Mangione had shot himself. Mangione did not shoot himself and remains in police custody. Apple Intelligence includes a feature on iOS that tries to relieve users of fatigue by bundling and summarizing notifications coming in from individual apps. For instance, if a user receives multiple text messages from one person, instead of displaying them all in a long list, iOS will now try and summarize the push alerts into one concise notification. It turns out - and this should not surprise anyone familiar with generative AI - that the "intelligence" in Apple Intelligence belies the fact that the summaries are sometimes unfortunate or just plain wrong. Notification summaries were first introduced to iOS in version 18.1, which was released back in October; earlier this week, Apple added native integration with ChatGPT in Siri. In an article, the BBC shared a screenshot of a notification summarizing three different stories that had been sent as push alerts. The notification reads: "Luigi Mangione shoots himself; Syrian mother hopes Assad pays the price; South Korea police raid Yoon Suk Yeol's office." The other summaries were correct, the BBC says. The BBC has complained to Apple about the situation, which is embarrassing for the tech company but also risks damaging the reputation of news media if readers believe they are sending out misinformation. News outlets have no control over how iOS decides to summarize their push alerts. "BBC News is the most trusted news media in the world," a BBC spokesperson said for the story. "It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications." Apple declined to respond to the BBC's questions about the snafu. Artificial intelligence has a lot of potential in many areas, but language models are perhaps one of the worst implementations. Still, there is a lot of corporate hope that the technology will become good enough that enterprises could rely on it for uses like customer support chat or searching through large collections of internal data. It's not there yet - in fact, enterprises using AI have said they still have to do lots of editing of the work it produces. It feels somewhat uncharacteristic of Apple to deeply integrate such unreliable and unpredictable technology into its products. Apple has no control over ChatGPT's outputs - the chatbot's creator OpenAI can barely control the language models, and their behavior is constantly being tweaked. Summaries of short notifications should be the easiest thing for AI to do well, and Apple is flubbing even that. At the very least, some of Apple Intelligence's features demonstrate how AI could potentially have practical uses. Better photo editing and a focus mode that understands which notifications should be sent through are nice. But for a company associated with polished experiences, wrong notification summaries and a hallucinating ChatGPT could make iOS feel unpolished. It feels like Apple is rushing on the hype train in order to juice new iPhone sales - an iPhone 15 Pro or newer is required to use the features.
[15]
Apple Intelligence summary botches BBC News headline
Meanwhile, some iPhone users are apathetic about the introduction of AI features Things are not entirely going to plan for Apple's generative AI system, after the recently introduced service attracted the ire of the British Broadcasting Corporation. Apple Intelligence generated a headline of a BBC news story that popped up on iPhones late last week, claiming that Luigi Mangione, a man arrested over the murder of healthcare insurance CEO Brian Thompson, had shot himself. This summary was not true and sparked a complaint from the UK's national broadcaster. AI-generated content is prone to inaccuracies, and providers like Microsoft and OpenAI typically include disclaimers. Introducing a summary into a user's news feed without making it clear there is a chance it could be wrong is bad, but worse is attributing the inaccuracy elsewhere. A source at the BBC, who spoke to The Register on condition of anonymity, admitted the corporation had made its fair share of errors over the years, but said: "This one caused some jitters and has fed into a mood that AI-generated products can be a bad fit for news especially. Our Head of News is big on verify and truth, etc so [the] BBC will really want to make a fuss when this happens so everyone knows it's wrong and not our fault." The mistake comes as smartphone users show apathy toward AI services being foisted onto their devices. In a recent survey of 2,000 smartphone users (of which more than 1,000 had an iPhone capable of running Apple Intelligence), 73 percent of iPhone users said AI features added little or no value. A little more than one in ten believed AI features were "very valuable." More than half (54 percent) of iPhone users had used Apple Intelligence to generate notification summaries. Almost three-quarters (72 percent) had used the service's Writing Tools for tasks such as proofreading and summarizing. For context, it seems some Samsung users are even more blasé about AI. Eighty-seven percent said AI features added little or no value, despite the tech giant pumping them into devices. Apple Intelligence was launched in the UK in the last week. However, those hoping the megacorp's late entry to AI would be a little more polished may be disappointed by high-profile missteps such as the BBC's complaint. Apple Intelligence appears equally prone to errors as other AI platforms. ®
[16]
Luigi Mangione fake news could have been easily avoided by Apple Intelligence
Apple Intelligence managed to create a piece of Luigi Mangione fake news last week, thanks to the notification summary feature. It somehow decided that the suspect in the killing of United Health CEO Brian Thompson had shot himself. The mistake, in itself, is not surprising: AI systems make this kind of error all the time. What is rather more surprising is that Apple allowed it to happen when it could have been easily avoided ... Today's generative AI systems can often deliver impressive results, but they of course aren't actually intelligent - and that has seen them making some pretty spectacular mistakes. Many of these are amusing. There was the McDonald's drive-through AI system which kept adding chicken nuggets to customer orders until it hit a total of 260; Google repeating a claim that geologists recommend eating one rock per day, and suggesting that we use glue to help cheese stick to pizza; and Microsoft recommending a food bank as a tourist destination. But there have been examples of dangerous AI advice. There was an AI-written book on mushroom foraging which recommended tasting mushrooms as a way to identify poisonous ones; mapping apps that directed people into wildfires; and the Boeing system which caused two airliners to crash, killing 346 people. The Apple Intelligence summary of a BBC News story was neither amusing nor dangerous, but it was embarrassing. Apple Intelligence, launched in the UK earlier this week, uses artificial intelligence (AI) to summarise and group together notifications. This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not. It wasn't the first time we've seen this - a previous Apple Intelligence notification summary claimed that Israeli prime minister Benjamin Netanyahu had been arrested, when the actual story was the ICC issuing a warrant for his arrest. It's impossible to avoid all these errors; it's simply in the nature of generative AI systems to make them. This is all the more true in the case of Apple's notifications summary of news headlines. Headlines are, by their very nature, very partial summaries of a story. Apple Intelligence is attempting to further condense a highly condensed version of a news story; a very brief summary of a very brief summary. It's not at all surprising that this sometimes goes badly wrong. While Apple can't prevent this in general, it could at least prevent it happening on particularly sensitive stories. It could trap keywords like killing, killed, shooter, shooting, death, and so on, and flag those for human review before they are used - a sketch of such a gate appears below. In this particular case, the error was simply embarrassing, but it's not at all hard to see how a mistake on a sensitive topic like this could lead to making a lot of people very angry. Imagine a summary which appears to blame the victims of a violent crime or disaster, for example. Of course, human review would be an additional task for the Apple News team, but Apple could get 24/7 dedicated checking for the cost of half a dozen employees working shifts. That seems a rather small expense on Apple's part to prevent what could be a major PR disaster for the still-fledgling feature.
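To make that proposal concrete, here is a minimal Python sketch of the kind of keyword gate described above. The keyword list, review queue, and function names are illustrative assumptions for this example only; nothing here reflects how Apple's pipeline actually works.

```python
import re

# Summaries touching a sensitive term are withheld until a human checks them.
# The pattern below is a deliberately small, illustrative list.
SENSITIVE = re.compile(
    r"\b(kill(?:ed|ing)?|shoot(?:er|ing|s)?|shot|death|dead|suicide)\b",
    re.IGNORECASE,
)

review_queue = []  # summaries held for a human editor to check

def gate_summary(summary):
    """Release a summary only if it avoids sensitive terms; else queue it."""
    if SENSITIVE.search(summary):
        review_queue.append(summary)
        return None  # withheld pending human review
    return summary

print(gate_summary("Luigi Mangione shoots himself"))  # None: held back
print(gate_summary("South Korea police raid Yoon Suk Yeol's office"))
```

Such a filter would not catch every damaging summary, but it shows how cheaply the most sensitive ones could be routed to the human shift workers the author envisages.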
[17]
Not everything is perfect with Apple: Its AI tool Apple Intelligence messes up big time, generates false alert that claimed Luigi Mangione shot himself
Tech giant Apple has come under scrutiny after its artificial intelligence (AI) service generated a misleading alert attributed to BBC News. The alert falsely claimed that Luigi Mangione, a suspect in the murder of UnitedHealthcare CEO Brian Thompson, had shot himself. The erroneous notification has sparked concerns about the reliability of AI in delivering accurate news summaries. BBC subscribers in the UK were shocked this week when they received a push notification stating, "Luigi Mangione shoots himself." This message, generated by Apple Intelligence, was false. Mangione, 26, has not harmed himself and is currently in custody in Pennsylvania, awaiting extradition to New York on charges of murder. The misleading alert was sent alongside two other news summaries, which were accurate. However, the erroneous notification has drawn significant attention due to its sensitive nature and the implications of disseminating incorrect information. Apple Intelligence, an AI-powered service launched in the UK earlier this week, uses machine learning to summarize and group news notifications for users. In this case, the technology misinterpreted the context of the original story, erroneously generating an alert that attributed a false act to Mangione. The BBC, whose name was linked to the false alert, quickly clarified that it had not published such information. A spokesperson from the broadcaster stated that they had contacted Apple to address the issue and ensure such mistakes do not recur. "We take our credibility very seriously," the BBC spokesperson emphasized. "It is essential that our audiences trust the information associated with our name." While Apple declined to comment directly on the incident for the BBC's report, the error has raised valid concerns about its potential to harm both Apple's credibility and that of reputable news organizations. This is not the first time AI-powered summaries have faltered. Past errors include AI misinterpreting casual phrases like "that hike almost killed me" as "attempted suicide" or misconstruing a Ring camera report as a home invasion. Such incidents highlight the limitations and risks of relying on artificial intelligence for nuanced content interpretation. Mangione, charged with the murder of UnitedHealthcare CEO Brian Thompson in New York - which authorities describe as a premeditated attack outside a Hilton hotel - has been the subject of intense media coverage. The high-profile nature of the case amplifies the fallout from the false alert. Disseminating incorrect information about such a case not only risks public misunderstanding but also undermines trust in both technology and journalism. The BBC, often hailed as the "most trusted news media in the world," expressed concern over how the error might impact its reputation.
[18]
BBC complains about incorrect Apple Intelligence notification summaries
The UK's BBC has complained about Apple's notification summarization feature in iOS 18 completely fabricating the gist of an article. Here's what happened, and why. The introduction of Apple Intelligence included summarization features, saving users time by offering key points of a document or a collection of notifications. On Friday, the summarization of notifications was a big problem for one major news outlet. The BBC has complained to Apple about how the summarization misinterprets news headlines and comes up with the wrong conclusion when producing summaries. A spokesperson said Apple was contacted to "raise this concern and fix the problem." In an example offered in its public complaint, a notification summarizing BBC News states "Luigi Mangione shoots himself," referring to the man arrested for the murder of UnitedHealthcare CEO Brian Thompson. Mangione, who is in custody, is very much alive. "It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications," said the spokesperson. Incorrect summarizations aren't just an issue for the BBC, as the New York Times has also fallen victim. In a Bluesky post about a November 21 summary, the notification claimed "Netanyahu arrested"; however, the story was really about the International Criminal Court issuing an arrest warrant for the Israeli prime minister. Apple declined to comment to the BBC. These instances of incorrect summaries are referred to as "hallucinations," which occur when an AI model produces responses that are not factual, even in the face of extremely clear source data, such as a news story. Hallucinations can be a big problem for AI services, especially in cases where consumers rely on getting a straightforward and simple answer to a query. It's also something that companies other than Apple have to deal with. For example, early versions of Google's Bard AI, now Gemini, somehow combined Malcolm Owen, the AppleInsider writer, with the dead singer of the same name from the band The Ruts. Hallucinations can happen in models for a variety of reasons, such as issues with the training data or the training process itself, or a misapplication of learned patterns to new data. The model may also lack enough context in its data and prompt to offer a fully correct response, or make an incorrect assumption about the source data. It is unknown what exactly is causing the headline summarization issues in this instance. The source article was clear about the shooter, and said nothing about an attack on the man. This is a problem that Apple CEO Tim Cook understood was a potential issue at the time of announcing Apple Intelligence. In June, he acknowledged that it would be "short of 100%," but that it would still be "very high quality." In August, it was revealed that Apple Intelligence had instructions specifically to counter hallucinations, including the phrases "Do not hallucinate. Do not make up factual information." It is also unclear whether Apple will want to, or be able to, do much about the hallucinations, given it chooses not to monitor what users are actively seeing on their devices. Apple Intelligence prioritizes on-device processing where possible, a security measure that also means Apple won't get back much feedback on actual summarization results.
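For illustration only, here is a hypothetical Python sketch of how instructions like those reported ones might be embedded in a summarization prompt. Only the two quoted sentences come from the reporting; the surrounding prompt text and the build_prompt helper are invented for this example and are not Apple's actual pipeline.

```python
# The instruction reportedly found in Apple Intelligence's prompt files.
REPORTED_INSTRUCTIONS = "Do not hallucinate. Do not make up factual information."

def build_prompt(notifications):
    """Assemble the text handed to the model: instructions, then alerts."""
    alerts = "\n".join(f"- {n}" for n in notifications)
    return (
        "Summarize each notification below in one short line.\n"
        f"{REPORTED_INSTRUCTIONS}\n\n"
        f"{alerts}"
    )

print(build_prompt([
    "South Korea police raid Yoon Suk Yeol's office.",
    "Syrian mother hopes Assad pays the price.",
]))
```

The key point is that such an instruction is just more input text: a probabilistic model can still sample a continuation that ignores it, which is why prompts like this can reduce, but not eliminate, hallucinated summaries.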
[19]
Apple Intelligence appears to have falsely claimed that Luigi Mangione shot himself
Apple Intelligence allegedly misled BBC News readers and BBC News isn't happy about it. In a story reported by BBC News itself, the outlet accused Apple's suite of AI features (which includes the ability to summarize news headlines in push notifications) of writing and sending out a blatantly false push notification to users. In this case, the push notification read that Luigi Mangione, recently arrested in connection with the shooting death of UnitedHealthcare executive Brian Thompson, had shot himself. That headline is false, and no such event has occurred at the time of publication. "Luigi Mangione shoots himself; Syrian mother hopes Assad pays the price; South Korea police raid Yoon Suk Yeol's office," the notification read in full. Apple Intelligence appears to have rounded up three separate news stories into one summary notification. Interestingly, only the Mangione one is incorrect; the others are accurate representations of the news stories they are referencing. BBC News has complained to Apple about this, but Apple has yet to comment on it. Apple Intelligence was introduced to iPhones and other Apple devices earlier this year, with the feature set being greatly expanded with the launch of iOS 18.2 earlier this week. If Apple is going to keep trying with AI, it might be prudent to clean up some of these issues before lawyers get involved. Mashable has reached out to Apple for comment and will update if we hear back.
[20]
BBC complains to Apple over misleading shooting headline
The BBC has not been able to independently verify the screenshot, and the New York Times did not provide comment to BBC News. Apple says one of the reasons people might like its AI-powered notification summaries is to help reduce the interruptions caused by ongoing notifications, and to allow the user to prioritise more important notices. It is only available on certain iPhones - those using the iOS 18.1 system version or later on recent devices (all iPhone 16 phones, the 15 Pro, and the 15 Pro Max). It is also available on some iPads and Macs. Prof Petros Iosifidis, a professor in media policy at City University in London, told BBC News the mistake by Apple "looks embarrassing". "I can see the pressure getting to the market first, but I am surprised that Apple put their name on such demonstrably half-baked product," he said. "Yes, potential advantages are there - but the technology is not there yet and there is a real danger of spreading disinformation." The grouped notifications are marked with a specific icon, and users can report any concerns they have about a notification summary on their devices. Apple has not outlined how many reports it has received. Apple Intelligence does not just summarise the articles of publishers, and it has been reported that the summaries of emails and text messages have occasionally not quite hit the mark. And this is not the first time a big tech company has discovered AI summaries do not always work. In May, in what Google described as "isolated examples", its AI Overviews tool for internet searches told some users looking for how to make cheese stick to pizza to consider using "non-toxic glue". The search engine's AI-generated responses also said geologists recommend humans eat one rock per day.
[21]
Apple's AI summary mangled a BBC headline about Luigi Mangione
We've already seen our fair share of bad Apple Intelligence-summarized notifications, but now that the feature is live in the UK, the BBC isn't finding it so funny. The summarized notification mucked up a BBC headline about the UnitedHealthcare shooting suspect, falsely suggesting the network reported that Luigi Mangione shot himself. In a report about the notification, a spokesperson for the network says it contacted Apple "to raise this concern and fix the problem." Only the first part of the summarized BBC news notification is incorrect, as it accurately references two other stories about Bashar Al-Assad and a raid on the president of South Korea's office. Other examples of the AI summaries missing the mark that we've seen have turned "that hike almost killed me" into "attempted suicide" or a Ring camera appearing to report that people are surrounding someone's home. If you're getting too many summaries on your iPhone that don't make sense, you can change the list of apps your iPhone summarizes with Apple Intelligence by going to Settings > Notifications > Summarize Notifications or even choose to turn off the feature entirely.
[22]
Apple AI displays fake news headline, prompts BBC complaint
Earlier this week, the iPhone maker launched the "Apple Intelligence" tool in the UK, which uses AI to summarize and group notifications on its mobile devices. A spokesman for the BBC said it had contacted Apple "to raise this concern and fix the problem." The technology company declined to comment on the matter. "BBC News is the most trusted news media in the world. It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications," a spokesperson for the corporation said. EFE
[23]
BBC says it has complained to Apple over AI-generated fake news attributed to broadcaster
Notifications from a new Apple product falsely suggested the BBC claimed the New York gunman Luigi Mangione had killed himself. The BBC says it has filed a complaint with the US tech giant Apple over AI-generated fake news that was shared on iPhones and attributed to the broadcaster. Apple Intelligence, which was launched in Britain this week, produces AI-generated grouped notifications drawn from several news sites. One of those suggested that the BBC News website had published an article claiming that Luigi Mangione, who was arrested in the US over the murder of a healthcare executive in New York, had committed suicide. "BBC News is the most trusted news media in the world. It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications," a BBC spokesperson said in a statement. "We have contacted Apple to raise this concern and fix the problem." The BBC reported that a similar incident had occurred in relation to notifications attributed to the New York Times, though that was unconfirmed by the US publisher.
[24]
BBC complains to Apple over fake news AI notification
LONDON (AFP) - The BBC on Friday said it had filed a complaint with US tech giant Apple over AI-generated fake news that was shared on iPhones and attributed to the British public broadcaster. Apple Intelligence, which was launched in Britain this week, produces AI-generated grouped notifications drawn from several news sites. One of those suggested that the BBC News website had published an article claiming that Luigi Mangione, who was arrested in the US over the murder of a healthcare executive in New York, had committed suicide. "BBC News is the most trusted news media in the world. It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications," a BBC spokesperson said in a statement. "We have contacted Apple to raise this concern and fix the problem."
Apple faces criticism after its AI-powered news summary feature, Apple Intelligence, generates false headlines, prompting calls for its removal and raising concerns about AI reliability in news reporting.
Apple's recently launched AI-powered feature, Apple Intelligence, has come under intense scrutiny after generating false news summaries, including a shocking headline about a murder suspect. The incident has sparked a heated debate about the reliability of AI in news reporting and the potential dangers of misinformation [1].
On December 13, 2024, Apple Intelligence incorrectly summarized a BBC news notification, falsely claiming that Luigi Mangione, a suspect in the murder of UnitedHealthcare CEO Brian Thompson, had shot himself [2]. This erroneous summary was quickly disseminated, causing confusion and concern among users and media professionals alike.
This is not an isolated incident. Apple Intelligence has a history of making significant errors in summarizing notifications. In a previous case, it falsely reported that Israeli Prime Minister Benjamin Netanyahu had been arrested, misinterpreting news about an International Criminal Court arrest warrant [3].
In response to these errors, Reporters Without Borders (RSF) has urged Apple to disable the notification summary feature entirely. Vincent Berthier, Head of RSF's Technology and Journalism Desk, stated, "AIs are probability machines, and facts can't be decided by a roll of the dice" [4].
The underlying issue appears to stem from the large language models (LLMs) used by Apple Intelligence. These models statistically predict the next words based on vast datasets, but they have no inherent understanding of truth or context. This can result in plausible-sounding content that misrepresents facts [5].
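As a toy illustration of that "probability machines" point, the Python sketch below samples a next word from a made-up probability table. The words and numbers are entirely invented, and real LLMs score tens of thousands of tokens with learned weights, but the dice-roll mechanism is the same in spirit: an unlikely continuation still gets picked some fraction of the time.

```python
import random

# Toy next-word model: a lookup table standing in for an LLM's learned
# distribution. All words and probabilities here are invented.
next_word_probs = {
    "mangione": [("arrested", 0.55), ("charged", 0.30), ("shoots", 0.15)],
}

def sample_next(word):
    """Sample the next word from the toy distribution, one token at a time."""
    words, weights = zip(*next_word_probs[word])
    return random.choices(words, weights=weights)[0]

# Roll the dice many times: the false continuation still comes up regularly.
outcomes = [sample_next("mangione") for _ in range(1000)]
print(outcomes.count("shoots"), "of 1000 draws picked the false continuation")
```

Because sampling, not verification, drives each word choice, a fluent summary can assert an event that never happened.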
This controversy highlights the challenges faced by news publishers in the age of AI. While AI tools can enhance efficiency, they also pose risks to the credibility of media outlets and the public's right to accurate information. The incident has reignited discussions about the need for human oversight in journalism and the potential dangers of over-relying on AI for content delivery [2].
The European AI Act, despite being considered advanced legislation in this area, did not classify information-generating AIs as high-risk systems. This oversight has left a critical legal vacuum that needs to be addressed urgently [1]. As the debate intensifies, the incident serves as a cautionary tale about the current limitations of AI in handling sensitive information and the ongoing need for robust safeguards in AI-driven news delivery systems.