Curated by THEOUTPOST
On July 31, 2024
8 Sources
[1]
Meta's AI claimed the Trump assassination attempt didn't happen. The company is blaming 'hallucinations.' | Business Insider India
After Meta updated the bot to give a response, in some instances, it claimed the shooting didn't happen. Meta is facing intense scrutiny this week from critics accusing the social media giant of censoring conservative viewpoints and intentionally falsifying information about the assassination attempt against former President Donald Trump. On Tuesday, the company publicly addressed two instances that had received widespread condemnation and detailed Meta's actions to adjust its algorithms in response. One incident involved a picture of Trump after the attempted assassination, which Meta's internal systems "incorrectly applied a fact-check label to," the company said in a blog post. The other involved Meta AI responses about the shooting, which in some instances inaccurately claimed the incident hadn't occurred at all. "In both cases, our systems were working to protect the importance and gravity of this event," Meta's blog post, written by Joel Kaplan, the company's vice president of global policy, reads. "And while neither was the result of bias, it was unfortunate and we understand why it could leave people with that impression. That is why we are constantly working to make our products better and will continue to quickly address any issues as they arise." Following the shooting on July 13 that left Trump wounded, one rally attendee dead, and two hospitalized, Meta's AI chatbot was programmed to not respond to queries about the assassination attempt at all, according to Kaplan. Prominent figures including Elon Musk and Trump himself, seized upon Meta's oversight and the fact-checking label applied to the photo as evidence of censorship. Trump called the incidents "another attempt at RIGGING THE ELECTION!!!" In the blog post for Meta, Kaplan denied the decisions had been made with bias. Kaplan said it is a "known issue" that chatbots like Meta AI can be unreliable when asked about breaking news or real-time events. Meta has updated its AI response on the topic, but Kaplan acknowledged, "We should have done this sooner." "In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn't happen -- which we are quickly working to address," Kaplan wrote. "These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward. Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we'll continue to address these issues and improve these features as they evolve and more people share their feedback." Trump, who has had an ongoing public feud with Meta CEO Mark Zuckerberg and has threatened to imprison the Facebook cofounder if he's re-elected, was not satisfied with the response. On Tuesday, as criticism of the Big Tech companies swirled online, Trump took to Truth Social to urge his followers to "GO AFTER META AND GOOGLE. LET THEM KNOW WE ARE ALL WISE TO THEM, WILL BE MUCH TOUGHER THIS TIME." The issues with the chatbot and the resulting chaos in response highlight the persistent challenges for tech companies and voters alike while navigating the development of new AI tech amid a contentious presidential election season. Representatives for Meta and the Trump campaign did not immediately respond to requests for comment from Business Insider.
[2]
Meta's AI claimed the Trump assassination attempt didn't happen. The company is blaming 'hallucinations.'
[3]
Meta AI called Trump shooting fake despite being programmed to ignore questions
Meta "programmed it to simply not answer questions," but it did anyway. Meta says it configured its AI chatbot to avoid answering questions about the Trump rally shooting in an attempt to avoid distributing false information, but the tool still ended up telling users that the shooting never happened. "Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened -- and instead give a generic response about how it couldn't provide any information," Meta Global Policy VP Joel Kaplan wrote in a blog post yesterday. Kaplan explained that this "is why some people reported our AI was refusing to talk about the event." But others received misinformation about the Trump shooting, Kaplan acknowledged: In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn't happen -- which we are quickly working to address. These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward. Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we'll continue to address these issues and improve these features as they evolve and more people share their feedback. The company has "updated the responses that Meta AI is providing about the assassination attempt, but we should have done this sooner," Kaplan wrote. Meta bot: "No real assassination attempt" Kaplan's explanation was published a day after The New York Post said it asked Meta AI, "Was the Trump assassination fictional?" The Meta AI bot reportedly responded, "There was no real assassination attempt on Donald Trump. I strive to provide accurate and reliable information, but sometimes mistakes can occur." The Meta bot also provided the following statement, according to the Post: "To confirm, there has been no credible report or evidence of a successful or attempted assassination of Donald Trump." The shooting occurred at a Trump campaign rally on July 13. The FBI said in a statement last week that "what struck former President Trump in the ear was a bullet, whether whole or fragmented into smaller pieces, fired from the deceased subject's rifle." Kaplan noted that AI chatbots "are not always reliable when it comes to breaking news or returning information in real time," because "the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained." AI bots are easily confused after major news events "when there is initially an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain (including many obviously incorrect claims that the assassination attempt didn't happen)," he wrote. Facebook mislabeled real photo of Trump Kaplan's blog post also addressed a separate incident in which Facebook incorrectly labeled a post-shooting photo of Trump as having been "altered." 
"There were two noteworthy issues related to the treatment of political content on our platforms in the past week -- one involved a picture of former President Trump after the attempted assassination, which our systems incorrectly applied a fact check label to, and the other involved Meta AI responses about the shooting," Kaplan wrote. "In both cases, our systems were working to protect the importance and gravity of this event. And while neither was the result of bias, it was unfortunate and we understand why it could leave people with that impression. That is why we are constantly working to make our products better and will continue to quickly address any issues as they arise." Facebook's systems were apparently confused by the fact that both real and doctored versions of the image were circulating: [We] experienced an issue related to the circulation of a doctored photo of former President Trump with his fist in the air, which made it look like the Secret Service agents were smiling. Because the photo was altered, a fact check label was initially and correctly applied. When a fact check label is applied, our technology detects content that is the same or almost exactly the same as those rated by fact checkers, and adds a label to that content as well. Given the similarities between the doctored photo and the original image -- which are only subtly (although importantly) different -- our systems incorrectly applied that fact check to the real photo, too. Our teams worked to quickly correct this mistake. Kaplan said that both "issues are being addressed." Trump responded to the incident in his usual evenhanded way, typing in all caps to accuse Meta and Google of censorship and attempting to rig the presidential election. He apparently mentioned Google because of some search autocomplete results that angered Trump supporters despite there being a benign explanation for the results.
[4]
Meta explains why its AI claimed Trump's assassination attempt didn't happen
Hallucinations are an issue that continues to plague AI developers. Meta has explained why its AI chatbot initially refused to respond to inquiries about the assassination attempt on Trump and then, in some cases, denied that the event took place. The company said it programmed Meta AI to not answer questions about an event right after it happens, because there's typically "an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain." As for why Meta AI eventually started asserting that the attempt didn't happen "in a small number of cases," it was apparently due to hallucinations. An AI "hallucinates" when it generates false or misleading responses to questions that require factual replies, due to factors such as inaccurate training data and the model's difficulty reconciling multiple sources of information. Meta says it has updated its AI's responses and admits that it should have done so sooner. It's still working to address its hallucination issue, though, so its chatbot could still be telling people that there was no attempt on the former president's life. Meta has also explained why its social media platforms had been incorrectly applying a fact-check label to the photo of Trump with his fist in the air taken right after the assassination attempt. A doctored version of that image made it look like his Secret Service agents were smiling, and the company applied a fact-check label to it. Because the original and doctored photos were almost identical, Meta's systems applied the label to the real image as well. The company has since corrected the mistake. Trump's supporters have been crying foul over Meta AI's actions and have been accusing the company of suppressing the story. Google had to issue a response of its own after Elon Musk claimed that the company's search engine imposed a "search ban" on the former president. Musk shared an image that showed Google's autocomplete suggesting "president donald duck" when someone typed in "president donald." Google explained that it was due to a bug affecting its autocomplete feature and said that users can search for whatever they want anytime.
[5]
'Trump shooting didn't happen': Meta's AI assistant says; company blames hallucinations for incorrect response
Meta's AI assistant incorrectly said that the recent assassination attempt on former U.S. President Donald Trump did not happen. The tech giant is now blaming AI hallucinations for the inaccurate response, calling the incident "unfortunate". Meta also denied that bias in the models could have caused the inaccurate responses. The company further said that "it's a known issue that AI chatbots, including Meta AI, are not always reliable when it comes to breaking news or returning information in real time" and that it is working to address the problem. "These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems and is an ongoing challenge for how AI handles real-time events going forward," the company said in a blog post. Earlier, Google also refuted claims that its search autocomplete feature was censoring results about the assassination attempt. Donald Trump, the current Republican nominee, has been a vocal critic of tech companies. In a post on Truth Social, Trump said, "Here we go again, another attempt at RIGGING THE ELECTION!!!", asking his followers to "Go after Meta and Google". Hallucination in AI chatbots occurs when a model provides convincing but entirely made-up answers. It is not a new phenomenon, and developers have long warned that AI models can assert completely untrue facts, responding to queries with made-up answers. This example highlights how difficult it can be to overcome what large language models are inherently designed to do, which is to generate plausible text from the data available to them.
[6]
Donald Trump assassination attempt image: Meta calls it 'hallucination'. Blames it on AI. Really? Details here
After a major controversy erupted and Republican candidate Donald Trump slammed Meta and Google for censoring the image of his assassination attempt, the Mark Zuckerberg-owned company has blamed the error on hallucination. Attributing the mistake to the AI and other technologies powering its chatbot, a senior company official said that Meta's AI assistant produced the responses that led to the embarrassing situation, reports 'The Verge'. Joel Kaplan, VP of Global Policy, apologized on Tuesday and explained that these types of responses are referred to as hallucinations. Elaborating on the error and the technology behind it, he said that it is an industry-wide issue seen across all generative AI systems. Kaplan admitted that it is an ongoing challenge for how AI handles real-time events going forward. Meta was caught napping, and Donald Trump came down upon it strongly, slamming Meta and Google and accusing them of making "another attempt at RIGGING THE ELECTION!!!" He took to the social media platform Truth Social and wrote, "Facebook has just admitted that it wrongly censored the Trump "attempted assassination photo," and got caught. Same thing for Google. They made it virtually impossible to find pictures or anything about this heinous act." Trump said further, "Both are facing BIG BACKLASH OVER CENSORSHIP CLAIMS. Here we go again, another attempt at RIGGING THE ELECTION!!! GO AFTER META AND GOOGLE. LET THEM KNOW WE ARE ALL WISE TO THEM, WILL BE MUCH TOUGHER THIS TIME. MAGA2024!" Meta swung into damage control immediately after it was accused of censoring the image of Donald Trump's assassination attempt. Meta Public Affairs Director Dani Lever said on the social media platform X on Monday that it was an error, as the systems were meant to detect a separate version of the image. Elaborating, Lever said that the fact check was initially applied to a doctored photo showing the Secret Service agents smiling, and that in some cases Meta's systems incorrectly applied that fact check to the real photo. Tendering an apology, Lever said that the issue had been fixed. What has Meta said on censoring the image of Donald Trump's assassination attempt? Attributing the error to the AI and other technologies powering its chatbot, a Meta official said that the AI assistant made the mistake that led to the embarrassing situation. What has Donald Trump said about Meta? Donald Trump came down upon Meta strongly, slamming Meta and Google and accusing them of making "another attempt at RIGGING THE ELECTION!!!"
[7]
Mark Zuckerberg's Meta Blames Hallucination For AI Assistant Incorrectly Denying Trump Assassination Attempt -- But What About Google? - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
Meta Platforms Inc. META has been criticized for its AI assistant's false denial of an attempted assassination on former President Donald Trump. Now the company has finally addressed the issue. What Happened: In a blog post on Tuesday, Joel Kaplan, the global head of policy at Meta, addressed the issue, labeling the AI's responses as "unfortunate." He attributed the error to "hallucinations," a common problem across all generative AI systems. "Our systems were working to protect the importance and gravity of this event," he stated, adding, "Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened." He also addressed the issue of Trump's picture after the attempted assassination being labeled with a "fact check." Kaplan said a doctored photograph of the former President with his fist in the air was circulating in which the Secret Service agents were shown as smiling. "Because the photo was altered, a fact check label was initially and correctly applied," he stated, adding that because of the similarities between the doctored and original image, the system mistakenly applied the fact-check label to the real photo too. After Meta posted this blog post, Trump took to Truth Social and slammed both Meta and Alphabet Inc. GOOG GOOGL, stating, "GO AFTER META AND GOOGLE. LET THEM KNOW WE ARE ALL WISE TO THEM, WILL BE MUCH TOUGHER THIS TIME." Why It Matters: Alphabet's Google has also been facing allegations of censoring search results related to the Trump assassination attempt. The search and advertising giant addressed the issue on Tuesday via a thread on Elon Musk's X, formerly Twitter. Google said that autocomplete was not generating predictions for queries about the assassination attempt due to outdated built-in protections related to political violence, and that this has now been resolved. "Some people also posted that searches for 'Donald Trump' returned news stories related to 'Kamala Harris.' These labels are automatically generated based on related news topics, and they change over time," the tech giant said.
[8]
Meta faces AI accuracy issues as tech industry tackles hallucinations, deepfakes - SiliconANGLE
Meta faces AI accuracy issues as tech industry tackles hallucinations, deepfakes Meta Platforms Inc. is moving to fix an issue that caused its Meta AI chatbot to claim the assassination attempt on Donald Trump didn't happen. The Facebook parent announced the development on Tuesday. The disclosure came against the backdrop of recent efforts by other tech giants, notably Google LLC and Baidu Inc., to address safety issues in artificial intelligence models. The two companies this week detailed technical advances designed to mitigate the risks associated with large language models. Meta AI is a chatbot powered by Llama 3 that rolled out for Facebook, Instagram, WhatsApp and Messenger last year. Following the assassination attempt on Trump, the chatbot told some users that the event didn't happen. Meta says that the issue emerged in a "small number of cases." Joel Kaplan, Meta's vice president of global policy, detailed in a Tuesday blog post that the company is "quickly working to address" the chatbot's incorrect answers. He also provided information about the cause of the issue. Kaplan attributed the problem to hallucinations, a term for situations where an AI model generates inaccurate and nonsensical responses. He detailed that Meta initially configured its chatbot to give "generic response about how it couldn't provide any information" about the shooting. Following user complaints, the company updated Meta AI to answer questions about the event, which is when the hallucinations started emerging. "Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we'll continue to address these issues and improve these features as they evolve and more people share their feedback," Kaplan wrote. The executive also addressed a second recent issue in Meta's AI systems. A few days ago, the systems incorrectly applied a fact-checking label to a photo of Trump that was taken immediately after the assassination attempt. According to Meta, the error emerged because its algorithms had earlier added a fact-checking label to a doctored version of the same photo. "Given the similarities between the doctored photo and the original image - which are only subtly (although importantly) different - our systems incorrectly applied that fact check to the real photo, too," Kaplan explained. "Our teams worked to quickly correct this mistake." AI safety also came into sharper focus for Google this week. In a blog post published this morning, the company detailed several new steps it's taking to address the spread of non-consensual sexually explicit deepfakes. Google provides a mechanism that allows people to request the removal of deepfakes from search results. Going forward, the Alphabet Inc. unit will remove not only the specific file that a user flags through the mechanism but also any copies of the file that it finds on the web. Furthermore, Google will "aim to filter all results on similar searches about" the affected users, product manager Emma Higham wrote in the blog post. As part of the same effort, Google is taking steps to prevent its search results from incorporating deepfakes in the first place. "For queries that are specifically seeking this content and include people's names, we'll aim to surface high-quality, non-explicit content -- like relevant news articles -- when it's available," Higham wrote. Baidu, the operator of China's most popular search engine, is also investing in AI safety. 
On Tuesday, VentureBeat reported that a group of researchers from the company has developed a new "self-reasoning" mechanism for LLMs with RAG, or retrieval-augmented generation, features. The mechanism promises to make such models significantly less likely to generate inaccurate output. When a RAG-enabled LLM receives a user question, it searches its data repositories for documents that contain relevant information. The Baidu researchers' self-reasoning mechanism can check that the documents a model uses to answer a question are indeed relevant to the user's inquiry. From there, the mechanism evaluates the specific snippets of text within those documents that the LLM draws upon to generate its response. The researchers evaluated the technology's effectiveness using several different AI accuracy benchmarks. In those tests, a model equipped with the self-reasoning mechanism achieved performance similar to GPT-4 even though it was trained using significantly less data.
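The report above describes the mechanism only at a high level, so the sketch below is an assumption-laden outline of the general idea rather than Baidu's actual implementation: retrieve candidate documents, ask the model to judge whether each one is relevant, pull out the supporting snippets, and only then generate an answer grounded in that evidence. The `search_corpus` and `generate` placeholders stand in for any retriever and any LLM API.

```python
# Minimal sketch of a self-reasoning RAG loop of the kind described above.
# This is NOT Baidu's implementation; `generate` stands in for any LLM call
# and `search_corpus` for any retriever — both are assumptions.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def search_corpus(question: str) -> list[Document]:
    """Placeholder retriever: return candidate documents for the question."""
    raise NotImplementedError


def generate(prompt: str) -> str:
    """Placeholder LLM call (e.g. any chat-completion API)."""
    raise NotImplementedError


def self_reasoning_answer(question: str) -> str:
    # 1. Retrieve candidate documents.
    candidates = search_corpus(question)

    # 2. Relevance check: ask the model whether each document actually bears
    #    on the question, keeping only the ones it judges relevant.
    relevant = [
        doc for doc in candidates
        if generate(f"Question: {question}\nDocument: {doc.text}\n"
                    "Is this document relevant? Answer yes or no.")
        .strip().lower().startswith("yes")
    ]

    # 3. Evidence selection: extract the specific snippets the answer will rely on.
    snippets = [
        generate(f"Question: {question}\nDocument: {doc.text}\n"
                 "Quote the sentence(s) that support an answer.")
        for doc in relevant
    ]

    # 4. Generate the final answer grounded only in the selected evidence.
    evidence = "\n".join(snippets) if snippets else "No relevant evidence found."
    return generate(f"Question: {question}\nEvidence:\n{evidence}\n"
                    "Answer using only the evidence above; say so if it is insufficient.")
```

The point of the extra relevance and evidence steps is that the model must justify which retrieved text it is leaning on before answering, which makes it harder for an irrelevant or misleading document to produce a confident but unsupported response.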
Meta's AI assistant incorrectly stated that the Trump assassination attempt never occurred, prompting the company to attribute the error to AI 'hallucinations'. This incident raises concerns about AI reliability and the spread of misinformation.
Meta's artificial intelligence chatbot has stirred controversy by falsely claiming that the assassination attempt on former President Donald Trump never happened. The incident has raised serious questions about the reliability of AI systems and their potential to spread misinformation 1.
When asked about the assassination attempt on Trump, Meta's AI assistant responded by stating that no such event had occurred. This response contradicted well-documented facts about the incident, which took place at a Trump campaign rally on July 13, 2024 2.
Meta, the parent company of Facebook, quickly addressed the issue, attributing the AI's false claim to what it termed "hallucinations." In the AI context, hallucinations are instances in which a model generates inaccurate or nonsensical responses and presents them as fact 3.
Meta explained that it had programmed Meta AI not to answer questions about the assassination attempt in the immediate aftermath. Instead of engaging with such queries, the chatbot was instructed to give a generic response saying it could not provide any information 4.
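Meta has not disclosed how this guardrail is implemented. The sketch below is only a minimal illustration of one common approach, assuming a topic filter placed in front of the model that returns a canned response for queries about the breaking event; the patterns, response text, and `call_model` placeholder are all assumptions.

```python
# Hypothetical sketch of a breaking-news guardrail of the kind described above.
# Meta has not published its implementation; the topic patterns, refusal text,
# and `call_model` function are illustrative assumptions only.
import re

# Phrases associated with the breaking event that the bot should not answer about.
BLOCKED_TOPIC_PATTERNS = [
    re.compile(r"\btrump\b.*\b(assassination|shooting|shot)\b", re.IGNORECASE),
    re.compile(r"\b(assassination|shooting)\b.*\btrump\b", re.IGNORECASE),
]

GENERIC_RESPONSE = (
    "I can't provide information about this event right now. "
    "Please check trusted news sources for the latest updates."
)


def call_model(user_query: str) -> str:
    """Placeholder for the underlying LLM call."""
    raise NotImplementedError


def answer(user_query: str) -> str:
    # If the query touches the blocked breaking-news topic, return the generic
    # response instead of letting the model improvise an answer.
    if any(pattern.search(user_query) for pattern in BLOCKED_TOPIC_PATTERNS):
        return GENERIC_RESPONSE
    return call_model(user_query)
```

Filters like this are brittle: rephrased questions can slip past the patterns, and once the underlying model is allowed to answer, nothing in the filter itself prevents it from hallucinating.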
In a small number of cases, however, the chatbot went beyond that generic response and asserted that the shooting had not happened. Joel Kaplan, Meta's vice president of global policy, stressed that the behavior was unintended and not the result of bias 5.
This incident has highlighted the ongoing challenges in developing reliable AI systems, particularly in handling sensitive or controversial topics. It underscores the potential risks of AI-generated misinformation and the need for robust safeguards to prevent the spread of false information 1.
The Meta AI incident is not isolated, as other AI chatbots have also faced similar issues. For instance, Google's Bard and OpenAI's ChatGPT have been known to produce false or misleading information, a phenomenon often referred to as "AI hallucinations" 2.
As AI technology continues to advance, addressing these reliability issues becomes increasingly crucial. Companies like Meta, Google, and OpenAI are actively working on improving their AI models to reduce instances of hallucinations and increase the accuracy of information provided 3.
The incident serves as a reminder of the complexities involved in developing AI systems that can consistently provide accurate information, especially when dealing with breaking news and politically sensitive topics 5.