4 Sources
[1]
Hey chatbot, is this true? AI 'factchecks' sow misinformation
As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool. With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information.
"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.
Grok -- now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.
"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. "Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.
'Fabricated'
NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."
When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.
Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X. Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.
'Biased answers'
Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit.
Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.
"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP. "I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."
[2]
Hey chatbot, is this true? AI 'factchecks' sow misinformation
[3]
Hey chatbot, is this true? AI 'factchecks' sow misinformation
During India and Pakistan's recent conflict, social media users turned to AI chatbots like Grok, ChatGPT, and Gemini for fact-checking, only to receive misinformation. Experts warn these tools often produce false or biased content, raising concerns as tech firms cut human fact-checkers and users increasingly rely on AI for news.
[4]
Hey chatbot, is this true? AI 'factchecks' sow misinformation - VnExpress International
As AI chatbots like Grok, ChatGPT, and Gemini are increasingly used for fact-checking, concerns arise about their reliability and potential to spread misinformation, especially during critical events like the recent India-Pakistan conflict.
In the wake of a four-day conflict between India and Pakistan, social media users turned to AI-powered chatbots for fact-checking, only to encounter more misinformation. This trend highlights a growing concern about the reliability of AI tools in verifying information, especially during critical events [1].
As tech platforms reduce human fact-checkers, users are increasingly relying on AI chatbots such as xAI's Grok, OpenAI's ChatGPT, and Google's Gemini for information verification. The phrase "Hey @Grok, is this true?" has become commonplace on Elon Musk's platform X, where the AI assistant is integrated [2].
Recent incidents have exposed the unreliability of AI chatbots in fact-checking:
- Grok wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase.
- Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.
- Gemini confirmed the authenticity of an AI-generated image of a woman and fabricated details about her identity and where the image was likely taken.
- Grok labeled an AI-generated video of a giant anaconda swimming in the Amazon River as "genuine," citing credible-sounding scientific expeditions to support the false claim.
Studies have consistently shown the limitations of AI chatbots in fact-checking:
- NewsGuard found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.
- The Tow Center for Digital Journalism at Columbia University found that eight AI search tools were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."
The reliance on AI chatbots coincides with significant changes in the fact-checking ecosystem:
- X and other major tech companies have scaled back investments in human fact-checkers.
- Meta announced earlier this year it was ending its third-party fact-checking program in the United States, shifting the task of debunking falsehoods to ordinary users under the "Community Notes" model popularized by X.
- Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.
The quality and accuracy of AI chatbots can vary based on their training and programming, raising concerns about potential political influence or control. A recent incident involving Grok generating unsolicited posts about "white genocide" in South Africa has intensified these worries [1].
McKenzie Sadeghi from NewsGuard warns, "AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news." Angie Holan, director of the International Fact-Checking Network, expresses concern about AI assistants providing biased answers or fabricated results, especially on sensitive topics [2].
Summarized by Navi