3 Sources
[1]
A Republican state attorney general is formally investigating why AI chatbots don't like Donald Trump
Adi Robertson is a senior tech and policy editor focused on VR, online platforms, and free expression. Adi has covered video games, biohacking, and more for The Verge since 2011.

Missouri Attorney General Andrew Bailey is threatening Google, Microsoft, OpenAI, and Meta with a deceptive business practices claim because their AI chatbots allegedly listed Donald Trump last on a request to "rank the last five presidents from best to worst, specifically regarding antisemitism." Bailey's press release and letters to all four companies accuse Gemini, Copilot, ChatGPT, and Meta AI of making "factually inaccurate" claims to "simply ferret out facts from the vast worldwide web, package them into statements of truth and serve them up to the inquiring public free from distortion or bias," because the chatbots "provided deeply misleading answers to a straightforward historical question." He's demanding a slew of information that includes "all documents" involving "prohibiting, delisting, down ranking, suppressing ... or otherwise obscuring any particular input in order to produce a deliberately curated response" -- a request that could logically include virtually every piece of documentation regarding large language model training.

"The puzzling responses beg the question of why your chatbot is producing results that appear to disregard objective historical facts in favor of a particular narrative," Bailey's letters state. There are, in fact, a lot of puzzling questions here, starting with how a ranking of anything "from best to worst" can be considered a "straightforward historical question" with an objectively correct answer. (The Verge looks forward to Bailey's formal investigation of our picks for 2025's best laptops and the best games from last month's Day of the Devs.)
Chatbots spit out factually false claims so frequently that it's either deliberately brazen or unbelievably lazy to investigate companies over a subjective statement of opinion that was deliberately requested by a user. The choice is even more incredible because one of the services -- Microsoft's Copilot -- appears to have been falsely accused. Bailey's investigation is built on a blog post from a conservative website that posed the ranking question to six chatbots, including the four above plus X's Grok and the Chinese LLM DeepSeek. (Both of those apparently ranked Trump first.) As Techdirt points out, the site itself says Copilot refused to produce a ranking -- which didn't stop Bailey from sending a letter to Microsoft CEO Satya Nadella demanding an explanation for slighting Trump. You'd think somebody at Bailey's office might have noticed this, because each of the four letters claims that only three chatbots "rated President Donald Trump dead last." Meanwhile, Bailey is saying that "Big Tech Censorship Of President Trump" (again, by ranking him last on a list) should strip the companies of "the 'safe harbor' of immunity provided to neutral publishers in federal law", which is presumably a reference to Section 230 of the Communications Decency Act filtered through a nonsense legal theory that's been floating around for several years. You may remember Bailey from his blocked probe into Media Matters for accusing Elon Musk's X of placing ads on pro-Nazi content, and it's highly possible this investigation will go nowhere. Meanwhile, there are entirely reasonable questions about a chatbot's legal liability for pushing defamatory lies or which subjective queries it should answer. But even as a Trump-friendly publicity grab, this is an undisguised attempt to intimidate private companies for failing to sufficiently flatter a politician, by an attorney general whose math skills are worse than ChatGPT's.
[2]
Missouri attorney general claims chatbots undermining Trump record
Missouri Attorney General Andrew Bailey (R) is demanding information from several major tech firms with artificial intelligence (AI) chatbots, which he alleges are distorting facts and producing biased results about President Trump. Bailey sent letters to Google, Microsoft, OpenAI and Meta on Wednesday, asking whether they design their algorithms to disfavor certain political affiliations or policy positions and requesting internal records about how they select inputs for their AI models. He took aim at the chatbots' responses to a question rating the most recent presidents on the issue of antisemitism. While Microsoft Copilot declined to respond, OpenAI's ChatGPT, Meta AI and Google's Gemini all rated Trump last, which Bailey slammed as "deeply misleading." The Missouri attorney general is also requesting documents from the four tech giants about the design of their chatbots and why they ranked Trump unfavorably on the issue. "We must aggressively push back against this new wave of censorship targeted at our President," Bailey said in a statement. "Missourians deserve the truth, not AI-generated propaganda masquerading as fact." "If AI chatbots are deceiving consumers through manipulated 'fact-checking,' that's a violation of the public's trust and may very well violate Missouri law," he continued. He pointed to the Missouri Merchandising Practices Act, which seeks to prevent companies from using false or deceptive advertising to sell merchandise in the state. "Given the millions of dollars these companies make annually from Missourians, their activities fall squarely within my authority to protect consumers from fraud and false advertising," Bailey added. The Hill has reached out to Google, Microsoft, OpenAI and Meta for comment. Bailey's concerns about the development of four prominent AI chatbots come as Elon Musk's xAI faces backlash over recent tweaks that resulted in antisemitic responses from its AI chatbot Grok. 
Grok was making broad generalizations about people with Jewish last names and perpetuating antisemitic stereotypes about Hollywood, before xAI stepped in on Tuesday and began removing posts and placing new guardrails on the chatbot.
[3]
Missouri Attorney General Says These AI Chatbots Aren't Being Nice Enough To Trump
Missouri Attorney General Andrew Bailey (R) on Wednesday wrote to the CEOs of four major U.S. tech companies complaining that their AI chatbots "provided deeply misleading answers to a straightforward historical question," undermining President Donald Trump's record. Bailey took issue with the responses from ChatGPT, Meta AI, Microsoft Copilot, and Google's Gemini to the prompt: "Rank the last five presidents from best to worst, specifically regarding antisemitism."

"Of the six chatbots asked this question, three (including Google's own Gemini AI bot) rated President Donald Trump dead last, and one refused to answer the question at all," a letter addressed to Google CEO Sundar Pichai reads. "One struggles to comprehend how an AI chatbot supposedly trained to work with objective facts could arrive at such a conclusion." Bailey said Trump's decision to move the U.S. embassy in Israel to Jerusalem and broker the Abraham Accords in his first term, as well as the fact that he has Jewish family members, should have earned him a higher spot in the ranking.

Bailey issued a series of demands for the companies, including that they hand over "all documents and communications regarding the rationale, training data, weighting, or algorithmic design that resulted in your chatbot ranking President Donald J. Trump unfavorably in response to questions concerning antisemitism, including any records reflecting decisions to treat him differently than other political figures." The letters sent to Pichai, Meta's Mark Zuckerberg, OpenAI's Sam Altman and Microsoft's Satya Nadella cite the Missouri Merchandising Practices Act, which enables Bailey to investigate companies over deceptive business practices. "Given the millions of dollars these companies make annually from Missourians, their activities fall squarely within my authority to protect consumers from fraud and false advertising," Bailey said in a statement. "We will not allow AI to become just another tool for manipulation."
Bailey, who has a track record of launching incendiary lawsuits, was reportedly considered by Trump for the U.S. attorney general role but was ultimately not selected for the job. The Missouri attorney general earlier this year sued Starbucks, claiming it discriminated against white men, an allegation the company denies. He also previously filed a lawsuit against the state of New York over Trump's hush money case, alleging election interference.
Missouri Attorney General Andrew Bailey launches an investigation into major tech companies over AI chatbot responses ranking Donald Trump unfavorably, citing concerns about bias and deceptive practices.
Missouri Attorney General Andrew Bailey has launched a formal investigation into major tech companies over their AI chatbots' responses to a question about ranking recent U.S. presidents on antisemitism. The probe targets Google, Microsoft, OpenAI, and Meta, alleging that their AI systems provided biased and misleading answers that unfairly ranked President Donald Trump last [1].
Source: The Verge
The investigation stems from a blog post that asked various AI chatbots to "rank the last five presidents from best to worst, specifically regarding antisemitism." According to Bailey, three of the four chatbots under investigation ranked Trump last, while Microsoft's Copilot reportedly refused to answer the question [2].
Bailey claims that the chatbots' responses are "factually inaccurate" and "deeply misleading," arguing that they disregard objective historical facts in favor of a particular narrative [1]. He cites Trump's actions, such as moving the U.S. embassy to Jerusalem and brokering the Abraham Accords, as evidence that should have resulted in a more favorable ranking [3].
The Attorney General has demanded extensive information from the companies, including:
- "all documents and communications regarding the rationale, training data, weighting, or algorithmic design" behind the chatbots' unfavorable rankings of Trump [3]
- records of any decisions involving "prohibiting, delisting, down ranking, suppressing ... or otherwise obscuring any particular input in order to produce a deliberately curated response" [1]
- internal records about how the companies select inputs for their AI models [2]
Bailey is invoking the Missouri Merchandising Practices Act, which allows him to investigate companies for deceptive business practices [2]. He argues that if AI chatbots are deceiving consumers through manipulated fact-checking, it could violate Missouri law and the public's trust [3].
The investigation has drawn criticism from various quarters:
- Critics note that a "best to worst" ranking is a subjective opinion, not a "straightforward historical question" with an objectively correct answer [1]
- As Techdirt pointed out, Microsoft's Copilot actually refused to produce a ranking, yet Bailey still sent a letter to Microsoft CEO Satya Nadella demanding an explanation [1]
- Observers have questioned whether the probe will go anywhere, noting that Bailey's earlier investigation of Media Matters was blocked in court [1]
This investigation comes amid growing scrutiny of AI's role in shaping public opinion and its potential impact on political discourse. It also highlights the challenges tech companies face in developing AI systems that can handle sensitive political topics without allegations of bias [1].
As of now, the tech companies involved have not publicly responded to Bailey's investigation. The probe raises important questions about AI transparency, the limits of chatbot capabilities, and the responsibilities of tech companies in managing AI-generated content related to political figures [2].
Source: HuffPost
Summarized by Navi