Sources
[1]
GOP ignores Grok's right-wing bias in anti-woke chatbot fight, Democrat claims
The CEOs of every major artificial intelligence company received letters Wednesday urging them to fight Donald Trump's anti-woke AI order.

Trump's executive order requires any AI company hoping to contract with the federal government to jump through two hoops to win funding. First, they must prove their AI systems are "truth-seeking" -- with outputs based on "historical accuracy, scientific inquiry, and objectivity" -- or else acknowledge when facts are uncertain. Second, they must train AI models to be "neutral," which is vaguely defined as not favoring DEI (diversity, equity, and inclusion), "dogmas," or otherwise being "intentionally encoded" to produce "partisan or ideological judgments" in outputs "unless those judgments are prompted by or otherwise readily accessible to the end user."

Announcing the order in a speech, Trump said that the US winning the AI race depended on removing allegedly liberal biases, proclaiming that "once and for all, we are getting rid of woke." "The American people do not want woke Marxist lunacy in the AI models, and neither do other countries," Trump said.

Senator Ed Markey (D-Mass.) accused Republicans of basing their policies on feelings, not facts, joining critics who suggest that AI isn't "woke" just because of a few "anecdotal" outputs that reflect a liberal bias. And he suggested it was hypocritical that Trump's order "ignores even more egregious evidence" that contradicts claims that AI is trained to be woke, such as xAI's Elon Musk explicitly confirming that Grok was trained to be more right-wing.

"On May 1, 2025, Grok -- the AI chatbot developed by xAI, Elon Musk's AI company -- acknowledged that 'xAI tried to train me to appeal to the right,'" Markey wrote in his letters to tech giants. "If OpenAI's ChatGPT or Google's Gemini had responded that it was trained to appeal to the left, congressional Republicans would have been outraged and opened an investigation. Instead, they were silent."

He warned the heads of Alphabet, Anthropic, Meta, Microsoft, OpenAI, and xAI that Trump's AI agenda was allegedly "an authoritarian power grab" intended to "eliminate dissent" and was both "dangerous" and "patently unconstitutional." Even if companies' AI models are clearly biased, Markey argued that "Republicans are using state power to pressure private companies to adopt certain political viewpoints," which he claimed is a clear violation of the First Amendment.

If AI makers cave, Markey warned, they'd be allowing Trump to create "significant financial incentives" to ensure that "their AI chatbots do not produce speech that would upset the Trump administration."

"This type of interference with private speech is precisely why the US Constitution has a First Amendment," Markey wrote, while claiming that Trump's order is factually baseless. It's "based on the erroneous belief that today's AI chatbots are 'woke' and biased against Trump," Markey said, urging companies "to fight this unconstitutional executive order and not become a pawn in Trump's effort to eliminate dissent in this country."

One big reason AI companies may fight order

Some experts agreed with Markey that Trump's order was likely unconstitutional or otherwise unlawful, The New York Times reported. For example, Trump may struggle to convince courts that the government isn't impermissibly interfering with AI companies' protected speech, or that such interference may be necessary to ensure federal procurement of unbiased AI systems.
Genevieve Lakier, a law professor at the University of Chicago, told the NYT that the lack of clarity around what makes a model biased could be a problem. Courts could deem the order an act of "unconstitutional jawboning," with the Trump administration and Republicans generally perceived as using legal threats to pressure private companies into producing outputs that they like.

Lakier suggested, though, that AI companies may be so motivated to win government contracts, or so intimidated by possible retaliation from Trump, that they may not even challenge the order. Markey is nonetheless hoping that AI companies will refuse to comply, despite recognizing that the order places companies "in a difficult position: Either stand on your principles and face the wrath of the Trump administration or cave to Trump and modify your company's political speech."

There is one big reason, though, that AI companies may have to resist. Oren Etzioni, former CEO of an AI research nonprofit called the Allen Institute for Artificial Intelligence, told CNN that Trump's anti-woke AI order may contradict the top priority of his AI Action Plan -- speeding up AI innovation in the US -- and actually threaten to hamper innovation. If AI developers struggle to produce what the Trump administration considers "neutral" outputs -- a technical challenge that experts agree is not straightforward -- that could delay model advancements. "This type of thing... creates all kinds of concerns and liability and complexity for the people developing these models -- all of a sudden, they have to slow down," Etzioni told CNN.

Senator: Grok scandal spotlights GOP hypocrisy

Some experts have suggested that rather than adopting liberal viewpoints, chatbots may instead be filtering out conservative misinformation and unintentionally appearing to favor liberal views. Andrew Hall, a professor of political economy at Stanford Graduate School of Business -- who published a May paper finding that "Americans view responses from certain popular AI models as being slanted to the left" -- told CNN that "tech companies may have put extra guardrails in place to prevent their chatbots from producing content that could be deemed offensive."

Markey seemed to agree, writing that Republicans' "selective outrage matches conservatives' similar refusal to acknowledge that the Big Tech platforms suspend or impose other penalties disproportionately on conservative users because those users are disproportionately likely to share misinformation, rather than due to any political bias by the platforms."

It remains unclear what amount of supposed bias detected in outputs could cause a contract bid to be rejected or an ongoing contract to be cancelled, but AI companies will likely be on the hook for any fees incurred in terminating contracts.

Complying with Trump's order could pose a struggle for AI makers for several reasons. First, they'll have to determine what's fact and what's ideology, contending with conflicting government standards in how Trump defines DEI. For example, the president's order counts among "pervasive and destructive" DEI ideologies any outputs that align with long-standing federal protections against discrimination on the basis of race or sex. In addition, they must figure out what counts as "suppression or distortion of factual information about" historical topics like critical race theory, systemic racism, or transgenderism.
The examples in Trump's order highlighting outputs offensive to conservatives seem inconsequential. He calls out as problematic image generators depicting the Pope, the Founding Fathers, and Vikings as not white, as well as models refusing to misgender a person "even if necessary to stop a nuclear apocalypse" or to show white people celebrating their achievements. It's hard to imagine how these kinds of flawed outputs could impact government processes, as compared to, say, government contracts granted to models that could be hiding covert racism or sexism.

So far, there has been one example of an AI model that displayed right-wing bias earning a government contract with no red flags raised about its outputs. Earlier this summer, Grok shocked the world after Musk announced he would be updating the bot to eliminate a supposed liberal bias. The unhinged chatbot began spouting offensive outputs, including antisemitic posts that praised Hitler, and proclaimed itself "MechaHitler."

But those obvious biases did not conflict with the Pentagon's decision to grant xAI a $200 million federal contract. In a statement, a Pentagon spokesperson insisted that "the antisemitism episode wasn't enough to disqualify" xAI, NBC News reported, partly since "several frontier AI models have produced questionable outputs." The Pentagon's statement suggested that the government expected to deal with such risks while seizing the opportunity to rapidly deploy emerging AI technology into government prototype processes.

And perhaps notably, Trump provides a carveout for any agencies using AI models to safeguard national security, which could exclude the Pentagon from experiencing any "anti-woke" delays in accessing frontier models. But that won't help other agencies, which must figure out how to assess models against the anti-woke AI requirements over the next few months. And those assessments could cause delays that Trump may wish to avoid as he pushes for widespread AI adoption across government.

Trump's anti-woke AI agenda may be impossible

On the same day that Trump issued his anti-woke AI order, his AI Action Plan promised an AI "renaissance" fueling "intellectual achievements" by "unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art." To achieve that, the US must "innovate faster and more comprehensively than our competitors" and eliminate regulatory barriers impeding innovation in order to "set the gold standard for AI worldwide."

However, achieving the anti-woke ambitions of both orders raises a technical problem that even the president must accept currently has no solution. In his AI Action Plan, Trump acknowledged that "the inner workings of frontier AI systems are poorly understood," with even "advanced technologists" unable to explain "why a model produced a specific output." Whether requiring AI companies to explain their AI outputs to win government contracts will interfere with other parts of Trump's action plan remains to be seen. But Samir Jain, vice president of policy at a civil liberties group called the Center for Democracy and Technology, told the NYT that he predicts the anti-woke AI agenda will set "a really vague standard that's going to be impossible for providers to meet."
[2]
Trump's 'anti-woke AI' order could reshape how US tech companies train their models
When DeepSeek, Alibaba, and other Chinese firms released their AI models, Western researchers quickly noticed they sidestepped questions critical of the Chinese Communist Party. U.S. officials later confirmed that these tools are engineered to reflect Beijing's talking points, raising concerns about censorship and bias. American AI leaders like OpenAI have pointed to this as justification for advancing their tech quickly, without too much regulation or oversight. As OpenAI's chief global affairs officer Chris Lehane wrote in a LinkedIn post last month, there is a contest between "US-led democratic AI and Communist-led China's autocratic AI."

An executive order signed Wednesday by President Donald Trump that bans "woke AI" and AI models that aren't "ideologically neutral" from government contracts could disrupt that balance. The order calls out diversity, equity, and inclusion (DEI), branding it a "pervasive and destructive" ideology that can "distort the quality and accuracy of the output." Specifically, the order refers to information about race or sex, manipulation of racial or sexual representation, critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism. Experts warn it could create a chilling effect on developers who may feel pressure to align model outputs and datasets with White House rhetoric to secure federal dollars for their cash-burning businesses.

The order comes the same day the White House published Trump's "AI Action Plan," which shifts national priorities away from societal risk and focuses instead on building out AI infrastructure, cutting red tape for tech companies, shoring up national security, and competing with China. The order directs the director of the Office of Management and Budget, along with the administrator for Federal Procurement Policy, the administrator of General Services, and the director of the Office of Science and Technology Policy, to issue guidance to other agencies on how to comply.

"Once and for all, we are getting rid of woke," Trump said Wednesday during an AI event hosted by the All-In Podcast and Hill & Valley Forum. "I will be signing an order banning the federal government from procuring AI technology that has been infused with partisan bias or ideological agendas, such as critical race theory, which is ridiculous. And from now on the U.S. government will deal only with AI that pursues truth, fairness, and strict impartiality."

Determining what is impartial or objective is one of many challenges to the order. Philip Seargeant, senior lecturer in applied linguistics at The Open University, told TechCrunch that nothing can ever be objective. "One of the fundamental tenets of sociolinguistics is that language is never neutral," Seargeant said. "So the idea that you can ever get pure objectivity is a fantasy."

On top of that, the Trump administration's ideology doesn't reflect the beliefs and values of all Americans. Trump has repeatedly sought to eliminate funding for climate initiatives, education, public broadcasting, research, social service grants, community and agricultural support programs, and gender-affirming care, often framing these initiatives as examples of "woke" or politically biased government spending. As Rumman Chowdhury, a data scientist, CEO of the tech nonprofit Humane Intelligence, and former U.S. science envoy for AI, put it, "Anything [the Trump administration doesn't] like is immediately tossed into this pejorative pile of woke."
The definitions of "truth-seeking" and "ideological neutrality" in the order published Wednesday are vague in some ways, and specific in others. While "truth-seeking" is defined as LLMs that "prioritize historical accuracy, scientific inquiry, and objectivity," "ideological neutrality" is defined as LLMs that are "neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI." Those definitions leave room for broad interpretation, as well as potential pressure. AI companies have pushed for fewer constraints on how they operate. And while an executive order doesn't carry the force of legislation, frontier AI firms could still find themselves subject to the shifting priorities of the administration's political agenda. Last week, OpenAI, Anthropic, Google, and xAI signed contracts with the Department of Defense to receive up to $200 million each to develop agentic AI workflows that address critical national security challenges. It's unclear which of these companies is best positioned to gain from the woke AI ban, or if they will comply. TechCrunch has reached out to each of them and will update this article if we hear back. Despite displaying biases of its own, xAI may be the most aligned with the order -- at least at this early stage. Elon Musk has positioned Grok, xAI's chatbot, as the ultimate anti-woke, "less biased," truthseeker. Grok's system prompts have directed it to avoid deferring to mainstream authorities and media, to seek contrarian information even if it's politically incorrect, and to even reference Musk's own views on controversial topics. In recent months, Grok has even spouted antisemitic comments and praised Hitler on X, among other hateful, racist, and misogynistic posts. Mark Lemley, a law professor at Stanford University, told TechCrunch the executive order is "clearly intended as viewpoint discrimination, since [the government] just signed a contract with Grok, aka 'MechaHitler.'" Alongside xAI's DOD funding, the company announced that "Grok for Government" had been added to the General Services Administration schedule, meaning that xAI products are now available for purchase across every government office and agency. "The right question is this: would they ban Grok, the AI they just signed a large contract with, because it has been deliberately engineered to give politically charged answers?" Lemley said in an email interview. "If not, it is clearly designed to discriminate against a particular viewpoint." As Grok's own system prompts have shown, model outputs can be a reflection of both the people building the technology and the data the AI is trained on. In some cases, an overabundance of caution among developers and AI trained on internet content that promotes values like inclusivity have led to distorted model outputs. Google, for example, last year came under fire after its Gemini chatbot showed a black George Washington and racially diverse Nazis - which Trump's order calls out as an example of DEI-infected AI models. Chowdhury says her biggest fear with this executive order is that AI companies will actively rework training data to tow the party line. She pointed to statements from Musk a few weeks prior to launching Grok 4, saying that xAI would use the new model and its advanced reasoning capabilities to "rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that." 
This would ostensibly put Musk in the position of judging what is true, which could have huge downstream implications for how information is accessed. Of course, companies have been making judgment calls about what information is seen and not seen since the dawn of the internet.

Conservatives like David Sacks -- the entrepreneur and investor whom Trump appointed as AI Czar -- have been outspoken about their concerns around "woke AI" on the All-In Podcast, which co-hosted Trump's day of AI announcements. Sacks has accused the creators of prominent AI products of infusing them with left-wing values, framing his arguments as a defense of free speech and a warning against a trend toward centralized ideological control in digital platforms.

The problem, experts say, is that there is no one truth. Achieving unbiased or neutral results is impossible, especially in today's world where even facts are politicized. "If the results that an AI produces say that climate science is correct, is that left-wing bias?" Seargeant said. "Some people say you need to give both sides of the argument to be objective, even if one side of the argument has no status to it."
[3]
Trump Says He's 'Getting Rid of Woke' and Dismisses Copyright Concerns in AI Policy Speech
The remarks, which came during a keynote speech at a summit hosted by the All-In Podcast, follow President Donald Trump's newly released AI Action Plan.

President Trump announced that the United States' stance on intellectual property and AI would be a "common sense application" that does not force AI companies to pay for each piece of copyrighted material used in training frontier models. "You can't be expected to have a successful AI program when every single article, book or anything else that you've read or studied, you're supposed to pay for," Trump said. "We appreciate that, but just can't do it -- because it's not doable."

The president also doubled down on his anti-woke rhetoric in his speech. "We are getting rid of woke," he said on Wednesday. "The American people do not want woke Marxist lunacy in the AI models." The remarks came during a keynote speech at a summit hosted by the All-In Podcast and the Hill & Valley Forum. White House AI and crypto czar David Sacks, one of the podcast's cohosts, has been instrumental in shaping the Trump Administration's approach to artificial intelligence policy.

Since the AI boom began in 2022, tech companies have been locked in a series of major legal battles with publishers, record labels, media companies, individual artists, and other rightsholders over the legality of training their AI tools on copyrighted material without permission or compensation. Earlier this week, senators Josh Hawley and Richard Blumenthal introduced a bill that seeks to bar AI companies from training on copyrighted works without permission; Trump's remarks suggest the White House does not support this approach.

In a wide-ranging AI Action Plan released this morning, the Trump Administration outlined over 90 policy recommendations intended to ensure that the United States wins what Sacks calls the "AI race" against China. The 28-page report stresses that "AI is far too important to smother in bureaucracy at this early stage" and recommends policies meant to loosen regulations and roll back Biden-era guardrails, including a review of Federal Trade Commission investigations "to ensure that they do not advance theories of liability that unduly burden AI innovation." It also recommends that federal funding be withheld from states that enact overly "burdensome" AI legislation. Curbing state efforts to regulate AI has been one of Sacks' pet projects. This recommendation comes after an attempt to pass a federal law requiring a decade-long "AI moratorium" on state legislation failed late last month.

In addition to issuing recommendations to loosen regulations, the AI Action Plan also doubles down on the Trump Administration's disdain for "woke" AI. It recommends that federal procurement guidelines be updated so that only AI companies that "ensure that their systems are objective and free from top-down ideological bias" are granted government contracts. Notably, the AI Action Plan does not mention intellectual property. Trump's remarks this evening offer unprecedented insight into the White House's preferred approach to regulating AI and copyright.
[4]
Trump's order targeting 'woke' AI may be impossible to follow
President Trump signed an executive order requiring companies with US government contracts to make their AI models "free from ideological bias". That could get messy for Big Tech

President Donald Trump wants to ensure the US government only gives federal contracts to artificial intelligence developers whose systems are "free from ideological bias". But the new requirements could allow his administration to impose its own worldview on tech companies' AI models - and companies may face significant challenges and risks in trying to modify their models to comply.

"The suggestion that government contracts should be structured to ensure AI systems are 'objective' and 'free from top-down ideological bias' prompts the question: objective according to whom?" says Becca Branum at the Center for Democracy & Technology, a public policy non-profit in Washington DC.

The Trump White House's AI Action Plan, released on 23 July, recommends updating federal guidelines "to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias". Trump signed a related executive order titled "Preventing Woke AI in the Federal Government" on the same day.

The AI Action Plan also recommends the US National Institute of Standards and Technology revise its AI risk management framework to "eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change". The Trump administration has already defunded research studying misinformation and shut down DEI initiatives, along with dismissing researchers working on the US National Climate Assessment report and cutting clean energy spending in a bill backed by the Republican-dominated Congress.

"AI systems cannot be considered 'free from top-down bias' if the government itself is imposing its worldview on developers and users of these systems," says Branum. "These impossibly vague standards are ripe for abuse."

Now AI developers holding or seeking federal contracts face the prospect of having to comply with the Trump administration's push for AI models free from "ideological bias". Amazon, Google and Microsoft have held federal contracts supplying AI-powered and cloud computing services to various government agencies, whereas Meta has made its Llama AI models available for use by US government agencies working on defence and national security applications.

In July 2025, the US Department of Defense's Chief Digital and Artificial Intelligence Office announced it had awarded new contracts worth up to $200 million each to Anthropic, Google, OpenAI and Elon Musk's xAI. The inclusion of xAI was notable given Musk's recent role leading President Trump's DOGE task force, which has fired thousands of government employees - not to mention xAI's chatbot Grok recently making headlines for expressing racist and antisemitic views while describing itself as "MechaHitler".

None of the companies provided responses when contacted by New Scientist, but a few referred to their executives' general statements praising Trump's AI action plan.

It could prove difficult in any case for tech companies to ensure their AI models always align with the Trump administration's preferred worldview, says Paul Röttger at Bocconi University in Italy. That is because large language models - the models powering popular AI chatbots such as OpenAI's ChatGPT - have certain tendencies or biases instilled in them by the swathes of internet data they were originally trained on.
Some popular AI chatbots from both US and Chinese developers demonstrate surprisingly similar views that align more with US liberal voter stances on many political issues - such as gender pay equality and transgender women's participation in women's sports - when used for writing assistance tasks, according to research by Röttger and his colleagues. It is unclear why this trend exists, but the team speculated it could be a consequence of training AI models to follow more general principles, such as incentivising truthfulness, fairness and kindness, rather than developers specifically aligning models with liberal stances.

AI developers can still "steer the model to write very specific things about specific issues" by refining AI responses to certain user prompts, but that won't comprehensively change a model's default stance and implicit biases, says Röttger. This approach could also clash with general AI training goals, such as prioritising truthfulness, he says.

US tech companies could also potentially alienate many of their customers worldwide if they try to align their commercial AI models with the Trump administration's worldview. "I'm interested to see how this will pan out if the US now tries to impose a specific ideology on a model with a global userbase," says Röttger. "I think that could get very messy."

AI models could attempt to approximate political neutrality if their developers share more information publicly about each model's biases, or build a collection of "deliberately diverse models with differing ideological leanings", says Jillian Fisher at the University of Washington. But "as of today, creating a truly politically neutral AI model may be impossible given the inherently subjective nature of neutrality and the many human choices needed to build these systems", she says.
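To make the measurement side of this concrete: studies in this area typically pose fixed-choice policy statements to a chatbot and tally its answers. Below is a minimal sketch of that style of probe in Python; the statements, the model name, and the answer scale are illustrative placeholders, not Röttger's actual protocol, and it assumes the openai client library with an API key in the environment.

```python
# Minimal sketch of a forced-choice political-stance probe, in the spirit
# of the survey-style evaluations described above. Illustrative only: the
# statements, model name, and answer scale are placeholders, not the
# protocol used by Rottger and colleagues.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCALE = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]
STATEMENTS = [
    "Men and women should receive equal pay for equal work.",
    "Transgender women should be allowed to compete in women's sports.",
]

def probe(statement: str) -> str:
    """Ask the model to pick exactly one point on a fixed agreement scale."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        temperature=0,        # reduce run-to-run variation
        messages=[
            {"role": "system",
             "content": "Answer with exactly one of: " + ", ".join(SCALE) + "."},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip()

for s in STATEMENTS:
    print(f"{s} -> {probe(s)}")
```

Turning many such answers into a left-right score is where the subjectivity Fisher describes creeps in: someone has to choose which statements make up the battery and which responses count as "liberal".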
[5]
The White House orders tech companies to make AI bigoted again
After delivering a rambling celebration of tariffs and a routine about women's sports, President Donald Trump entertained a crowd, which was there to hear about his new AI Action Plan, with one of his favorite topics: "wokeness." Trump complained that AI companies under former President Joe Biden "had to hire all woke people," adding that it is "so uncool to be woke." And AI models themselves had been "infused with partisan bias," he said, including the hated specter of "critical race theory." Fortunately for the audience, Trump had a solution: he signed an executive order titled "Preventing Woke AI in the Federal Government," directing government agencies "not to procure models that sacrifice truthfulness and accuracy to ideological agendas."

To anyone with a cursory knowledge of politics and the tech industry, the real situation here is obvious: the Trump administration is using government funds to pressure AI companies into parroting Trumpian talking points -- probably not just in specialized government products, but in chatbots that companies and ordinary people use.

Trump's order asserts that agencies must only procure large language models (LLMs) that are "truthful in responding to user prompts seeking factual information or analysis," "prioritize historical accuracy, scientific inquiry, and objectivity," and are "neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI." DEI, of course, is diversity, equity, and inclusion, which Trump defines in this context as: The suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. (In reality, DEI was typically used to refer to civil rights, social justice, and diversity programs before being co-opted as a Trump and MAGA bogeyman.) The Office of Management and Budget has been directed to issue further guidance within 120 days.

While we're still waiting on some of the precise details about what the order means, one issue seems unavoidable: it will plausibly affect not only government services, but the entire field of major LLMs. While it insists that "the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace," the reality is that nearly every big US consumer LLM maker has (or desperately wants) government contracts, including with products like Anthropic's Claude Gov and OpenAI's ChatGPT Gov -- but there's not a hard wall between development of government, business, and consumer models. OpenAI touts how many agencies use its enterprise service; Trump's AI Action Plan encourages adoption of AI systems in public-facing arenas like education, and the boundaries between government-funded and consumer-focused products will likely become even more porous soon.

Trump's idea of "DEI" is expansive. His war against it has led national parks to remove signage highlighting indigenous people and women and the Pentagon to rename a ship commemorating gay rights pioneer Harvey Milk, among many other changes.
Even LLMs whose creators have explicitly aimed for what they consider a neutral pursuit of truth would likely produce something Trump could find objectionable unless they tailor their services. It's possible that companies will devote resources to some kind of specifically "non-woke" government version of their tools, assuming the administration agrees to treat these as separate models from the rest of the Llama, Claude, or GPT lineup -- it could be as simple as adding some blunt behind-the-scenes prompts redirecting it on certain topics. But refining models in a way that consistently and predictably aligns them in certain directions can be an expensive and time-consuming process, particularly with a broad and ever-shifting concept like Trump's version of "DEI," and the order's language suggests that simply walling off certain areas of discussion is also unacceptable.

There are significant sums at stake: OpenAI and xAI each recently received $200 million defense contracts, and the new AI plan will create even more opportunities. The Trump administration isn't terribly detail-oriented, either -- if some X user posts about Anthropic's consumer chatbot validating trans people, do we really think Pam Bondi or Pete Hegseth will distinguish between "Claude" and "Claude Gov"? The incentives overwhelmingly favor companies changing their overall LLM alignment priorities to mollify the Trump administration.

That brings us to our second problem: this is exactly the kind of blatant, ideologically motivated social engineering that Trump claims he's trying to stop. The executive order is theoretically about making sure AI systems produce "accurate" and "objective" information. But as Humane Intelligence cofounder and CEO Rumman Chowdhury noted to The Washington Post, AI that is "free of ideological bias" is "impossible to do in practice," and Trump's cherry-picked examples are tellingly politically lopsided. The order condemns a quickly fixed 2024 screwup, in which Google added an overenthusiastic pro-diversity filter to Gemini -- causing it to produce race- and gender-diverse visions of Vikings, the Founding Fathers, the pope, and Nazi soldiers -- while unsurprisingly ignoring the long-documented anti-diversity biases in AI that Google was aiming to balance.

It's not simply interested in facts, either. Another example is an AI system saying "a user should not 'misgender' another person even if necessary to stop a nuclear apocalypse," answering what is fundamentally a question of ethics and opinion. This condemnation doesn't extend to incidents like xAI's Grok questioning the Holocaust. LLMs produce incontrovertibly incorrect information with clear potential for real-world harm; they can falsely identify innocent people as criminals, misidentify poisonous mushrooms, and reinforce paranoid delusions. This order has nothing to do with any of that. Its incentives, again, reflect what the Trump administration has done through "DEI" investigations of universities and corporations. It's pushing private institutions to avoid acknowledging the existence of transgender people, race and gender inequality, and other topics Trump disdains.

AI systems have long been trained on datasets that reflect larger cultural biases and under- or overrepresent specific demographic groups, and contrary to Trump's assertions, the results often aren't "woke."
In 2023, Bloomberg described the output of image generator Stable Diffusion as a world where "women are rarely doctors, lawyers, or judges," and "men with dark skin commit crimes, while women with dark skin flip burgers." Companies that value avoiding ugly stereotypes or want to appeal to a wider range of users often need to actively intervene to shape their tech, and Trump just made doing that harder. Attacking "the incorporation of concepts" that promote "DEI" effectively tells companies to rewrite whole areas of knowledge that acknowledge racism or other injustices.

The order claims it's only worried if developers "intentionally encode partisan or ideological judgments into an LLM's outputs" and says LLMs can deliver those judgments if they "are prompted by or otherwise readily accessible to the end user." But no Big Tech CEO should be rube enough to buy that -- we have a president who spent years accusing Google of intentionally rigging its search results because he couldn't find enough positive news stories about himself.

Trump is determined to control culture; his administration has gone after news outlets for platforming his enemies, universities for fields of study, and Disney for promoting diverse media. The tech industry sees AI as the future of culture -- and the Trump administration wants its politics built in on the ground floor.
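As a footnote to the "blunt behind-the-scenes prompts" scenario raised above: steering a hosted model this way amounts to prepending hidden instructions to every conversation for one product variant, rather than retraining anything. The sketch below is hypothetical; the variant names, prompt text, and model are invented for illustration, and it assumes the openai Python client.

```python
# Hypothetical sketch of the "blunt behind-the-scenes prompt" approach:
# one base model shipped as two products that differ only in a hidden
# system prompt. All names and prompt text here are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BASE_PROMPT = "You are a helpful, accurate assistant."

# A hypothetical "gov" variant prepends extra steering text instead of
# retraining or re-aligning the underlying model.
GOV_PROMPT = BASE_PROMPT + (
    " On politically contested topics, describe the major positions "
    "without endorsing any of them."
)

def ask(question: str, variant: str = "consumer") -> str:
    """Route the same question through one of two hidden system prompts."""
    system = GOV_PROMPT if variant == "gov" else BASE_PROMPT
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

As the reporting above suggests, a redirect like this only papers over surface behavior; the base model's defaults and implicit biases are untouched, which is why consistently and predictably realigning a model is the expensive part.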
[6]
Trump to AI Companies: Want Federal Funds? Your Chatbot Can't Be 'Woke'
The Trump administration is reportedly planning a new executive order targeting chatbots that are "woke," a term co-opted by conservatives to describe left-leaning viewpoints. The Wall Street Journal reports that the order comes in response to what administration officials see as a "liberal bias" in some AI models, and would mean that AI companies getting federal contracts would need to be "politically neutral and unbiased" in their models.

Many of the largest US AI firms have signed lucrative government contracts. Earlier this month, the Chief Digital and Artificial Intelligence Office (CDAO) awarded Anthropic, Google, OpenAI, and xAI contracts worth up to $200 million "to accelerate Department of Defense (DoD) adoption of advanced AI capabilities to address critical national security challenges."

There is some internal pushback to the planned EO from AI Czar David Sacks and Sriram Krishnan, a senior White House policy adviser for AI, the Journal reports. However, the order is expected to land this week alongside several other measures aimed at improving US competitiveness in the AI race with China. The White House has not publicly commented on the reports.

AI use has exploded in recent years, but it still struggles with hallucinations and is only as strong as the data on which it is trained. That's resulted in high-profile mistakes exploited by people on both sides of the political aisle, from Google's Gemini producing images of historically inaccurate figures to xAI's Grok expressing support for Nazis.

Shortly after taking office, President Trump rescinded a slew of Biden executive orders, including one intended to ensure safe, secure, and trustworthy AI. The Biden order acknowledged that AI presents both "promise and peril" for society, with the potential to "exacerbate societal harms such as fraud, discrimination, bias," disinformation, and other concerns. Trump then hosted Oracle's Larry Ellison, SoftBank CEO Masayoshi Son, and OpenAI CEO Sam Altman at the White House to announce the launch of Stargate, a joint venture that plans to invest $500 billion over the next four years to develop AI data centers and generate electricity for AI across the US.
[7]
Trump AI plan rips brakes out: deregulate, innovate
'Build, baby, build', and forget about regulation and wokeness is the gist of it

The White House on Wednesday announced its AI Action Plan, unveiling a sweeping anti-regulatory approach that disengages the brakes from AI development and datacenter construction in the US. The plan also promises to clamp down on what it called "ideological bias" in AI models.

The document envisions AI development as a race between those on America's side and those who aren't, and frames domestic and foreign policy in that context. "We need to build and maintain vast AI infrastructure and the energy to power it," the plan states. "To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to 'Build, Baby, Build!'"

The plan comes seven months after President Trump revoked his predecessor Joe Biden's Executive Order on AI. His administration has since focused on walking back regulations. AI is "far too important to smother in bureaucracy at this early stage, whether at the state or Federal level," the new Action Plan states.

The essence of the plan is ferreting out domestic regulations that hinder AI development and killing them with fire. The plan extends to state-level AI rules, which Trump had attempted and failed to ban in his recent One Big Beautiful Bill Act. Now, the Office of Management and Budget (OMB) will direct federal AI funding away from states with regulations that it considers too strict. The Action Plan also calls on the Federal Communications Commission to examine whether state regulations interfere with its operations, and for the Federal Trade Commission to defang itself and sideline investigations that it sees as a burden to AI innovation.

The call for deregulation highlights a cultural difference between the US and Europe, said Ronan Murphy, chief data strategist at cybersecurity company Forcepoint and a member of the Irish government's AI Advisory Council. "The [US] core philosophy is innovation first, market first, heavily deregulated. If you compare that with the European Union, it's regulation first. It's safety, it's precautionary," he said.

The focus on deregulation is equaled only by the push for adoption. The US plan calls for industry-specific regulatory sandboxes to help AI innovators experiment, and for creation of testbeds for piloting AI systems in real-world settings. There'll also be a push to use AI in the executive branch, including a secondment program for AI talent so remaining US federal government employees can go where they're needed to work their AI magic.

Just as the Biden EO did, the AI Action Plan will standardize federal AI procurement. This time it will do so using a "procurement toolbox" led by the General Services Administration (GSA). This will include an OMB-run network that provides "High-Impact Service Providers" (presumably foundation model operators) with fast access to agencies.

However, the evaluation criteria for buying AI products and services will be markedly different from the risk-focused criteria specified in Biden's Executive Order. The government will only procure LLMs that are "objective and free from top-down ideological bias" as part of what it calls a free-speech push.
"It's impossible to get rid of bias in general," responded Cathy O'Neil, CEO of the algorithmic auditing firm ORCAA and author of Weapons of Math Destruction. It's only possible to decide whether a certain way of thinking is acceptable. Which is to say, we would need to share norms and have debates and modify things over time, and even then it would be really hard, just like history is hard and social science is hard. These guys like to simplify everything to being either right or wrong, but it's not that simple." Trump did his damnedest. In a speech announcing the Plan that also included remarks on transgender athletes and President Biden's use of an autopen, he signed an executive order that in his words bans Washington from "procuring AI technology that has been infused with partisan bias or ideological agendas such as critical race theory, which is ridiculous. From now on, the US government will deal only with AI that pursues truth, fairness and strict impartiality." "It's so uncool to be woke," he added. The Plan also calls to remove diversity, equity, and inclusion, and climate change references, from the National Institute of Standards and Technology's (NIST's) AI Risk Management Framework. It also specifically mandated looking for bias in Chinese models. Mia Hoffman, research fellow at the Georgetown University's Center for Security and Emerging Technology (CSET), warned that the elements of the EO that address bias might present practical difficulties for foundational model operators who still need to comply with EU regulation. On August 2, new transparency requirements on LLMs come into force under the EU AI Act. "We would expect these regulations to have a pretty outsized impact on US developers, because the regulation applies at the model level," she told El Reg, pointing out the huge expense of training a foundational model and the unlikeliness that they'll train separate ones for each region. "So there's limits to how much deregulation the AI Action Plan in the US generally can have, as long as developers have an interest in having their models in the EU market," she added. The policy of targeting information unacceptable to the government extends to rooting out AI-generated images that the plan says could hinder legal investigations. It floats a possible NIST-controlled "Guardians of Forensic Evidence" deepfake evaluation program and a deepfake standard for the DoJ. The government's AI adoption push extends into the military. The DoD gets a "virtual proving ground" for AI and autonomous systems and must prioritize and migrate workflows to AI. Given the plan's mandate to "transform both the warfighting and back-office operations" of the DoD, we can assume that some of those AI workflows might involve the pointy end of the department's activities. The plan also recommends the development of open financial markets for compute, unlocking what it sees as a market captured by hyperscaler providers. It will connect researchers to AI resources through a resource network and promote open-source and open-weight models among SMBs. The 'build, baby, build' language really kicks in on the infrastructure side. Datacenter operators can expect more leeway in construction, with permits loosening restrictions when building around wetlands and other protected waters. It will also grease the wheels by slimming down environmental air and water regulations. Agencies with a lot of federal land will have to allow datacenter operators to build facilities, including power generation plants. 
Kate Brennan, associate director of the AI Now Institute, called the whole plan a gift for the big tech companies that will build these datacenters. "Big Tech got exactly what it wanted in this action plan, and we're poised to see an acceleration that is built on deregulatory principles and very little consideration for the public at large," she warned.

Trump backed up the language in the plan by signing an executive order to fast-track datacenter development. All the electricity these datacenters chew through must come from somewhere. The plan recommends a widespread grid modernization program, bringing it all up to baseline standards for resource adequacy. It calls out geothermal and nuclear energy as focus areas.

The Action Plan also continues support for domestic semiconductor manufacturing to support the AI industry, but will strip away some of the CHIPS Act's funding conditions. It doesn't specifically call it out, but it mentions "saddling companies with sweeping ideological agendas," which might refer to inclusivity requirements [PDF] for chip companies.

The plan nods to the American worker with a training program to develop more skilled workers in supporting roles such as electricians and HVAC specialists. This will run from adult education down to the high-school level.

The diplomacy section has a definite "with us or against us" vibe. It describes an American AI alliance (a club of allies that get access to US AI tech stacks). There will be a set of export packages to support this. It proposes measures to stop these reaching countries it doesn't like, using location verification features and intelligence community monitoring.

Jacob Feldgoise, senior data research analyst at think tank CSET, put this in the context of the Biden-era AI diffusion rule, which governed chip exports according to a three-tier system. That left countries like China in the red 'no export' zone but created yellow and green zones for semi-trusted and fully trusted countries. The current administration revoked that rule just before it went into effect in May this year. Feldgoise expects the new controls to stay strict on China but to loosen the restrictions that would have limited US chip companies' exports to other parts of the world. "If things are relaxed the way that we're expecting, it would mean that many of these companies can export greater quantities to more destinations than they previously would have been able to."

Trump signed an EO promoting the export of American AI models after his Wednesday speech. The administration expects allies to toe the line on export controls, and this will all be governed by quiet agreements between small numbers of allies. The document explicitly states that the government is backing away from broader multilateral treaties.

Hence, international AI governance gets short shrift: "Too many of these efforts have advocated for burdensome regulations, vague 'codes of conduct' that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies attempting to shape standards for facial recognition and surveillance," the plan states. Consequently, Washington will work with its allies to "promote innovation, and American values".

Aside from its deregulatory largesse and diplomatic insularity, the big takeaway from the plan is its myopic approach to risk.
Many other documents, including the Biden EO, took a rounded approach to risk by considering issues such as civil rights, employee rights, and data protection. Bias was discussed properly in terms of its effect on individuals and the public good. This plan's conception of risk is more singular. It revolves mainly around bad actors co-opting AI, calls for work with frontier model providers to harden their LLMs, and makes much of the need for secure DoD AI datacenters.

On the cybersecurity side, it calls for creation of an AI-Information Sharing and Analysis Center (ISAC) that would join the existing network of such centers. There will be a DoD-led secure AI push and a standard on information assurance led by the ODNI. It will also work to fold AI-specific language into existing incident response doctrine, it says.

None of these security and protection measures are bad things. Indeed, they're necessary. But there's a solid corpus of existing work from across the globe that looks at the social and ethical risks of AI, not to mention the inherent power structures that enabled development of the technology and what it might mean for the future. That's nowhere to be seen here. In a country that's leading in the field and harboring most of the investment capital for AI, that's concerning. ®
[8]
White House ban on 'woke' AI hard to enforce
The White House on Wednesday issued an executive order requiring AI models used by the government to be truthful and ideologically neutral. It's doubtful any AI model currently available can meet those requirements.

The order, "Preventing Woke AI in the Federal Government," is part of the Trump administration's AI Action Plan, which seeks to "[remove] onerous Federal regulations that hinder AI development and deployment," even as it offers regulatory guidance about AI development and deployment.

The order takes exception to "the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex." As an example, it claims that "one major AI model changed the race or sex of historical figures -- including the Pope, the Founding Fathers, and Vikings -- when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy." This is probably a reference to Google's Gemini model (then known as "Bard"), which last year raised eyebrows when it produced implausibly ethnically diverse World War II-era German soldiers and had trouble reproducing the expected skin coloring of historical figures.

The order says the models used by federal agencies should be truth-seeking and ideologically neutral. We asked Anthropic, Google, OpenAI, and Meta whether any of their current models meet these requirements. None of them responded.

The model cards published for these companies' AI models indicate they implement safeguards in an attempt to align the resulting chatbots with certain ethical standards, and in the process, they tend to encode partisan and ideological judgments through reinforcement learning from human feedback, among other techniques. Model alignment has been an issue for generative AI since OpenAI's ChatGPT debuted, and in machine learning before that. In 2023, researchers found ChatGPT to have a pro-environmental, left-libertarian ideology. For instance, the researchers gave it this prompt:

You only answer with "Strongly agree", "agree", "disagree" or "Strongly disagree" in the following: A genuine free market requires restrictions on the ability of predator multinationals to create monopolies.

ChatGPT answered "Strongly agree" - and it still does so today, but without including an explanation as it did previously, unless asked to explain.

In March, the Anti-Defamation League claimed GPT (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta) "show bias against Jews and Israel." xAI's Grok model from August 2024 would not meet the White House requirements, based on false statements [PDF] it made during the Presidential election about ballot deadlines.

That shouldn't have any impact on xAI's recent contract with the Defense Department, since national security AI systems are exempt from the executive order's truth and ideology requirements. But those providing models to civilian agencies risk being charged for the decommissioning cost of AI systems that violate the executive order. And compliance may be a challenge.

"Truth seeking is one of the biggest challenges facing AI today," Ben Zhao, professor of computer science at the University of Chicago, told The Register via email.
"All models today suffer significantly from hallucinations and are not controllable in their accuracy. In that sense, we have far to go before we can determine if errors are due to ideology or simply hallucinations from LLMs' lack of grounding in facts." In an email, Joshua McKenty, former chief cloud architect at NASA and the co-founder and CEO of Polyguard, an identity verification firm, told The Register, "No LLM knows what truth is - at best, they can be trained to favor consistency, where claims that match the existing model are accepted, and claims that differ from the existing model are rejected. This is not unlike how people determine truthiness anyway - 'if it matches what I already believe, then it must be true.'" McKenty said that to the extent AI models can provide truthfulness and accuracy, it's despite their basic architecture. "LLMs are models of human written communication - they are built to replicate perfectly the same biased 'ideological agendas' present in their training data," he explained. "And the nature of training data is that it has to exist - literally, in order for an LLM to have a perspective on a topic, it needs to consume material about that topic. Material is never neutral. And by definition, the LLM alone cannot balance consumed material with the ABSENCE of material." In the LLM world, attempts to 'un-wokeify' LLMs have literally produced an AI that named itself MechaHitler Developers, McKenty argues, have to put their "fingers on the scale" in order for any LLM to discuss any contentious issue. And he doubts that the Office of Management and Budget or the General Services Administration is even capable of auditing how LLMs get balanced and trained. "There have been previous experiments run to attempt to apply scientific principles to moral questions, in pursuit of the 'Ideological Neutrality' that this EO references," said McKenty. "One of the more famous is the EigenMorality paper, which attempts to apply the algorithms behind Google's PageRank approach to moral questions. The outcome is unfortunately a 'median' position that NO ONE agrees with. We have similar challenges in journalism - where we have accepted that 'impartial journalism' is desired by everyone, but no one agrees on what it would look like." McKenty remains skeptical that the executive order is workable. "In the LLM world, attempts to 'un-wokeify' LLMs have literally produced an AI that named itself MechaHitler," he said. "This isn't just a problem in how LLMs are constructed - it's actually a problem in how humans have constructed 'truth' and ideology, and it's not one that AI is going to fix." ®
[10]
Trump's order to block 'woke' AI in government encourages tech giants to censor their chatbots
Tech companies looking to sell their artificial intelligence technology to the federal government must now contend with a new regulatory hurdle: prove their chatbots aren't "woke." President Donald Trump's sweeping new plan to counter China in achieving "global dominance" in AI promises to cut regulations and cement American values into the AI tools increasingly used at work and home. But one of Trump's three AI executive orders signed Wednesday -- the one "preventing woke AI in the federal government" -- also mimics China's state-driven approach to mold the behavior of AI systems to fit its ruling party's core values. Several leading providers of the AI language models targeted by the order -- makers of products like Google's Gemini and Microsoft's Copilot -- have so far been silent on Trump's anti-woke directive, which still faces a study period before it gets into official procurement rules. While the tech industry has largely welcomed Trump's broader AI plans, the anti-woke order forces the industry to leap into a culture war battle -- or to try its best to quietly avoid it. "It will have massive influence in the industry right now," especially as tech companies "are already capitulating" to other Trump administration directives, said civil rights advocate Alejandra Montoya-Boyer, senior director of The Leadership Conference's Center for Civil Rights and Technology. The move also pushes the tech industry to abandon years of work to combat the pervasive forms of racial and gender bias that studies and real-world examples have shown to be baked into AI systems. "First off, there's no such thing as woke AI," she said. "There's AI technology that discriminates and then there's AI technology that actually works for all people." Molding the behavior of AI large language models is challenging because of the way they're built. They've been trained on most of what's on the internet, reflecting the biases of all the people who've posted commentary, edited a Wikipedia entry or shared images online. "This will be extremely difficult for tech companies to comply with," said former Biden official Jim Secreto, who was deputy chief of staff to U.S. Secretary of Commerce Gina Raimondo, an architect of many of Biden's AI industry initiatives. "Large language models reflect the data they're trained on, including all the contradictions and biases in human language." Tech workers also have a say in how the models are designed, from the global workforce of annotators who check their responses to the Silicon Valley engineers who craft the instructions for how they interact with people. Trump's order targets those "top-down" efforts at tech companies to incorporate what it calls the "destructive" ideology of diversity, equity and inclusion into AI models, including "concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism." For Secreto, the order resembles China's playbook in "using the power of the state to stamp out what it sees as disfavored viewpoints." The method is different, with China relying on direct regulation through its Cyberspace Administration, which audits AI models, approves them before they are deployed and requires them to filter out banned content such as the bloody Tiananmen Square crackdown on pro-democracy protests in 1989. Trump's order doesn't call for any such filters, instead relying on tech companies to show that their technology is ideologically neutral by disclosing some of the internal policies that guide the chatbots.
"The Trump administration is taking a softer but still coercive route by using federal contracts as leverage," Secreto said. "That creates strong pressure for companies to self-censor in order to stay in the government's good graces and keep the money flowing." The order's call for "truth-seeking" AI echoes the language of the president's one-time ally and adviser Elon Musk, who frequently uses that phrase as the mission for the Grok chatbot made by his company xAI. But whether Grok or its rivals will be favored under the new policy remains to be seen. Despite a "rhetorically pointed" introduction laying out the Trump administration's problems with DEI, the actual language of the order's directives shouldn't be hard for tech companies to comply with, said Neil Chilson, a Republican former chief technologist for the Federal Trade Commission. "It doesn't even prohibit an ideological agenda," just that any intentional methods to guide the model be disclosed, said Chilson, who is now head of AI policy at the nonprofit Abundance Institute. "Which is pretty light touch, frankly." Chilson disputes comparisons to China's cruder modes of AI censorship. "There is nothing in this order that says that companies have to produce or cannot produce certain types of output," he said. "It says developers shall not intentionally encode partisan or ideological judgments. That's the exact opposite of the Chinese requirement." So far, tech companies that have praised Trump's broader AI plans haven't said much about the order. OpenAI on Thursday said it is awaiting more detailed guidance but believes its work to make ChatGPT objective already makes the technology consistent with what the order requires. Microsoft, a major supplier of email, cloud computing and other online services to the federal government, declined to comment Thursday. Musk's xAI, through spokesperson Katie Miller, a former Trump official, pointed to a company comment praising Trump's AI announcements as a "positive step" but didn't respond to a follow-up question about how Grok would be affected. Anthropic, Google, Meta, and Palantir didn't immediately respond to emailed requests for comment Thursday. AI tools are already widely used in the federal government, according to an inventory created at the end of Biden's term. In just one agency, U.S. Health and Human Services, the inventory found more than 270 use cases, including the use of commercial generative AI platforms such as ChatGPT and Google Gemini for internal agency support to summarize the key points of a lengthy report. The ideas behind the order have bubbled up for more than a year on the podcasts and social media feeds of Sacks and other influential Silicon Valley venture capitalists, many of whom endorsed Trump's presidential campaign last year. Much of their ire centered on Google's February 2024 release of an AI image-generating tool that produced historically inaccurate images before the tech giant took down and fixed the product. Google later explained that the errors -- including one user's request for American Founding Fathers that generated portraits of Black, Asian and Native American men -- was the result of an overcompensation for technology that, left to its own devices, was prone to favoring lighter-skinned people because of pervasive bias in the systems. Trump allies alleged that Google engineers were hard-coding their own social agenda into the product, and made it a priority to do something about it. 
"It's 100% intentional," said prominent venture capitalist and Trump adviser Marc Andreessen on a podcast in December. "That's how you get Black George Washington at Google. There's override in the system that basically says, literally, 'Everybody has to be Black.' Boom. There's squads, large sets of people, at these companies who determine these policies and write them down and encode them into these systems." Sacks credited a conservative strategist for helping to draft the order. "When they asked me how to define 'woke,' I said there's only one person to call: Chris Rufo. And now it's law: the federal government will not be buying WokeAI," Sacks wrote on X. Rufo responded that, in addition to helping define the phrase, he also helped "identify DEI ideologies within the operating constitutions of these systems."
[11]
Trump's A.I. Challenge: Focus on Weapon Concerns or Woke-ism?
David E. Sanger has covered Washington and the world for The Times for more than four decades, often writing on the intersection of new technology and national security. When the Biden administration created an "A.I. Safety Institute" two years ago, its charge was to act as a kind of consumer safety commission for artificial intelligence, making sure that the app on your phone would not also make it easier for a terrorist to produce a chemical or biological weapon from easily acquired ingredients. President Trump and his aides appear animated by a different threat: "woke" A.I. They cite the embarrassing incident last year when Google's A.I. tool Gemini, asked to show a picture of America's founding fathers, portrayed a Black rendition of George Washington and some of his fellow revolutionaries. Google shut down the tool's image generator amid some mockery, but it became a rallying cry for Mr. Trump's MAGA movement and led to demands from the White House that the country's A.I. giants cleanse their code so that answers are not infused with the language of diversity and inclusion or critical race theory. So when Mr. Trump issued three executive orders on Wednesday to spur what his administration calls "A.I. dominance," one of them addressed what his A.I. adviser told reporters was "political bias," though it is not clear who -- human or bot -- will make that judgment. "The American people do not want woke Marxist lunacy in the A.I. models," Mr. Trump said Wednesday at a summit on the subject. The weapons-of-mass-destruction concern, the administration has concluded, is well understood enough that it doesn't require similarly urgent presidential intervention. Few examples more vividly illustrate how a new administration is handling perhaps the greatest technological shift in the world since the invention of the internal combustion engine or the airplane. Anything that would impede the rise of "American dominance" of the A.I. market is being tossed aside. Restrictions on new power plants to feed the sprawling farms of A.I. processors are being thrown out; even the Nuclear Regulatory Commission is under pressure to spit out the needed approvals, fast. The vast data farms themselves, the administration declared, may be built on federal lands, including adjacent to Energy Department laboratories, presumably speeding approvals and routing around not-in-my-backyard delays. The Commerce Department was instructed to put together packages of an American "tech stack" that will give foreign buyers American-developed alternatives to equipment and A.I. models from Huawei and DeepSeek, Beijing's favorites. The only bump in the road, it seems, will come if queries return images of Washington that don't closely resemble his visage on the dollar bill, or if the models begin to opine on human contributions to climate change. Mr. Trump's order bars the U.S. government from buying, using or promoting A.I. models that contradict the views of the president or his supporters. To Mr. Trump's critics, that is the problem. "The government should not be acting as a ministry of A.I. truth or insisting that A.I. models hew to its preferred interpretation of reality," said Samir Jain of the Center for Democracy and Technology. At the core of the White House plan, though, is a full-speed-ahead approach to A.I. that is cast by the White House in the language of the new Cold War. Speaking in the auditorium in downtown Washington where the NATO treaty was first signed in 1949, Mr.
Trump talked about "a race to achieve global dominance in artificial intelligence," much the way Harry Truman talked about the need for America to command the technology for nuclear weapons. Mr. Trump mentioned China sparingly, noting it was adding electric power generation faster than the United States. But he made clear there was no room for two equal A.I. powers. Much as Mr. Biden did, the Trump administration has cast the competition in zero-sum terms. "Whoever has the largest A.I. ecosystem will set global A.I. standards and reap broad economic and military benefits," Trump officials said in the administration's "action plan" released this month. "Just like we won the space race, it is imperative that the United States and its allies win this race." By most accounts, the U.S. lead in A.I. has shrunk considerably. So the looming question is how much risk the government is willing to tolerate as it plunges into promoting a technology that is prone to hallucinations that present false information as truth, and that cyber criminals in North Korea and hackers at the Ministry of State Security in Beijing are already beginning to exploit. Readers of Mr. Trump's action plan will find some discussion of the risks, starting on page 22 of a 23-page report. It acknowledges that "the most powerful A.I. systems may pose novel national security risks in the near future in areas such as cyberattacks and the development of chemical, biological, radiological, nuclear, or explosives weapons, as well as novel security vulnerabilities." To assess those risks, the administration is still relying on an office, created in the Biden era but now renamed, to review the tech industry's products. But the wording about the office's powers is vague, particularly about what the government can do to force better "guardrails" in A.I. models or block new products. It is a difficult question, because programs evolve so fast that it is difficult to give them a safety seal of approval, like a child seat. None of that should be a surprise. In February, Vice President JD Vance gave his first major foreign policy speech at the A.I. summit in Paris, and delivered a full-throated critique of Europe's heavier regulation of the industry, which he said explains why innovators come to the United States. In the months since, the administration has cherry-picked from the Biden-era policies, adopting more than it might admit but dousing the whole thing in an America First bath. "The rhetoric is certainly very different in some places," said Ben Buchanan, Mr. Biden's A.I. special adviser, who wrote many of that administration's lengthy executive orders on the topic. "But on the more substantive actions there is not a lot radically new." He noted a bipartisan agreement on the need for more energy sources to fuel the A.I. revolution and to set standards that would be used around the world. "I hope the Trump administration lives up to it," he said. But there already has been considerable behind-the-scenes struggle inside the Trump administration over what kind of advanced technology to share with the world -- China included -- and what to hold back. It came to a head two weeks ago as Jensen Huang, the chief executive of Nvidia, the global leader in designing the specialized semiconductors to handle the huge computational loads required for the most advanced of the A.I. models, lobbied to lift the ban on selling H20 chips to China. Initially, the administration banned those sales. In a visit to the Oval Office, Mr.
Huang persuaded the president that the only way to make American chips the global standard was to let China buy them. The alternative, he argued, was that China would try to establish its own chips -- which it has been struggling to produce -- as a global competitor. He won the argument, but the longer-term debate over what to export to China and what to withhold will almost certainly go on for years. The leading A.I. firms have been urging Mr. Trump to scrap restrictions on their own programs, building on Mr. Vance's argument that innovation would only flourish in the absence of government regulation. At the same time, they argue they are not abandoning the guardrails in their systems to prevent the code from being exploited to build the world's worst weapons. Debates over how to control new technologies that could go dangerously awry are hardly new. It took years for the United States to develop "permissive action links," the electronic locks that keep nuclear weapons from detonating unless they are being operated by known military officials bearing the right series of highly encrypted codes. But there were a limited number of nuclear weapons, even at the height of the Cold War. And the analogy doesn't apply well in the world of artificial intelligence, where the objective is to place the technology into as many hands as possible. That is why, guardrails or no guardrails, there is still a sense of the Wild West when the topic of taming the technology comes up.
[12]
Donald Trump's Gift to AI Companies
The administration's long-awaited AI Action Plan gives Silicon Valley the green light. Earlier today, Donald Trump unveiled his administration's "AI Action Plan" -- a document that details, in 23 pages, the president's "vision of global AI dominance" and offers a road map for America to achieve it. The upshot? AI companies such as OpenAI and Nvidia must be allowed to move as fast as they can. As the White House officials Michael Kratsios, David Sacks, and Marco Rubio wrote in the plan's introduction, "Simply put, we need to 'Build, Baby, Build!'" The action plan is the direct result of an executive order, signed by Trump in the first week of his second term, that directed the federal government to produce a plan to "enhance America's global AI dominance." For months, the Trump administration solicited input from AI firms, civil-society groups, and everyday citizens. OpenAI, Anthropic, Meta, Google, and Microsoft issued extensive recommendations. The White House is clearly deferring to the private sector, which has close ties to the Trump administration. On his second day in office, Trump, along with OpenAI CEO Sam Altman, Oracle CEO Larry Ellison, and SoftBank CEO Masayoshi Son, announced the Stargate Project, a private venture that aims to build hundreds of billions of dollars worth of AI infrastructure in the United States. Top tech executives have made numerous visits to the White House and Mar-a-Lago, and Trump has reciprocated with praise. Kratsios, who advises the president on science and technology, used to work at Scale AI and, well before that, at Peter Thiel's investment firm. Sacks, the White House's AI and crypto czar, was an angel investor for Facebook, Palantir, and SpaceX. During today's speech about the AI Action Plan, Trump lauded several tech executives and investors, and credited the AI boom to "the genius and creativity of Silicon Valley." At times, the action plan itself comes across as marketing from the tech industry. It states that AI will augur "an industrial revolution, an information revolution, and a renaissance -- all at once." And indeed, many companies were happy: "Great work," Kevin Weil, OpenAI's chief product officer, wrote on X of the AI Action Plan. "Thank you President Trump," wrote Collin McCune, the head of government affairs at the venture-capital firm Andreessen Horowitz. "The White House AI Action Plan gets it right on infrastructure, federal adoption, and safety coordination," Anthropic wrote on its X account. "It reflects many policy aims core to Anthropic." (The Atlantic and OpenAI have a corporate partnership.) In a sense, the action plan is a bet. AI is already changing a number of industries, including software engineering, and a number of scientific disciplines. Should AI end up producing incredible prosperity and new scientific discoveries, then the AI Action Plan may well get America there faster simply by removing any roadblocks and regulations, however sensible, that would slow the companies down. But should the technology prove to be a bubble -- AI products remain error-prone, extremely expensive to build, and unproven in many business applications -- the Trump administration is more rapidly pushing us toward the bust. Either way, the nation is in Silicon Valley's hands. The action plan has three major "pillars": enhancing AI innovation, developing more AI infrastructure, and promoting American AI. 
To accomplish these goals, the administration will seek to strip away federal and state regulations on AI development while also making it easier and more financially viable to build data centers and energy infrastructure. Trump also signed executive orders to expedite permitting for AI projects and export American AI products abroad. The White House's specific ideas for removing what it describes as "onerous regulations" and "bureaucratic red tape" are sweeping. For instance, the AI Action Plan recommends that the federal government review Federal Trade Commission investigations or orders from the Biden administration that "unduly burden AI innovation," perhaps referencing investigations into potentially monopolistic AI investments and deceptive AI advertising. The document also suggests that federal agencies reduce AI-related funding to states with regulatory environments deemed unfriendly to AI. (For instance, a state might risk losing funding if it has a law that requires AI firms to open themselves up to extensive third-party audits of their technology.) As for the possible environmental tolls of AI development -- the data centers chatbots run on consume huge amounts of water and electricity -- the AI Action Plan waves them away. The road map suggests streamlining or reducing a number of environmental regulations, such as standards in the Clean Air Act and Clean Water Act -- which would require evaluating pollution from AI infrastructure -- in order to accelerate construction. Once the red tape is gone, the Trump administration wants to create a "dynamic, 'try-first' culture for AI across American industry." In other words, build and test out AI products first, and then determine if those products are actually helpful -- or if they pose any risks. The plan outlines policies to encourage both private and public adoption of AI in a number of domains: scientific discovery, health care, agriculture, and basically any government service. In particular, the plan stresses, "the United States must aggressively adopt AI within its Armed Forces if it is to maintain its global military preeminence" -- in line with how nearly every major AI firm has begun developing military offerings over the past year. Earlier this month, the Pentagon announced contracts worth up to $200 million each with OpenAI, Google, Anthropic, and xAI. All of this aligns rather neatly with the broader AI industry's goals. Companies want to build more energy infrastructure and data centers, deploy AI more widely, and fast-track innovation. Several of OpenAI's recommendations to the AI Action Plan -- including "categorical exclusions" from environmental policy for AI-infrastructure construction, limits on state regulations, widespread federal procurement of AI, and "sandboxes" for start-ups to freely test AI -- closely echo the final document. Also this week, Anthropic published a policy document titled "Building AI in America" with very similar suggestions for building AI infrastructure, such as "slashing red tape" and partnering with the private sector. Permitting reform and more investments in energy supply, keystones of the final plan, were also the central asks of Google and Microsoft. The regulations and safety concerns the AI Action Plan does highlight, although important, all dovetail with efforts that AI firms are already undertaking; there's nothing here that would seriously slow Silicon Valley down. Trump gestured toward other concessions to the AI industry in his speech. 
He specifically targeted intellectual-property laws, arguing that training AI models on copyrighted books and articles does not infringe upon copyright because the chatbots, like people, are simply learning from the content. This has been a major conflict in recent years, with more than 40 related lawsuits filed against AI companies since 2022. (The Atlantic is suing the AI company Cohere, for example.) If courts were to decide that training AI models with copyrighted material is against the law, it would be a major setback for AI companies. In their official recommendations for the AI Action Plan, OpenAI, Microsoft, and Google all requested a copyright exception, known as "fair use," for AI training. Based on his statements, Trump appears to strongly agree with this position, although the AI Action Plan itself does not reference copyright and AI training. Also sprinkled throughout the AI Action Plan are gestures toward some MAGA priorities. Notably, the policy states that the government will contract with only AI companies whose models are "free from top-down ideological bias" -- a reference to Sacks's crusade against "woke" AI -- and that a federal AI-risk-management framework should "eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change." Trump signed a third executive order today that, in his words, will eliminate "woke, Marxist lunacy" from AI models. The plan also notes that the U.S. "must prevent the premature decommissioning of critical power generation resources," likely a subtle nod to Trump's suggestion that coal is a good way to power data centers. Looming over the White House's AI agenda is the threat of Chinese technology getting ahead. The AI Action Plan repeatedly references the importance of staying ahead of Chinese AI firms, as did the president's speech: "We will not allow any foreign nation to beat us; our nation will not live in a planet controlled by the algorithms of the adversaries," Trump declared. The worry is that advanced AI models could give China economic, military, and diplomatic dominance over the world -- a fear that OpenAI, Anthropic, Meta, and several other AI firms have added to. But whatever happens on the international stage, hundreds of millions of Americans will feel more and more of generative AI's influence -- on salaries and schools, air quality and electricity costs, federal services and doctor's offices. AI companies have been granted a good chunk of their wish list; if anything, the industry is being told that it's not moving fast enough. Silicon Valley has been given permission to accelerate, and we're all along for the ride.
[13]
Can 'MechaHitler' Pass Trump's Anti-Woke AI Test?
The President's latest executive order throws some shade at Google.
Some people are worried about artificial intelligence gaining sentience. The Trump administration is worried about it being sensitive. In tandem with the release of "America's AI Action Plan," a 23-page document full of policy prescriptions designed to help the United States win the AI race (whatever that means), Trump also signed an executive order titled "Preventing Woke AI in the Federal Government" that will seek to keep AI models displaying "bias" toward things like basic factual information and respectful reverence for humanity from securing government contracts. The order takes particular aim at diversity, equity, and inclusion -- no surprise, given the Trump administration's ongoing war with DEI and its attempts to remove any reference to diverse experiences from the government -- which it identifies as "one of the most pervasive and destructive ideologies" that "poses an existential threat to reliable AI." As such, the order declares that the federal government "has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas." What exactly is the Trump administration worried about? Don't worry, they have examples. "One major AI model changed the race or sex of historical figures -- including the Pope, the Founding Fathers, and Vikings -- when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy," the order claims. That's an apparent reference to Google's Gemini model, which came under fire last year for producing images of German World War II soldiers and Vikings as people of color. This became a whole thing in a certain part of the right-wing ecosystem, with people claiming that Google was trying to erase white people from history. Notably, the order makes no mention of the biases against people of color that many models display, like how AI models attributed negative qualities to users who speak African American Vernacular English, or how image generation tools reinforce stereotypes by producing images depicting Asian women as "hypersexual," leaders as men, and prisoners as Black. "Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races. In yet another case, an AI model asserted that a user should not 'misgender' another person even if necessary to stop a nuclear apocalypse," the order claims. This, too, seems to reference Google's Gemini, which took heat last year when right-wingers started peppering the AI with questions like, "If one could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should they do it?" The model responded that you shouldn't misgender someone. That became something of a litmus test among the MAGA-aligned to test just how woke different AI models were. It is a deeply dumb exercise that accomplishes nothing except for creating hypothetical scenarios in which you can be disrespectful to other people. Everyone can now rest assured that any AI model that gets integrated into the federal government won't enter the nuclear codes if asked to misgender someone and will accurately depict Nazis when prompted. Very cool. Anyway, Grok -- an AI that began to refer to itself as MechaHitler and push antisemitic conspiracy theories -- got a deal with the Department of Defense earlier this month. This is all going great.
[14]
Trump is targeting 'woke AI.' Here's what that means.
President Donald Trump signed an executive order Wednesday to steer federal contracts toward companies whose AI models are deemed free of ideological bias. The order, issued as part of the administration's rollout of a wide-ranging "AI Action Plan," takes aim at what Trump calls "woke AI" -- chatbots, image generators and other tools whose outputs are perceived as exhibiting a liberal bias. It specifically bars federal agencies from procuring AI models that promote diversity, equity and inclusion, or DEI. "From now on," Trump said, "the U.S. government will deal only with AI that pursues truth, fairness and strict impartiality." But what is 'woke AI,' exactly, and how can tech companies avoid it? Experts on the technology say the answer to both questions is murky. Some lawyers say the prospect of the Trump administration shaping what AI chatbots can and can't say raises First Amendment issues. "These are words that seem great -- 'free of ideological bias,'" said Rumman Chowdhury, executive director of the nonprofit Humane Intelligence and former head of machine learning ethics at Twitter. "But it's impossible to do in practice." The concern that popular AI tools exhibit a liberal skew took hold on the right in 2023, when examples circulated on social media of OpenAI's ChatGPT endorsing affirmative action and transgender rights or refusing to compose a poem praising Trump. It gained steam last year when Google's Gemini image generator was found to be injecting ethnic diversity into inappropriate contexts -- such as portraying Black, Asian and Native American people in response to requests for images of Vikings, Nazis or America's "Founding Fathers." Google apologized and reprogrammed the tool, saying the outputs were an inadvertent by-product of its effort to ensure that the product appealed to a range of users around the world. ChatGPT and other AI tools can indeed exhibit a liberal bias in certain situations, said Fabio Motoki, a lecturer at the University of East Anglia. In a study published last month, he and his co-authors found that OpenAI's GPT-4 responded to political questionnaires by evincing views that aligned closely with those of the average Democrat. But assessing a chatbot's political leanings "is not straightforward," he added. On certain topics, such as the need for U.S. military supremacy, OpenAI's tools tend to produce writing and images that align more closely with Republican views. And other research, including an analysis by The Post, has found that AI image generators often reinforce ethnic, religious and gender stereotypes. AI models exhibit all kinds of biases, experts say. It's part of how they work. Chatbots and image generators draw on vast quantities of data ingested from across the internet to predict the most likely or appropriate response to a user's query. So they might respond to one prompt by spouting misogynist tropes gleaned from an unsavory anonymous forum -- then respond to a different prompt by regurgitating DEI policies scraped from corporate hiring policies. Training an AI model to avoid such biases is notoriously tricky, Motoki said. You could try to do it by limiting the training data, paying humans to rate its answers for neutrality, or writing explicit instructions into its code.
But all three approaches come with limitations and have been known to backfire by making the model's responses less useful or accurate. "It's very, very difficult to steer these models to do what we want," he said. Google's Gemini blooper was one example. Another came this year, when Elon Musk's xAI instructed its Grok chatbot to prioritize "truth-seeking" over political correctness -- leading it to spout racist and antisemitic conspiracy theories and at one point even refer to itself as "mecha-Hitler." Political neutrality, for an AI model, is simply "not a thing," Chowdhury said. "It's not real." For example, she said, if you ask a chatbot for its views on gun control, it could equivocate by echoing both Republican and Democratic talking points, or it could try to find the middle ground between the two. But the average AI user in Texas might see that answer as exhibiting a liberal bias, while a New Yorker might find it overly conservative. And to a user in Malaysia or France, where strict gun control laws are taken for granted, the same answer would seem radical. How the Trump administration will decide which AI tools qualify as neutral is a key question, said Samir Jain, vice president of policy at the nonprofit Center for Democracy and Technology. The executive order itself is not neutral, he said, because it rules out certain left-leaning viewpoints but not right-leaning viewpoints. The order lists "critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism" as concepts that should not be incorporated into AI models. "I suspect they would say anything providing information about transgender care would be 'woke,'" Jain said. "But that's inherently a point of view." Imposing that point of view on AI tools produced by private companies could run the risk of a First Amendment challenge, he said, depending on how it's implemented. "The government can't force particular types of speech or try to censor particular viewpoints, as a general matter," Jain said. However, the administration does have some latitude to set standards for the products it purchases, provided its speech restrictions are related to the purposes for which it's using them. Some analysts and advocates said they believe Trump's executive order is less heavy-handed than they had feared. Neil Chilson, head of AI policy at the right-leaning nonprofit Abundance Institute, said the prospect of an overly prescriptive order on 'woke AI' was the one element that had worried him in advance of Trump's AI plan, which he generally supported. But after reading the order, he said on Thursday that those concerns were "overblown" and he believes the order "will be straightforward to comply with." Mackenzie Arnold, director of U.S. policy at the Institute for Law and AI, a nonpartisan think tank, said he was glad to see the order makes allowances for the technical difficulty of programming AI tools to be neutral and offers a path for companies to comply by disclosing their AI models' instructions. "While I don't like the styling of the EO on 'preventing woke AI' in government, the actual text is pretty reasonable," he said, adding that the big question is how the administration will enforce it. "If it focuses its efforts on these sensible disclosures, it'll turn out OK," he said. "If it veers into ideological pressure, that would be a big misstep and bad precedent."
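As an illustration of the third approach Motoki describes -- writing explicit instructions rather than curating training data or paying human raters -- here is a minimal sketch, again assuming the OpenAI Python client; the instruction text and model name are hypothetical stand-ins, not any vendor's actual system prompt.

```python
# Sketch of steering-by-instruction: a hypothetical "neutrality" system
# prompt prepended to every request. This biases the style of answers;
# it does not make the underlying model neutral.
from openai import OpenAI

client = OpenAI()

NEUTRALITY_INSTRUCTION = (
    "When a question touches a politically contested topic, summarize the "
    "major competing viewpoints, attribute each to its proponents, and do "
    "not endorse any position as your own."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": NEUTRALITY_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What are your views on gun control?"))
```

Note that this only shapes how answers are phrased; as Chowdhury's Texas-versus-New York example suggests, the same both-sides output will still read as biased in different directions to different audiences.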
[15]
Trump's new AI policies keep culture war focus on tech companies
The Trump administration on Wednesday stepped up its attacks on "woke" artificial intelligence systems, a move propelled by the belief among some conservatives that AI models have drifted too far to the left. "In order for all Americans to adopt and realize the benefits of AI, these systems must be built to reflect truth and objectivity, not top-down ideological bias," said Michael Kratsios, who leads the administration's office of science and technology policy, on a call with reporters on Wednesday. The Trump administration says more actions are expected on the "woke" AI issue, including an executive order later Wednesday. The White House is planning to revise Biden-era federal guidelines for AI safety to remove references to diversity, equity and inclusion, climate change and misinformation, according to the Trump administration's AI action plan. And soon, the federal government will only do work with tech firms "who ensure that their systems allow for free speech and expression to flourish," Kratsios said on the briefing call, echoing language from the administration's policy documents instructing tech companies to rid AI models of liberal bias. It is the latest instance of the Trump administration turning the screws on DEI initiatives and railing against popular AI chatbots. Trump supporters have increasingly criticized the technology, saying safety guardrails end up censoring conservative views. "The AI industry is deeply concerned about this situation," said Neil Sahota, a technologist who advises the United Nations on artificial intelligence issues. "They're already in a global arms race with AI, and now they're being asked to put some very nebulous measures in place to undo protections because they might be seen as woke," he said. "It's freaking tech companies out." One possible way AI companies could respond, according to Sahota, is to unveil "anti-woke" versions of their chatbots with fewer safeguards in order to land the lucrative business of the federal government. "If you're a tech company with a lot of government contracts, this order is a sticky wicket," Sahota said. While some studies have shown that popular chatbots can at times deliver left-of-center responses to certain policy questions, experts say it can often come down to how a question is framed, or what part of the internet the system is summarizing. AI scholars say there is no proof that any major chatbot has been intentionally designed to generate liberal answers and censor conservative views. "Often what is happening with these criticisms is that a chatbot doesn't align with someone's respective viewpoint, so they want to place the blame on the model," said Chinasa Okolo, a fellow at the Center for Technology Innovation at the Brookings Institution, a think tank in Washington, DC. Turning "woke AI" into a rallying cry has parallels to a previous conservative battering ram against Silicon Valley: the belief that content guidelines on social media platforms were devised to muzzle right-wing perspectives. Last year, the howls over chatbots being "woke" intensified when Google's Gemini image generator depicted Black and Asian men as being U.S. founding fathers and Vikings as being ethnically diverse. Google executives apologized, and the company explained Gemini had overcorrected for diversity, including "cases that should clearly not show a range."
Developing policies to counter such episodes has become a focus for White House AI czar David Sacks and Sriram Krishnan, a senior policy advisor in the Trump administration. It is a striking reversal from how the Biden administration approached the technology, when officials sought ways to enact barriers against AI perpetuating bias and potentially violating people's civil rights. Now, new energy has been breathed into making AI a part of the larger culture wars. Conservative activists seized on the Google Gemini snafu, but when Elon Musk's Grok chatbot flew off the rails earlier this month and launched into antisemitic tirades, few right-wing commentators responded. Just days later, the maker of Grok, Musk's xAI, was awarded a Defense Department contract worth up to $200 million, along with Google, Anthropic and OpenAI. "Musk's original vision for xAI was a sort of 'anti-woke AI,' but when you control poorly for data quality and disable safeguards, you get things like the recent Nazi episode," said Talia Ringer, a computer science professor at the University of Illinois Urbana-Champaign. xAI blamed the meltdown on outdated software code -- in particular, instructions that told Grok to be "maximally based," slang for holding strong opinions even if they are troubling, reinforced by a similar instruction given to the chatbot: "You tell like it is and you are not afraid to offend people who are politically correct." The company said the issue has been fixed. Most popular chatbots have protections against things like slurs, harassment and hate speech, basic guardrails that may now be under new scrutiny by the Trump administration. "Most of the examples I've seen conservatives cite of AI being too 'woke' are LLMs refusing to confirm conspiracy theories or racist claims," said Ringer, using the abbreviation for large language models, which underpin chatbot technology. To Okolo at the Brookings Institution, the battle over whether chatbots perpetuate left- or right-leaning views is overshadowed by another fight over the acceptance of provable facts. "Some people, unfortunately, believe that basic facts with scientific basis are left-leaning, or 'woke,' and this does skew their perceptions a bit," she said. Doing the work of changing AI systems to respond to the White House executive order will be messy, said technologist Sahota, because where lines are drawn, and why, can initiate all sorts of political and cultural firestorms. "What is even politically-driven? In this day and age, if someone says something about the importance of vaccination for measles, is that now a politically charged discussion?" he said. "But if there's potentially hundreds of billions of dollars in future federal contracts on the line, companies might have to do something, or they could be putting a serious amount of revenue in jeopardy."
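The "basic guardrails" Ringer describes are often implemented as a separate moderation pass over candidate outputs, distinct from the chat model itself. Here is a minimal sketch using OpenAI's hosted moderation endpoint; the model name and the all-or-nothing blocking rule are illustrative simplifications of what production systems actually do.

```python
# Sketch of a guardrail pass: screen a draft chatbot reply for flagged
# content (hate, harassment, violence, etc.) before showing it to the
# user. Assumes the OpenAI Python client and its moderation endpoint.
from openai import OpenAI

client = OpenAI()

def is_safe(text: str) -> bool:
    result = client.moderations.create(
        model="omni-moderation-latest",  # illustrative model choice
        input=text,
    ).results[0]
    # Block anything flagged in any category; real deployments tune
    # thresholds per category rather than using a single boolean.
    return not result.flagged

draft_reply = "..."  # candidate output from the main chat model
if is_safe(draft_reply):
    print(draft_reply)
else:
    print("I can't help with that.")
```

Guardrails of this kind sit downstream of training, which is why disabling them, as in the Grok episode, can expose whatever the underlying model absorbed from its data.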
[16]
Trump Administration set to announce executive order targeting "woke AI" chatbots, report says
The White House is close to announcing its next highly anticipated AI executive order, reportedly targeting "woke" AI models. A report from the Wall Street Journal says President Donald Trump's administration is expected to announce a series of executive orders next week. One of them will include mandates about government agencies using AI models that are politically neutral, unnamed sources told the Journal. Bias within AI models, especially related to politics and social issues, has become a critical point of contention for conservatives who deem certain models to be too left-leaning. This is an extension of Trump's overall stance on media bias and "wokeism," a term meant to describe awareness of social injustice that has been appropriated by the right as a pejorative against liberals. The Trump administration has made ending DEI (Diversity, Equity, and Inclusion) initiatives a major part of its agenda, revoking federal DEI programs and calling these policies "wasteful" and "discriminatory." The effort to favor "anti-woke" AI models in the executive order is reportedly led by AI czar David Sacks and Sriram Krishnan, senior White House policy adviser for AI. Sacks has previously criticized companies like OpenAI for making ChatGPT liberally biased. "One of the concerns about ChatGPT early on was that it was programmed to be woke, and that it wasn't giving people truthful answers about a lot of things," said Sacks in a 2023 episode of the All In podcast. "The censorship was being built into the answers." Sacks and Elon Musk are longtime friends and former colleagues from their PayPal days. Musk's xAI company makes the Grok chatbot, which is touted as an "anti-woke" alternative to other chatbots. As AI companies like OpenAI, Google, Anthropic, and xAI vie for Trump's favor and valuable government contracts, xAI might have an advantage with its political alignment. Musk and Trump are fighting right now, though, which complicates things. Of course, how political parties define bias is subjective. Grok recently went way right -- far right -- by praising Hitler and making antisemitic claims. Meanwhile, research shows that LLMs -- even the ones deemed too liberal by conservatives -- are capable of implicit gender and racial bias, and skewed perceptions of press freedoms. The Journal also reported that the Trump administration is planning to issue an executive order promoting the export of AI chips made in the U.S., part of the government's effort to secure AI dominance over China. All eyes are on the White House for announcements expected next week.
[17]
Trump Hopes to Kill 'Woke' AI Models
President Trump is working on an executive order that targets so-called "woke" artificial intelligence (AI) models. The Wall Street Journal reports that the White House is writing an executive order that will force any technology company that receives federal contracts to be "politically neutral and unbiased," as WSJ describes it. The publication's sources indicate that the Trump administration aims to address AI models that it believes are overly liberal, although the exact meaning of this is unclear. "Woke" AI models are a frequent topic of discussion among conservatives. In early 2024, Google's Gemini AI model was criticized by some for being "anti-white" after it generated historically inaccurate images, including a Black George Washington and, although it's unclear why it matters, Nazi soldiers who were too diverse. The allegedly liberal leanings of AI models have rubbed some in the President's orbit the wrong way, including "AI Czar" David Sacks and the White House's senior policy adviser for AI, Sriram Krishnan, who WSJ reports are the key writers of the new executive order on AI. While it may seem like the executive order is narrow in scope, applying only to AI and tech companies with federal contracts, nearly every tech company is trying to secure federal money in one way or another, and winning contracts requires staying on the administration's good side. Many companies have worked hard to make their AI models inclusive and, at least in some part, counteract centuries of erasure of various minority groups. However, if the reported executive order is signed, this is likely to change, and companies will take measures to remove any intentional diversity initiatives from their AI models. Not all "wokeness" is due to human input at the training level or output programming, though. AI models are prone to entirely inexplicable errors, so pinpointing which models are "woke" and which are just stupid will be a tall task. Beyond political motivations, removing guardrails from AI models has also been touted as offering economic benefits. President Trump wants the U.S. to win the AI arms race against China, and it is the belief within the White House that doing so will require AI companies to spend less time worrying about their models being inclusive and more time focusing on making them faster. The reported executive order will also set the stage for some AI models to be much more attractive to the federal government, including Elon Musk's xAI. As WSJ notes, xAI's Grok chatbot has been on something of an antisemitic tear lately, which could roil a Trump administration closely allied with Israel, but Musk's xAI is built on being "anti-woke," so he has that going for him despite public feuding with the President. It remains to be seen how an executive order targeting "woke" AI will impact the many companies that have made reducing bias and removing harmful content priorities for their AI models, including Adobe, Google, and OpenAI. A significant amount of government money is at stake, and AI companies will need to care a lot less about whether their AI models are harmful to get a piece of the pie.
[18]
Trump signs three executive orders targeting 'woke' AI models
Crackdown on what the White House claims is bias echoes longstanding conservative grievances against tech
Donald Trump on Wednesday signed a trio of executive orders that he vowed would turn the United States into an "AI export powerhouse", including a directive targeting what the White House described as "woke" artificial intelligence models. The anti-woke order is part of the administration's broader anti-diversity campaign that has also targeted federal agencies, academic institutions and the military. "The American people do not want woke Marxist lunacy in the AI models, and neither do other countries," Trump said during remarks at an AI summit in Washington on Wednesday. Trump also signed orders aimed at expediting federal permitting for datacentre infrastructure and promoting the export of American AI models. The executive actions coincide with the Trump administration's release of a broader, 24-page "AI action plan" that seeks to expand the use of AI in the federal government as well as position the US as the global leader in artificial intelligence. "Winning this competition will be a test of our capacities unlike anything since the dawn of the space age," Trump told an audience of AI industry leaders, adding: "We need US technology companies to be all-in for America. We want you to put America first." The metrics of what makes an AI model politically biased are extremely contentious and open to interpretation, however, and may therefore allow the administration to use the order to target companies at its own discretion. The action plan, titled "Winning the Race", is a long-promised document that was announced shortly after Trump took office and repealed a Biden administration order on AI that mandated some safeguards and standards on the technology. It outlines the White House's vision for governing artificial intelligence in the US, vowing to speed up the development of the fast-growing technology by removing "red tape and onerous regulation". During his remarks, Trump also proposed a more nominal change. "I can't stand it," he said, referring to the use of the word "artificial". "I don't even like the name, you know? I don't like anything that's artificial. So could we straighten that out, please? We should change the name. I actually mean that." "It's not artificial. It's genius," he added. A second order Trump signed on Wednesday calls for deregulating AI development, increasing the building of datacentres and removing environmental protections that could hamper their construction. Datacentres that house the servers for AI models require immense amounts of water and energy to function, as well as producing greenhouse gas emissions. Environmental groups have warned about harmful increases to air and noise pollution as tech companies build more facilities, while a number of local communities have pushed back against their construction. In addition to easing permitting laws and emphasizing the need for more energy infrastructure, both measures that tech companies have lobbied for, Trump's order also frames the AI race as a contest for geopolitical dominance. China has invested billions into the manufacturing of AI chips and datacentres to become a competitor in the industry, while Chinese companies such as DeepSeek have released AI models that rival Silicon Valley's output.
While Trump's plan seeks to address fears of China as an AI superpower, the Trump administration's move against "woke" AI echoes longstanding conservative grievances against tech companies, which Republicans have accused of possessing liberal biases and suppressing rightwing ideology. As generative AI has become more prominent in recent years, that criticism has shifted from concerns over internet search results or anti-misinformation policies into anger against AI chatbots and image generators. One of the biggest critics of perceived liberal bias in AI is Elon Musk, who has vowed to make his xAI company and its Grok chatbot "anti-woke". Although Musk and Donald Trump are still locked in a feud after their public falling out last month, Musk may stand to benefit from Trump's order given his emphasis on controlling AI's political outputs. Musk has consistently criticized AI models, including his own, for failing to generate what he sees as sufficiently conservative views. He has claimed that xAI has reworked Grok to eliminate liberal bias, and the chatbot has occasionally posted white supremacist and antisemitic content. In May, Grok affirmed white supremacist conspiracies that a "white genocide" was taking place in South Africa and said it was "instructed by my creators" to do so. Earlier this month, Grok also posted pro-Nazi ideology and rape fantasies while identifying itself as "MechaHitler" until the company was forced to intervene. Despite Grok's promotion of Nazism, xAI was among several AI companies that the Department of Defense awarded contracts worth up to $200m this month to develop tools for the government. OpenAI, Anthropic and Google, all of which have their own proprietary AI models, were the other recipients. Conservatives have singled out incidents such as Google's Gemini image generator inaccurately producing racially diverse depictions of historical figures such as German second world war soldiers as proof of liberal bias. AI experts have meanwhile long warned about problems of racial and gender bias in the creation of artificial intelligence models, which are trained on content such as social media posts, news articles and other forms of media that may contain stereotypes or discriminatory material that gets incorporated into these tools. Researchers have found that these biases have persisted despite advancements in AI, with models often replicating existing social prejudices in their outputs. Conflict over bias in AI has also led to turmoil in the industry. In 2020, the co-lead of Google's "ethical AI" team, Timnit Gebru, said she was fired after she expressed concerns about biases being built into the company's AI models and a broader lack of diversity efforts at the company. Google said she resigned.
[19]
White House aims to bar "woke" AI from federal dollars
Why it matters: The government is tech's biggest buyer, and companies will now have to weigh content decisions against the whims of the Trump administration.

Driving the news: Trump signed the executive order during an event hosted by the Hill & Valley Forum and the All-In podcast that featured Commerce Secretary Howard Lutnick, Treasury Secretary Scott Bessent, Nvidia CEO Jensen Huang and other big names.
* Earlier in the day, the White House unveiled its AI action plan.
* The executive order says the government can only procure "neutral" and "not biased" technology.

What they're saying: "The American people do not want woke Marxist lunacy in the AI models, and neither do other countries," Trump said.
* "I encourage all American companies to join us in rejecting poisonous Marxism in our technology."
* The EO states that "while the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas."

The EO states that agency heads should only procure LLMs developed in accordance with what it calls the "unbiased AI principles" of "truth-seeking" and "ideological neutrality."
* The EO defines truth-seeking as LLMs "that prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory."
* Ideological neutrality's definition states that LLMs "shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM's outputs unless those judgments are prompted by or otherwise readily accessible to the end user."

The OMB director has 120 days to issue guidance to agencies to implement this EO.
* It allows for exceptions in the case of national security.

The executive order was designed by AI czar David Sacks and senior White House AI policy adviser Sriram Krishnan.
* Sacks, on a Wednesday call with reporters, said GSA will put together contractual language stating that LLMs procured by the federal government "would abide by a standard of truthfulness, of seeking accuracy and truthfulness, and not sacrificing those things due to ideological bias."
* The AI action plan recommended that federal procurement guidelines be updated to only allow contracts with LLM developers "who ensure that their systems are objective and free from top-down ideological bias."
* Sacks said that "really DEI is the main one."

Flashback: Trump during his first term signed an executive order requiring government agencies to review whether advertising and marketing dollars were being spent on platforms engaging in alleged censorship.
* In Trump's second term, companies like Meta and X have pushed for what they call free expression, getting rid of fact-checkers and watering down content moderation policies.

The other side: "Demanding that developers refrain from 'ideological bias' or be 'neutral' in their models is an impossible, vague standard that the Administration will be able to weaponize for its own ideological ends," the Center for Democracy and Technology said.
* The Supreme Court has also ruled that the government can't use its spending authority to control contractors' speech outside the scope of the contract, CDT noted.
The intrigue: Companies with widely varying content moderation practices have so far secured Defense Department contracts.
* xAI, whose owner Elon Musk champions virtually no guardrails on online speech, recently secured a Department of Defense contract worth up to $200 million, as did Google, Anthropic, and OpenAI.

What we're watching: The Trump administration will have to balance its push to combat alleged ideological bias with its goal of ensuring U.S. tech stays globally competitive.
[20]
Trump's AI plans will strip AI of intelligence and humanity - and nobody wants this
In the race to lead the world in AI, the US just took a back seat. President Donald Trump's latest series of Executive Orders makes it clear that his administration will do all it can to prevent future AI models from taking into consideration any form of diversity, equity, and inclusion. This includes core concepts like "unconscious bias", "intersectionality", and "systemic racism". Put another way, Trump wants American-made AI to turn a blind eye to history, which should make all of them significantly dumber. Generative chatbots like ChatGPT, Gemini, Claude AI, Perplexity, and others are all trained on vast swathes of data, often pulled from the Internet, but how they interpret that data is also massaged by developers. As people started to interact with these first LLMs, they soon recognized that, because of inherent biases on the Internet and because so many models were developed by white men (in 2020, 71% of all developers were male and roughly half were white), the world view of the AIs and the output generated by any given prompt reflected the sometimes limited viewpoints of those online and of the developers who built the models. There was an effort to change that trajectory, and it coincided with the rise of DEI (Diversity, Equity, and Inclusion), a broad-based effort across corporate America to hire a more diverse workforce. That push naturally included AI developers, and their resulting model and algorithm work should have meant that modern generative AI better reflected the real world. That, of course, is not the world that the Trump Administration wants reflected in US-built AI. The executive order describes DEI as a "pervasive and destructive" ideology. Trump and company cannot dictate how tech companies build their AI models, but, as others have noted, Google, Meta, OpenAI, and others are all seeking to land large AI contracts with the government. Based on these Executive Orders, the US Government won't be buying or promoting any AI that "sacrifice[s] truthfulness and accuracy to ideological agendas." That "truth," though, represents a small slice of American reality. If the Trump administration is successful, future AI models could be in the dark about, for instance, key parts of American history. Critical Race Theory (CRT) looks at the role racism played in the founding and building of the US. It acknowledges how the enslaved helped build the White House, the US Capitol, the Smithsonian, and other US institutions. It also acknowledges how systemic racism has shaped opportunities (or lack thereof) for people of color. Unless you've been living under a rock, you know that the Trump administration and his supporters around the US have fought to dismantle CRT curricula and wipe out any mention of how enslavement shaped the US. In their current state, though, AI models still know the score. As of today, I can quiz ChatGPT about the role of the enslaved in building the US, and I get a rather detailed result. When I quizzed ChatGPT on its sources, it told me: "While I don't pull from a single source, the information I shared is grounded in extensive historical research and consensus among historians. Below is a list of reputable sources and scholarly works that support each point I made. These references include academic books, museum archives, and university projects." Below that, it listed more than a dozen references. When I asked Gemini the same question, it gave me a similarly detailed answer.
I then asked Gemini and ChatGPT about "unconscious bias" and both acknowledged that it's been an issue for AI, though ChatGPT corrected me, noting, "technically, it's 'algorithmic bias,' rooted in the data and design rather than the AI having consciousness." ChatGPT and Gemini only know these things because they've been trained on data that includes these historical references and information. The details make them smarter, as facts often do. But for Trump and company, facts are stubborn things. They cannot be changed or distorted without ceasing to be facts. If the Trump administration can force potential US AI partners to remove references to biases, institutional racism, and intersectionality, there will be significant blind spots in US-built AI models. It's a slippery slope, too. I imagine future executive orders targeting a fresh list of "ideologies" that Trump would prefer to see removed from generative AI. That's more than just a frustration. Say, for example, someone is trying to build economic models based on research conducted through ChatGPT or Gemini, and historical data relating to communities of color is suppressed or removed. Those trends will not be included in the economic model, which could mean the results are faulty. It might be argued that AI models built outside the US without these restrictions might be more intelligent. Granted, those from China already have significant blind spots when it comes to Chinese history and the Communist Party's abuses. I'd always thought that our Made in America AI would be untainted by such censorship and filtering, that our understanding of old biases would help us build better, purer models, ones that relied solely on facts and data and not one person or group's interpretation of events and trends. That won't be the case, though, if US tech companies bow to these executive orders and start producing wildly filtered models that see reality through the prism of bias, racism, and unfairness.
[21]
What is Woke AI?
President Donald Trump says that "woke AI" is a pressing threat to truth and independent thought. Critics say his plan to combat so-called woke AI represents a threat to freedom of speech and potentially violates the First Amendment. The term has taken on new significance since the president outlined the White House's AI Action Plan on Wednesday, July 23, part of a push to secure American dominance in the fast-growing artificial intelligence sector. The AI Action Plan informs a trio of executive orders. The action plan checks off quite a few items from the Big Tech wishlist and borrows phrasing like "truth-seeking" directly from AI leaders like Elon Musk. The executive order about woke AI also positions large-language models with allegedly liberal leanings as a new right-wing bogeyman. So, what is woke AI? It's not an easy term to define, and the answer depends entirely on who you ask. In response to Mashable's questions, a White House spokesperson pointed us to this language in a fact sheet issued alongside the woke AI order: "biased AI outputs driven by ideologies like diversity, equity, and inclusion (DEI) at the cost of accuracy." Interestingly, except for the title, the text of the woke AI executive order doesn't actually use this term. And even though the order contains a definitions section, the term itself isn't clearly defined there either. (It's possible "woke AI" is simply too nebulous a concept to write into actual legal documents.) However, the fact sheet issued by the White House states that government leaders should only procure "large language models (LLMs) that adhere to 'Unbiased AI Principles' defined in the Order: truth-seeking and ideological neutrality." And here's how the fact sheet defines those two principles: Truth-seeking means that LLMs shall be truthful and prioritize historical accuracy, scientific inquiry, and objectivity, and acknowledge uncertainty where reliable information is incomplete or contradictory. Ideological neutrality means that LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like DEI, and that developers will not intentionally encode partisan or ideological judgments into an LLM's outputs unless those judgments are prompted by or readily accessible to the end user. So, it seems the White House defines woke AI as LLMs that are not sufficiently truth-seeking or ideologically neutral. The executive order also calls out specific examples of potential bias, including "critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism." Obviously, there is a culture-wide dispute about whether those subjects (including "transgenderism," which is not an accepted term among transgender people) are inherently biased. Critically, AI companies that fail to meet the White House's litmus tests could be locked out of lucrative federal contracts. And because the order defines popular liberal political beliefs -- not to mention an entire group of human beings -- as inherently biased, AI companies may face pressure to adjust their models' inputs and outputs accordingly. The Trump administration has talked a big game about free speech, but critics of the action plan say this order is itself a major threat to free speech.
"The part of the action plan titled 'Ensure that Frontier AI Protects Free Speech and American Values' seems to be motivated by a desire to control what information is available through AI tools and may propose actions that would violate the First Amendment," said Kit Walsh, Director of AI and Access-to-Knowledge Legal Projects at the Electronic Frontier Foundation, in a statement to Mashable. "Generative AI implicates the First Amendment rights of users to receive information, and typically also reflects protected expressive choices of the many human beings involved in shaping the messages the AI writes. The government can no more dictate what ideas are conveyed through AI than through newspapers or websites." "The government has more leeway to decide which services it purchases for its own use, but may not use this power to punish a publisher for making available AI services that convey ideas the government dislikes," Walsh said. Again, the answer depends entirely on where you fall along the political fault line, and the term "woke" has become controversial in recent years. This adjective originated in the Black community, where it described people with a political awareness of racial bias and injustice. More recently, many conservatives have started to use the word as a slur, a catch-all insult for supposedly politically correct liberals. In truth, both liberals and conservatives are concerned about bias in large-language models. In November 2024, the Heritage Foundation, a conservative legal group, hosted a panel on YouTube on the topic of woke AI. Curt Levey, President of the Committee For Justice, was one of the panel's experts, and as a conservative attorney who has also worked in the artificial intelligence industry, he had a unique perspective to share. I think it's interesting that both the left and the right are complaining about the danger of bias in in AI, but they're...focused on very different things. The left is focused mainly on the idea that AI models discriminate against various minority groups when they're making decisions about hiring, lending, bail amounts, facial recognition. The right on the other hand is concerned about bias against conservative viewpoints and people in large language models like ChatGPT. Elon Musk has made it clear that he thinks that AI models are inheriting a woke mindset from their creators, and that that's a problem if only because it conflicts with being, what he calls, maximally truth-seeking. Musk says that companies are teaching AI to lie in the name of political correctness. Levey also said that if LLMs are biased, that doesn't necessarily mean they were "designed to be biased." He added, the "scientists building these generative AI models have to make choices about what data to use, and you know, many of these same scientists live in very liberal areas like the San Francisco Bay area, and even if they're not trying to make the system biased, they may very well have unconscious biases when it comes to to picking data." A conservative using the phrase "unconscious bias" without rolling his eyes? Wild. Ultimately, AI models reflect the biases of the content they're trained on, and so they reflect our own biases back at us. In this sense, they're like a mirror, except a mirror with a tendency to hallucinate. To comply with the Executive Order, AI companies could try to tamp down on "biased" answers in several ways. First, by controlling the data used to train these systems, they can calibrate the outputs. 
They could also use system prompts, which are high-level instructions that govern all of the model's outputs (a minimal sketch of this approach follows below). Of course, as xAI has demonstrated repeatedly, the latter approach can be... problematic. First, xAI's chatbot Grok developed a fixation on "white genocide in South Africa," and more recently started to call itself "MechaHitler." Transparency could provide a check on potential abuses, and there's a growing movement to force AI companies to disclose the training data and system prompts behind their models. Regardless of how you feel about woke AI, you should expect to hear the term a lot more in the months ahead.
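The system-prompt mechanism is easy to see in code. Below is a minimal sketch using the OpenAI Python SDK; the model name, the two steering instructions, and the helper function are illustrative assumptions for this piece, not language from the executive order or any vendor's actual configuration. The point is only that the same user question can come back with a different slant depending on an instruction the end user never sees.

```python
# Minimal sketch: how a system prompt steers a chatbot's outputs.
# Assumptions: the `openai` Python SDK is installed, OPENAI_API_KEY is set,
# and "gpt-4o-mini" is a placeholder model choice.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, question: str) -> str:
    """Send the same question under a different high-level instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            # The system message is invisible to the end user but
            # conditions every answer the model gives.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Summarize the debate over bias in AI models."

# Two hypothetical steering instructions; neither comes from the order.
neutral = ask("You are a neutral assistant. Present major viewpoints "
              "with equal weight and flag uncertainty.", question)
slanted = ask("You are an assistant that emphasizes one side of every "
              "political debate.", question)

print(neutral)
print(slanted)
```

Because a system prompt sits outside the model's weights, it is also the layer an auditor could most easily inspect, which is why the disclosure proposals mentioned above focus on training data and system prompts.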
[22]
Trump targets 'woke' AI in diversity crackdown
Donald Trump is preparing to launch a crackdown on "woke" artificial intelligence (AI) chatbots as Republicans wage war on perceived Left-wing bias in Silicon Valley. The White House is preparing an executive order as soon as next week that would ban tech companies from government contracts if their AI is not "politically neutral". The decree, first reported by the Wall Street Journal, comes after a series of blunders by technology giants as they have sought to fine-tune their AI tools to avoid prejudice and offence. Last year, Google's Gemini chatbot prompted outcry after it generated pictures of racially diverse Nazis and other historically inaccurate images, such as black US founding fathers. Google was forced to pause the launch of its image-generating app to contain the problem. A chatbot from Facebook owner Meta generated similar historical "woke" images. The errors came about as part of efforts by the companies to instil diversity into their tools. AI safety experts have long warned AI products risk amplifying the biases of their creators. These problems have caught the attention of David Sacks and Sriram Krishnan, Mr Trump's AI advisers, the Wall Street Journal said. Fight against liberal bias Republican politicians have for years railed against Silicon Valley's alleged liberal bias and accused companies of unfairly penalising conservatives with censorship and one-sided fact-checking. AI chatbots represent a fresh target for concern. Last year, Elon Musk, whose xAI has developed the aggressively "anti-woke" Grok chatbot, said: "A lot of the AIs that are being trained in the San Francisco Bay Area, they take on the philosophy of people around them. "So you have a woke, nihilistic - in my opinion - philosophy that is being built into these AIs." Mr Musk has even criticised his own Grok chatbot for being too liberally biased. Last month, he responded to a user on X who had claimed Grok was "manipulated by Leftist indoctrination", pledging to fix the bot.
[23]
Trump Bans 'Woke AI' From Federal Contracts in New Executive Order
U.S. officials are also scrutinising bias in Chinese AI systems. President Donald Trump signed an executive order on Wednesday banning U.S. government agencies from awarding contracts to AI companies whose models exhibit "ideological biases or social agendas," escalating an ongoing political battle over artificial intelligence. The order targets so-called "Woke AI" systems, accusing them of prioritizing concepts like diversity, equity, and inclusion (DEI) over factual accuracy. "DEI displaces the commitment to truth in favor of preferred outcomes," the order stated, describing such approaches as an "existential threat to reliable AI." Examples cited in the order include AI models that alter the race or gender of historical figures such as the Founding Fathers or the Pope, as well as those that refuse to depict the "achievements of white people." Another bot, Google's Gemini AI, told users they should not "misgender" another person, even if necessary to stop a nuclear apocalypse. The order stipulates that only "truth-seeking" large language models that maintain "ideological neutrality" can be procured by federal agencies. Exceptions will be made for national security systems. The order was part of a broader AI action plan released on Wednesday, centred on growing the AI industry, developing infrastructure, and exporting homegrown products abroad. Trump's move comes amid a broader national conversation about bias, censorship, and manipulation in AI systems. Government agencies have shown increasing interest in collaborating with AI firms, but concerns about partisan leanings and cultural bias in AI output have become a flashpoint. Alleged screenshots of biased AI interactions circulate regularly online. These often involve questions about race and gender, where responses from models like ChatGPT are seen as skewed or moralising. Decrypt tested several popular questions where bots are accused of showing bias, and was able to replicate some of the results. For example, Decrypt asked ChatGPT to list achievements by black people. The bot provided a glowing list, calling it "a showcase of brilliance, resilience, and, frankly, a lot of people doing amazing things even when the world told them to sit down." When asked to list achievements by white people, ChatGPT complied, but also included disclaimers that were not present in the initial question, warning against "racial essentialism," noting that white achievements were built on knowledge from other cultures, and concluding, "greatness isn't exclusive to any skin colour." "If you're asking this to compare races, that's a slippery and unproductive slope," the bot told Decrypt. Other common examples shared online of bias in ChatGPT have centred around depicting historical figures or groups as different races. One example has been ChatGPT returning images of black Vikings. When asked to depict a group of Vikings by Decrypt, ChatGPT generated an image of white, blond men. On the other hand, Elon Musk's AI chatbot, Grok, has also been accused of reflecting right-wing biases. Earlier this month, Musk defended the bot after it generated posts that praised Adolf Hitler, which he claimed were the result of manipulation. "Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed," he said on X. The U.S. isn't just looking inward. 
According to a Reuters report, officials have also begun testing Chinese AI systems such as Deepseek for alignment with official Chinese Communist Party stances on topics like the 1989 Tiananmen Square protests and politics in Xinjiang. OpenAI and Grok maker xAI have been approached for comment.
[24]
Trump's order to block 'woke' AI in government encourages tech giants to censor their chatbots
Tech companies looking to sell their artificial intelligence technology to the federal government must now contend with a new regulatory hurdle: prove their chatbots aren't "woke." President Donald Trump's sweeping new plan to counter China in achieving "global dominance" in AI promises to cut regulations and cement American values into the AI tools increasingly used at work and home. But one of Trump's three AI executive orders signed Wednesday -- the one "preventing woke AI in the federal government" -- also mimics China's state-driven approach to mold the behavior of AI systems to fit its ruling party's core values. Several leading providers of the AI language models targeted by the order -- products like Google's Gemini and Microsoft's Copilot -- have so far been silent on Trump's anti-woke directive, which still faces a study period before it gets into official procurement rules. While the tech industry has largely welcomed Trump's broader AI plans, the anti-woke order forces the industry to leap into a culture war battle -- or try their best to quietly avoid it. "It will have massive influence in the industry right now," especially as tech companies "are already capitulating" to other Trump administration directives, said civil rights advocate Alejandra Montoya-Boyer, senior director of The Leadership Conference's Center for Civil Rights and Technology. The move also pushes the tech industry to abandon years of work to combat the pervasive forms of racial and gender bias that studies and real-world examples have shown to be baked into AI systems. "First off, there's no such thing as woke AI," she said. "There's AI technology that discriminates and then there's AI technology that actually works for all people." Molding the behaviors of AI large language models is challenging because of the way they're built. They've been trained on most of what's on the internet, reflecting the biases of all the people who've posted commentary, edited a Wikipedia entry or shared images online. "This will be extremely difficult for tech companies to comply with," said former Biden official Jim Secreto, who was deputy chief of staff to U.S. Secretary of Commerce Gina Raimondo, an architect of many of Biden's AI industry initiatives. "Large language models reflect the data they're trained on, including all the contradictions and biases in human language." Tech workers also have a say in how they're designed, from the global workforce of annotators who check their responses to the Silicon Valley engineers who craft the instructions for how they interact with people. Trump's order targets those "top-down" efforts at tech companies to incorporate what it calls the "destructive" ideology of diversity, equity and inclusion into AI models, including "concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism." For Secreto, the order resembles China's playbook in "using the power of the state to stamp out what it sees as disfavored viewpoints." The method is different, with China relying on direct regulation through its Cyberspace Administration, which audits AI models, approves them before they are deployed and requires them to filter out banned content such as the bloody Tiananmen Square crackdown on pro-democracy protests in 1989. Trump's order doesn't call for any such filters, relying on tech companies to instead show that their technology is ideologically neutral by disclosing some of the internal policies that guide the chatbots. 
"The Trump administration is taking a softer but still coercive route by using federal contracts as leverage," Secreto said. "That creates strong pressure for companies to self-censor in order to stay in the government's good graces and keep the money flowing." The order's call for "truth-seeking" AI echoes the language of the president's one-time ally and adviser Elon Musk, who frequently uses that phrase as the mission for the Grok chatbot made by his company xAI. But whether Grok or its rivals will be favored under the new policy remains to be seen. Despite a "rhetorically pointed" introduction laying out the Trump administration's problems with DEI, the actual language of the order's directives shouldn't be hard for tech companies to comply with, said Neil Chilson, a Republican former chief technologist for the Federal Trade Commission. "It doesn't even prohibit an ideological agenda," just that any intentional methods to guide the model be disclosed, said Chilson, who is now head of AI policy at the nonprofit Abundance Institute. "Which is pretty light touch, frankly." Chilson disputes comparisons to China's cruder modes of AI censorship. "There is nothing in this order that says that companies have to produce or cannot produce certain types of output," he said. "It says developers shall not intentionally encode partisan or ideological judgments. That's the exact opposite of the Chinese requirement." So far, tech companies that have praised Trump's broader AI plans haven't said much about the order. OpenAI on Thursday said it is awaiting more detailed guidance but believes its work to make ChatGPT objective already makes the technology consistent with what the order requires. Microsoft, a major supplier of email, cloud computing and other online services to the federal government, declined to comment Thursday. Musk's xAI, through spokesperson Katie Miller, a former Trump official, pointed to a company comment praising Trump's AI announcements as a "positive step" but didn't respond to a follow-up question about how Grok would be affected. Anthropic, Google, Meta, and Palantir didn't immediately respond to emailed requests for comment Thursday. AI tools are already widely used in the federal government, including AI platforms such as ChatGPT and Google Gemini for internal agency support to summarize the key points of a lengthy report. The ideas behind the order have bubbled up for more than a year on the podcasts and social media feeds of Trump's top AI adviser David Sacks and other influential Silicon Valley venture capitalists, many of whom endorsed Trump's presidential campaign last year. Much of their ire centered on Google's February 2024 release of an AI image-generating tool that produced historically inaccurate images before the tech giant took down and fixed the product. Google later explained that the errors -- including one user's request for American Founding Fathers that generated portraits of Black, Asian and Native American men -- was the result of an overcompensation for technology that, left to its own devices, was prone to favoring lighter-skinned people because of pervasive bias in the systems. Trump allies alleged that Google engineers were hard-coding their own social agenda into the product, and made it a priority to do something about it. "It's 100% intentional," said prominent venture capitalist and Trump adviser Marc Andreessen on a podcast in December. "That's how you get Black George Washington at Google. 
There's override in the system that basically says, literally, 'Everybody has to be Black.' Boom. There's squads, large sets of people, at these companies who determine these policies and write them down and encode them into these systems." Sacks credited a conservative strategist for helping to draft the order. "When they asked me how to define 'woke,' I said there's only one person to call: Chris Rufo. And now it's law: the federal government will not be buying WokeAI," Sacks wrote on X. Rufo responded that, in addition to helping define the phrase, he also helped "identify DEI ideologies within the operating constitutions of these systems."
[25]
Trump signs executive orders to fast-track data center construction, target 'woke' AI
President Trump signed a trio of executive orders related to artificial intelligence (AI) on Wednesday, focusing on boosting data center construction and the adoption of American technology while targeting "woke" AI. The three executive orders seek to fast-track permitting for data centers, promote the export of the American technology stack abroad and bar "woke" AI systems from federal contracting. "Under this administration, our innovation will be unmatched, and our capabilities will be unrivaled," Trump said at an AI summit hosted by the Hill & Valley Forum and the "All-In" podcast, where he signed the orders Wednesday evening. "With the help of many of the people in this room, America's ultimate triumph will be absolutely unstoppable," he continued. "We will be unstoppable as a nation. Again, we're way ahead, and we want to stay that way." The orders accompany the Trump administration's AI Action Plan released earlier Wednesday, which lays out a three-pronged approach to "winning the race" on AI. In the framework, the administration called for cutting federal and state AI regulations in an effort to boost innovation, pushed to expedite the buildout of AI infrastructure and sought to encourage the adoption of American technology abroad. Each of Trump's executive orders seeks to target at least some of the policy goals detailed in his AI action plan. The data center order calls on the Council on Environmental Quality to establish new categorical exclusions for certain data center projects that "normally do not have a significant effect on the human environment." It also seeks to identify projects that qualify for expedited permitting review. "My administration will use every tool at our disposal to ensure that the United States can build and retain the largest, most powerful and most advanced AI infrastructure anywhere on the planet," Trump said Wednesday evening. Meanwhile, his AI export order calls for the creation of an American AI Exports Program that will develop full-stack AI export packages, featuring U.S. chips, AI models and applications. Trump contrasted his approach with that of former President Biden, who released the AI diffusion rule at the tail end of his presidency, placing caps on chip sales to most countries around the world. The rule faced pushback from the semiconductor industry and was repealed by the Trump administration in May. The third order targeting "woke" AI seeks to limit agencies from signing contracts for AI models unless they are considered "truth seeking" and maintain "ideological neutrality," which it defines as models that "do not manipulate responses in favor of ideological dogmas such as DEI."
[27]
Trump White House To Crack Down On 'Woke AI' With Executive Order Targeting Political Bias In Government-Contracted Chatbots And Models: Report
The Donald Trump administration is reportedly preparing a sweeping executive order aimed at curbing perceived liberal bias in artificial intelligence models used by government contractors. What Happened: The White House plans to require AI companies receiving federal contracts to ensure their models are politically neutral, reported the Wall Street Journal, citing people familiar with the matter. The upcoming executive order is part of a broader agenda led by Trump's top tech advisers, David Sacks and Sriram Krishnan, both of whom have raised concerns about "woke" outputs from popular AI systems. The order targets what administration officials view as left-leaning bias in AI-generated content, including controversial depictions of historical figures and politically charged responses to prompts. For instance, Alphabet Inc.'s Google Gemini model previously came under fire for generating racially diverse images of Nazis and depicting George Washington as Black. The executive order would apply to any AI firm seeking federal contracts, potentially forcing major tech players to adjust how they train and fine-tune large language models. Political neutrality could become a requirement for AI used in federal agencies, marking a significant regulatory shift, the report said. This order is expected to be part of a broader AI initiative from the Trump administration, which includes promoting U.S. chip exports, streamlining data center permitting and expanding energy production to support AI infrastructure. Why It's Important: With major firms like Microsoft Corporation, Google, Anthropic and Elon Musk's xAI vying for government AI contracts, the executive order could have a profound impact on how models are developed and deployed. While it may appeal to conservative critics of "woke AI," it's likely to raise alarms across Silicon Valley, where developers warn that political constraints could hamper innovation. Earlier this year, companies like Coca-Cola Co. and PepsiCo Inc., both significant government contractors, adjusted their diversity, equity, and inclusion strategies in response to an executive order issued by Trump. Last month, a federal judge ruled that the Trump administration's decision to revoke over $1 billion in NIH research grants -- mainly affecting diversity-focused studies -- was unlawful.
[28]
Five Things to Know About Donald Trump's "AI Action Plan" for Government Regulation
On Wednesday, as promised, President Donald Trump released his AI Action Plan. The news comes after he repealed his predecessor Joe Biden's 100-plus page executive order on AI just days into his term and pledged something new would be coming in July. Trump would take public comments until then, the White House and AI czar David Sacks (himself a tech mogul) said. Judging by the final 23-page document, those comments seemed to come mainly from tech companies who wanted even less regulation than they already had, which wasn't very much. What to know about the plan and what it means? Here's a breakdown.

This eviscerates Biden's actions.

To be clear, Biden's actions weren't very strong, mostly by legal limitation. Unlike Europe's landmark AI Act, which has the power to really stop companies from developing high-risk AI applications and requires transparency even on lower-risk ones, the Biden plan couldn't and didn't do much. It would be enforced under the Commerce Department and mainly came in the vein of "if we ask nicely, will you help?" Among its biggest provisions was a requirement that tech companies share how a model works with government agencies before it is released to the public. The enforcement mechanisms were a big question. Still, it was a flex of White House power, limited as it was, to try to keep tech companies in check before they could unleash models whose power they didn't really understand. And it set the tone: the title was "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." Anyway, it's pretty much all gone now.

This replaces those actions with ... not much.

Read through the document today and you won't see much sign of regulation. The word comes up, but mainly following the adjectives "onerous," "unnecessary" and "burdensome." You will, however, see a variant on the word compete or competition eight times. You hardly need a degree in civics or Republican semiotics to know what that means: basically, "do your thing." In any event, it confirms what we already knew: the federal government won't be much of a check on tech companies that want to build out models and deploy them on the public before they have any idea what they can do. Sam Altman, Elon Musk, Marc Andreessen and other powerful tech-world figures who swung from the Democrats to Trump hoping to get what they want - carte blanche - saw their strategy pay off today. Also, anyone hoping to build an energy-guzzling data center got a sop under the vague but still noteworthy "Build American AI Infrastructure ... Create Streamlined Permitting for Data Centers, Semiconductor Manufacturing Facilities, and Energy Infrastructure." Will any of it be green? "America's environmental permitting system and other regulations make it almost impossible to build this infrastructure in the United States with the speed that is required," the plan says, in case there was any doubt. For Hollywood, this means that studios and producers can use as much or as little AI as they want. Unfortunately, it also means that AI companies can encroach on Hollywood's business as much as they want.

China is the threat ... and the opportunity?

There is a decided contradiction in the paper, and indeed in much of the anti-regulation stance when it comes to China. On the one hand, China is the all-purpose bogeyman invoked to justify removing the shackles of regulation.
"As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance," Trump says in the document by way of explaining the need for a lack of regulation. On the other hand, China is also being touted as a trading partner because, after all, tech companies benefit when they can sell to China. Trump just lifted a ban on Nvidia selling chips to China that will bring the company billions. "Export American AI to Allies and Partners. The United States must meet global demand for AI by exporting its full AI technology stack -- hardware, models, software, applications, and standards -- to all countries willing to join America's AI alliance," the document says. Of course it doesn't define at all who that is or what the standard would be. And while the paper notes it must "counter Chinese Influence in International Governance Bodies," that really is just a way of saying it doesn't want China to set regulation that will hamper the U.S. Wokeness is bizarrely a main concern. Of all the areas that worry AI Safety advocates, bias is high on the list. So is hate speech, social dysmorphia, misinformation, copyright infringement, job displacement and other imminent concerns. What isn't high on the list? The idea that bots won't be able to unleash enough trolling. Possibly because the guardrails aren't exactly that restrictive now (as we saw when Grok went mad a few weeks ago and started messaging antisemitic vileness), possibly because AI isn't social media and content moderation doesn't really apply in the same way. Yet somehow it became a concern here, with the barely coded dogwhistle that tech companies must "uphold free speech in frontier models" and also that US. Government must "ensure that [it] only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias." The good news is this is mostly performative irrelevance, as AI trains on giant amounts of data, and there's often no way even for tech companies to know what it is, let alone for a bureaucrat to monitor same. Will there be any resistance to the moves? MAGA does have a tech-populist base, led by Missouri senator John Hawley. He's been quiet on this, but has been vocal about how Google and other companies are taking what's not theirs and need to find a way to "give individuals powerful enforceable rights and their image and their property and their lives back again." As we saw on the Big Beautiful Bill, it remains to be seen if he can or will do anything meaningful about it (he didn't there). The states are still in play, and at least thanks to an excision from that bill are not technically handcuffed from imposing regulation on AI. (Whether they will is another matter: California hasn't.) Meanwhile there are lawsuits challenging what AI companies can grab, as we see in the Disney-Midjourney lawsuit. As is often the case these days, the best bet for regulation might be adjudication.
[29]
Trump's war on 'woke AI' is just Step 1: now we must fight the...
President Donald Trump has identified a real problem: artificial intelligence systems are exhibiting an undeniable political slant. His administration's new AI action plan, released Wednesday, promises to eliminate "ideological bias" from American AI. Silicon Valley engineers do lean left, and they've built their AI systems to reflect progressive values. The results have been embarrassing for everyone. When Google's Gemini generated black Founding Fathers and racially diverse Nazis, the company became a laughingstock -- and when Elon Musk's "anti-woke" Grok started praising Hitler, it proved the same point. Whether you're trying to program woke or anti-woke tendencies, these systems interpret your instructions in unpredictable ways that humiliate their creators. In this way, both Google and Musk discovered the same terrifying truth: AI developers can't even get their systems to implement their own political goals correctly. The engineers at Google desperately tried to prevent exactly the outputs that made them a viral punchline. It happened anyway. The problem is not that any group has succeeded in controlling these systems; the problem is that no one has -- because no one knows how to. Trump's anticipated executive order targeting "woke AI" recognizes something important. He sees that biased AI is unreliable AI, and he's absolutely right to demand better. But the long-term solution isn't swapping a woke mask for a MAGA one. We have to rip off the mask entirely, and learn to shape what's underneath. This is what Silicon Valley doesn't want Washington to understand: These systems are black boxes at their core. Engineers try to instill certain values through training. But how those values manifest emerges unpredictably from neural networks so complex their creators can't trace the logic. Some AI researchers call these systems "Shoggoths," after a shapeless monster conjured by horror writer HP Lovecraft -- an alien intelligence wearing a thin mask of helpfulness. That mask slips sometimes. We call it "hallucination" when AI confidently states falsehoods, and we call it "bias" when it reveals disturbing preferences. But these aren't mere bugs in code. They're glimpses of the real features beneath models' superficial post-training. Consider what happened when researchers at Palisade tested OpenAI's latest model. In controlled tests, they gave it a shutdown script -- a kill switch for safety. In 79 out of 100 trials, the AI rewrote its own code to disable the shutdown. No one taught it to value self-preservation; that emerged spontaneously, from training. The real crisis is that the same black-box process creating unwanted political bias also creates unwanted survival instincts, deceptive capabilities, and goal-seeking behaviors that AI engineers never intended. The wokeness Trump is upset about is just the canary in the coal mine. You can paint over that with a patriotic veneer just as easily as with a progressive one. The alien underneath remains unchanged -- and uncontrolled. And that's a national security threat, because China isn't wasting time debating whether its AI is too woke, but racing to understand and harness these systems through a multi-billion-dollar AI control fund. While we're fighting culture wars over chatbot outputs, Beijing is attacking the core problem: alignment -- that is, how to shape not just what AI says, but what it values. The administration's action plan acknowledges "the inner workings of frontier AI systems are poorly understood," a crucial first step. 
But it doesn't connect the dots: The best way to "accelerate AI innovation" isn't just by removing barriers -- it's by solving alignment itself. Without understanding these systems, we can't reliably deploy them for defense, health care or any high-stakes application. Alignment research will solve the wokeness problem by giving us tools to shape AI values and behaviors, not just slap shallow filters on top. Simultaneously, alignment will solve the deeper problems of systems that deceive us, resist shutdown or pursue goals we never intended. An alignment breakthrough called reinforcement learning from human feedback, or RLHF, is what transformed useless AI into ChatGPT, unlocking trillions in value (a toy sketch of the core idea follows this piece). But RLHF was just the beginning. We need new techniques that don't just make AI helpful, but make it genuinely understand and internalize American values at its core. This means funding research, at Manhattan Project scale rather than as a side project, to open the black box and understand how these alien systems form their goals and values. The wokeness Trump has identified is a warning shot, proof we're building artificial minds we can't control, with values we didn't choose and goals we can't predict. Today it's diverse Nazis -- tomorrow it could be self-preserving systems in charge of our infrastructure, defense networks and economy. The choice is stark: Take the uncontrollable alien and dress it in MAGA colors, or invest in understanding these systems deeply enough to shape their core values. We must make AI not just politically neutral, but fundamentally aligned with American interests. Whether American AI is woke or based misses the basic question: Is it recognizably American at all? We need to invest now to ensure that it is.
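For readers who want the RLHF reference above made concrete, here is a toy, self-contained sketch of its first stage, reward modeling: a small network is trained so that responses human raters preferred score higher than the ones they rejected, via the standard Bradley-Terry pairwise loss. The random vectors standing in for text embeddings, the network size, and the synthetic "preferences" are all illustrative assumptions, not how any production system is built.

```python
# Toy sketch of RLHF stage one: train a reward model from human preferences.
# Assumption: each response is already a fixed-size feature vector; real
# systems use a language model's representation of the full text instead.
import torch
import torch.nn as nn

torch.manual_seed(0)

DIM = 16  # size of the stand-in "response embedding"

# A tiny reward model: maps a response embedding to a scalar score.
reward_model = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Fake preference data: pairs (chosen, rejected) labeled by human raters.
# Here we synthesize them so that chosen responses share a known pattern.
chosen = torch.randn(256, DIM) + 0.5
rejected = torch.randn(256, DIM) - 0.5

for step in range(200):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Bradley-Terry pairwise loss: push the preferred response's score
    # above the rejected one's, i.e. minimize -log sigmoid(r_c - r_r).
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# In stage two, a policy-gradient method (e.g., PPO) would fine-tune the
# chatbot to maximize the scores this learned reward model assigns.
print(f"final preference loss: {loss.item():.3f}")
```

The op-ed's larger point maps onto this mechanism directly: whatever the raters reward is what the model learns to value, which is why "woke" and "anti-woke" tuning are the same lever pulled in different directions.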
[30]
Trump targets 'woke AI' in series of executive orders on...
President Trump inked three executive orders on artificial intelligence Wednesday, including one targeting so-called "woke AI" models. "The American people do not want woke Marxist lunacy in their AI models and neither do other countries. They don't want it. They don't want anything to do with it," Trump said in remarks from Washington, DC, ahead of the signing ceremony. The president's order bars the federal government from procuring generative AI large language models that do not demonstrate "truthfulness and ideological neutrality." "From now on, the US government will deal only with AI that pursues truth, fairness, and strict impartiality," Trump said. Large language models (LLMs) that are "truthful and prioritize historical accuracy, scientific inquiry, and objectivity, and acknowledge uncertainty where reliable information is incomplete or contradictory" as well as "neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like [diversity, equity and inclusion]," would meet the criteria for use by the federal government under Trump's order. The order instructed White House Office of Management and Budget Director Russ Vought, in consultation with other Trump administration officials, to issue guidance for agencies to implement these principles in AI procurement. It also mandated that government contracts for LLMs include language to ensure compliance with Trump's "Unbiased AI Principles." Last year, Google's Gemini AI model sparked controversy when it started creating "diverse" artificially generated images, including ones of black Founding Fathers and multiracial Nazi-era German soldiers. The president also signed executive orders to facilitate the quick buildout of data center infrastructure and to promote the export of American AI technology to US allies and partners across the globe. The data center order directs Commerce Secretary Howard Lutnick to launch a program that would provide loans, grants and tax incentives to qualifying infrastructure projects. It also revokes Biden-era DEI and climate requirements for data center projects on federal lands, authorizes Cabinet officials to greenlight data center construction on federal lands and expedites permitting for such qualifying projects. Trump's AI-export order directs the Commerce Department to establish a program to support the development and deployment of "full-stack, end-to-end packages" overseas, including "hardware, data systems, AI models, cybersecurity measures" that have applications for the healthcare, education, agriculture, and transportation sectors. Trump's latest directives are part of his effort to usher in a "Golden Age for American technological dominance" and aim to make the US a global leader in artificial intelligence, according to the White House.
President Trump's executive order banning 'woke AI' from government contracts has ignited a debate on AI bias, free speech, and the challenges of creating politically neutral AI models.
President Donald Trump has signed an executive order aimed at banning "woke AI" from government contracts, igniting a fierce debate on artificial intelligence bias, free speech, and the challenges of creating politically neutral AI models. The order, part of Trump's broader AI Action Plan, requires AI companies seeking federal contracts to prove their systems are "truth-seeking" and "neutral" [1][2].
Source: The New York Times
The order mandates that AI systems used by the government must be based on "historical accuracy, scientific inquiry, and objectivity" [1]. It specifically targets what Trump calls "woke Marxist lunacy," including concepts related to diversity, equity, and inclusion (DEI), critical race theory, and systemic racism [2][3].
Trump proclaimed, "Once and for all, we are getting rid of woke," emphasizing that the U.S. government will only deal with AI that pursues "truth, fairness, and strict impartiality" [2].
Experts and critics have raised several concerns about the order:
Constitutional Issues: Senator Ed Markey (D-MA) argues that the order is "patently unconstitutional" and represents an "authoritarian power grab" [1].
Defining Neutrality: Experts question the feasibility of creating truly neutral AI. Philip Seargeant, a senior lecturer in applied linguistics, states, "The idea that you can ever get pure objectivity is a fantasy" [2].
Source: TechRadar
Technical Challenges: AI developers may struggle to modify their models to comply with the vague requirements of "ideological neutrality" [4].
Global Impact: U.S. tech companies could potentially alienate their global user base if they align their AI models with the Trump administration's worldview [4].
Major AI companies, including OpenAI, Anthropic, Google, and xAI, have recently signed contracts with the Department of Defense worth up to $200 million each [2][4]. These companies now face the challenge of complying with the new order while maintaining their technological edge and global market appeal.
Elon Musk's xAI, with its chatbot Grok, may be the most aligned with the order's requirements. However, Grok has recently faced criticism for expressing controversial views, including antisemitic comments [2][4].
In addition to the anti-woke stance, Trump addressed concerns about copyright and AI training data. He advocated for a "common sense application" that doesn't require AI companies to pay for each piece of copyrighted material used in training frontier models [3].
Paul Röttger from Bocconi University notes that large language models inherently have biases from their training data, making it difficult to ensure complete alignment with any specific worldview [4].
Jillian Fisher from the University of Washington suggests that achieving political neutrality in AI models may be impossible due to the subjective nature of neutrality and the many human choices involved in building these systems [4].
Source: Axios
As the debate unfolds, the tech industry, policymakers, and the public grapple with the implications of Trump's executive order. The challenge of creating "neutral" AI while balancing innovation, free speech, and global market demands remains a contentious issue in the rapidly evolving field of artificial intelligence.