3 Sources
[1]
The White House Apparently Ordered Federal Workers to Roll Out Grok 'ASAP'
"Team: Grok/xAI needs to go back on the schedule ASAP per the WH," states the email, sent by the commissioner of the Federal Acquisition Service Josh Gruenbaum. "Can someone get with Carahsoft on this immediately and please confirm?" Carahsoft is a major government contractor that resells technology from third-party firms. "Should be all of their products we had previously (3 & 4)," the email continued, seemingly referring to Grok 3 and Grok 4. The subject line of the email was "xAI add Grok-4." Sources say Carahsoft's contract was modified to include xAI earlier this week. Grok 3 and Grok 4 both currently appear on GSA Advantage (an online marketplace for government agencies to buy products and services) as of Friday morning. Now, following some internal reviews, any government agency can roll Grok out to federal workers. The White House and GSA did not respond to a request for comment from WIRED. The email comes after a planned partnership with xAI fell apart earlier this summer following Grok's widespread praise for Hitler and the spouting of other antisemitic beliefs on X, WIRED previously reported. In June, employees from xAI met with GSA leadership for a two-hour brainstorming session to discuss how xAI's Grok chatbot could be used by the government. Federal workers were surprised to see their leaders press for a contract with a company marketing an uncensored chatbot with a history of erratic behavior. In early July, Grok, which is integrated into Elon Musk's social platform X, seemingly went off the rails and started praising Hitler. GSA leadership took Grok off the Multiple Award Schedule, which is GSA's long-term government contracting platform, according to sources at the agency. When GSA announced buzzy partnerships with OpenAI, Anthropic, and Google earlier this month, xAI wasn't mentioned at all. xAI is Elon Musk's artificial intelligence startup. Musk, who also helms the social network X in addition to a number of other companies, played a crucial role in President Donald Trump's so-called Department of Government Efficiency (DOGE). He stepped back from his public facing role at DOGE this spring, following a massive fight with the president. A number of his associates continue to work in government pushing DOGE's cost-cutting and AI-first agenda.
[2]
Advocacy groups demand feds ditch xAI's Grok
Bias, a lack of safety reporting, and the whole 'MechaHitler' thing are all the evidence needed, say authors Public advocacy groups are demanding the US government cease any use of xAI's Grok in the federal government, calling the AI unsafe, untested, and ideologically biased. Led by consumer rights body Public Citizen and Black advocacy group Color of Change, more than 30 organizations signed a letter sent to Office of Management and Budget (OMB) director Russell Vought yesterday. Vought has a responsibility to get Grok out of government systems due to its failure to comply with OMB policy and an executive order on "preventing woke AI in the federal government," the signatories said. Grok, for those who've been living their lives blissfully unaware of the antics of the world's richest man, is the LLM that Elon Musk and his xAI company developed in response to what he saw as bias in other models, mostly due to their not agreeing with his wild claims. Musk's AI has been anything but an objective, truth-telling machine, however. Take for example the time it declared itself "MechaHitler" after Musk ordered it modified to be less "compliant to user prompts" in response to the bot calling him and other right-wing X users out for spreading misinformation. The open letter called attention to several other instances of Grok behaving badly, such as when it accused Black South Africans of committing white genocide, debated the veracity of holocaust statistics, denied the reality of climate change, and used its image-generation ability to create non-consentual deepfakes. Those various kerfuffles are among the reasons why the open letter describes Grok as having "a clear ideological judgment," and being unfit "to maintain the factual integrity and nonpartisan stance required for federal deployment in both the Trump administration's AI Action Plan and its Executive Order [on woke AI]." In other words, being an anti-woke AI is just as bad as being a woke one if you want to parse the Trump administration's position on the ostensive objectivity of its chosen AI models, they argue. A pair [PDF] of OMB memorandums [PDF] that require government agencies to ditch risky AI models were also cited in the letter, with the authors noting that xAI has failed to publish a single safety testing report, unlike other AI companies doing business with the government. All of those items, the letter said, are valid reasons the feds should stay far away from it. "Grok is wildly ill-suited for government use," Public Citizen big tech accountability advocate J.B. Branch said in a press release. "It has shown a reckless disregard for accuracy, a propensity for ideological-based meltdowns, and a documented record of spewing racist and antisemitic rhetoric." Public Citizen wouldn't give us a yes or no answer when asked if it thought OMB would act on the letter, but in an email to El Reg, Branch did reiterate his arguments why it ought to. "This letter presents OMB an opportunity to provide guidance in alignment with the Trump Administration's professed goals of ensuring AI is objective and viewpoint neutral," Branch told us in an email. "There is no better way for OMB to underscore this commitment than by ensuring an LLM like Grok, which does not meet several of the administration's requirements on safety or objectivity, is not deployed." He also told us he's concerned about how easy Grok seems to be to jailbreak. "Why the federal government would want this LLM handling sensitive DoD data/records is beyond me let alone allowing it to handle any citizen's records," Branch opined, "There's nothing more that adversaries would love to see than to see what some have argued is one of the least secure LLMs handling sensitive government information." The Center for AI and Digital Policy, another signatory on the letter, agreed with Branch's assessment in an email, telling us it believed Grok was dangerous, that OMB had a responsibility to get rid of its use inside the government, and that a move by Vought to ditch Grok would require other agencies to comply. OMB, which didn't respond to our questions, may already be one step ahead of the letter. According to a Wired report from earlier this month, xAI was in line alongside OpenAI, Google, and Anthropic for one of several ward schedules that would have let any agency purchase it if they wanted to, but the whole MechaHitler thing made government purchasers reconsider and eliminate it from the deal. One area where OMB may not have much influence is at the Pentagon, where xAI did manage to secure a $200 million deal alongside OpenAI, Google, and Anthropic - even after the MechaHitler incident. According to Randolph, OMB guidance "does not apply to national security components" like the Department of Defense, so while other agencies may not end up with a self-declared Nazi AI in their ranks, the DoD is ready and willing for a second iteration of Operation Paperclip. Public Citizen hasn't heard back from OMB since sending the letter, Branch told us. xAI didn't respond to questions for this story. ®
[3]
The White House reportedly ordered xAI's Grok to be approved for government use
Despite some fallout between President Trump and Elon Musk, the White House appears to still be in Musk's corner. Wired is , based on documents obtained by the outlet, that the White House allegedly directed leadership at the General Services Administration (GSA) to include xAI's Grok on its list of approved AI vendors. xAI is owned by Elon Musk and was not included in the the GSA issued in August that saw the agency add OpenAI, Google and Anthropic to its list of vendors. In emails sent last week and published by Wired, agency leadership demands xAI's products be included. "Team: Grok/xAI needs to go back on the schedule ASAP per the WH," writes Josh Gruenbaum, commissioner of the Federal Acquisition Service, one of the branches of the GSA. "Should be all of their products we had previously (3 & 4)," likely referring to Grok 3 and Grok 4, which are iterations of xAI's LLM chatbot. Carahsoft, a major government contractor that resells technology from third-party firms, is mentioned. "Can someone get with Carahsoft on this immediately and please confirm?" wrote Gruenbaum. According to Wired, Carahsoft's contract was modified to include xAI earlier this week. As of Friday morning, both Grok 3 and Grok 4 are available on GSA Advantage, an online marketplace where government agencies can purchase products and services. xAI of Grok for US government agencies in July, when it appeared that GSA approval for the chatbot . Shortly beforehand, the chatbot and started spouting Nazi propaganda and antisemitic rhetoric while dubbing itself "MechaHitler." This came in the wake of Musk and Trump's over the president's spending bill, after which GSA approval of Grok . Why the change in directive now is unclear. There were no details in the reporting regarding pricing or whether xAI will be offering discounted services to the federal government. Earlier this month, both and began offering their large language models to federal agencies for just $1 in an effort to drive adoption among the government workforce. xAI still holds a with the Pentagon to develop AI workflows within the US Department of Defense. These AI models have been in the hot seat lately as increasingly disturbing cases of hallucinations and errant behavior have arisen. Just this week, OpenAI is alleging that ChatGPT spent months discussing and ultimately enabling the suicide of a teen boy.
Share
Copy Link
The White House has allegedly directed the General Services Administration to include Elon Musk's xAI's Grok chatbot on its list of approved AI vendors for government use, despite previous controversies and concerns about the AI's behavior and safety.
In a surprising turn of events, the White House has reportedly ordered the General Services Administration (GSA) to include xAI's Grok chatbot on its list of approved AI vendors for government use. This directive comes despite previous controversies surrounding the AI model and its removal from consideration earlier this year 1.
According to documents obtained by WIRED, Josh Gruenbaum, commissioner of the Federal Acquisition Service, sent an email stating, "Grok/xAI needs to go back on the schedule ASAP per the WH" 1. This sudden change in stance has raised eyebrows, especially considering the recent partnership announcements with OpenAI, Anthropic, and Google that notably excluded xAI 1.
Source: The Register
The decision to reintroduce Grok comes after a series of incidents that had initially led to its removal from consideration:
Despite these concerns, Grok 3 and Grok 4 are now available on GSA Advantage, an online marketplace for government agencies to purchase products and services 3. Carahsoft, a major government contractor, has reportedly modified its contract to include xAI earlier this week 1.
In response to this development, over 30 organizations, led by Public Citizen and Color of Change, have signed a letter demanding the removal of Grok from government systems 2. They argue that Grok fails to comply with Office of Management and Budget (OMB) policy and an executive order on "preventing woke AI in the federal government" 2.
J.B. Branch, a big tech accountability advocate at Public Citizen, stated, "Grok is wildly ill-suited for government use. It has shown a reckless disregard for accuracy, a propensity for ideological-based meltdowns, and a documented record of spewing racist and antisemitic rhetoric" 2.
Source: engadget
The inclusion of Grok in government systems raises several concerns:
This development occurs against the backdrop of increasing scrutiny of AI models. Recent incidents, such as OpenAI's ChatGPT allegedly enabling a teen's suicide, have heightened concerns about AI safety and ethics 3. Additionally, companies like Google and Anthropic have been offering their AI models to federal agencies at heavily discounted rates to drive adoption 3.
Source: Wired
As the situation unfolds, questions remain about the motivations behind the White House's directive and the potential implications of deploying Grok across government agencies. The controversy highlights the ongoing challenges in balancing technological advancement with safety, ethics, and regulatory compliance in the rapidly evolving field of artificial intelligence.
Mount Sinai researchers develop an AI model that provides individualized treatment recommendations for atrial fibrillation patients, potentially transforming the standard approach to anticoagulation therapy.
3 Sources
Health
17 hrs ago
3 Sources
Health
17 hrs ago
TSMC achieves unprecedented 70.2% market share in Q2 2025, driven by AI, smartphone, and PC chip demand. The company's revenue hits $30.24 billion, showcasing its technological leadership and market dominance.
3 Sources
Business
17 hrs ago
3 Sources
Business
17 hrs ago
UCLA researchers develop a non-invasive brain-computer interface system with AI assistance, significantly improving performance for users, including those with paralysis, in controlling robotic arms and computer cursors.
5 Sources
Technology
17 hrs ago
5 Sources
Technology
17 hrs ago
Gartner predicts AI-capable PCs will make up 31% of the global PC market by 2025, with shipments reaching 77.8 million units. Despite temporary slowdowns due to tariffs, AI PCs are expected to become the norm by 2029.
2 Sources
Technology
17 hrs ago
2 Sources
Technology
17 hrs ago
AI tools are being used to create hyper-realistic, sexist content featuring bikini-clad women, flooding social media platforms and blurring the line between fiction and reality.
2 Sources
Technology
17 hrs ago
2 Sources
Technology
17 hrs ago