7 Sources
[1]
Musk's DOGE used Meta's Llama 2 -- not Grok -- for gov't slashing, report says
An outdated Meta AI model was apparently at the center of the Department of Government Efficiency's initial ploy to purge parts of the federal government. Wired reviewed materials showing that affiliates of Elon Musk's DOGE working in the Office of Personnel Management "tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January." The "Fork in the Road" memo seemed to copy a memo that Musk sent to Twitter employees, giving federal workers the choice to be "loyal" -- and accept the government's return-to-office policy -- or else resign. At the time, it was rumored that DOGE was feeding government employee data into AI, and Wired confirmed that records indicate Llama 2 was used to sort through responses and see how many employees had resigned.
Llama 2 is perhaps best known for being part of another scandal. In November, Chinese researchers used Llama 2 as the foundation for an AI model used by the Chinese military, Reuters reported. Responding to the backlash, Meta told Reuters that the researchers' reliance on a "single" and "outdated" version of Llama was "unauthorized," then promptly reversed policies banning military uses and opened up its AI models for US national security applications, TechCrunch reported. "We are pleased to confirm that we're making Llama available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," a Meta blog said. "We're partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies."
Because Meta's models are open-source, they "can easily be used by the government to support Musk's goals without the company's explicit consent," Wired suggested.
It's hard to track where Meta's models may have been deployed in government so far, and it's unclear why DOGE relied on Llama 2 when Meta has made advancements with Llama 3 and 4. Not much is known about DOGE's use of Llama 2. Wired's review of records showed that DOGE deployed the model locally, "meaning it's unlikely to have sent data over the Internet," which was a privacy concern that many government workers expressed.
In an April letter sent to Russell Vought, director of the Office of Management and Budget, more than 40 lawmakers demanded a probe into DOGE's AI use, which, they warned -- alongside "serious security risks" -- could "have the potential to undermine successful and appropriate AI adoption." That letter called out a DOGE staffer and former SpaceX employee who supposedly used Musk's xAI Grok-2 model to create an "AI assistant," as well as the use of a chatbot named "GSAi" -- "based on Anthropic and Meta models" -- to analyze contract and procurement data. DOGE has also been linked to a software tool called AutoRIF that supercharges mass firings across the government. In particular, the letter emphasized the "major concerns about security" swirling around DOGE's use of "AI systems to analyze emails from a large portion of the two million person federal workforce describing their previous week's accomplishments," which they said lacked transparency.
Those emails came weeks after the "Fork in the Road" emails, Wired noted, asking workers to outline weekly accomplishments in five bullet points. Workers fretted over responses, worried that DOGE might be asking for sensitive information without security clearances, Wired reported. Wired could not confirm whether Llama 2 was also used to parse these email responses, but federal workers told Wired that if DOGE was "smart," then they'd likely "reuse their code" from the "Fork in the Road" email experiment.
Why didn't DOGE use Grok?
It seems that Grok, Musk's AI model, wasn't an option for DOGE's task because it existed only as a proprietary model in January. Moving forward, DOGE may rely more frequently on Grok, Wired reported: Microsoft announced this week that it would start hosting xAI's Grok 3 models in its Azure AI Foundry, The Verge reported, opening the models up for more uses.
In their letter, lawmakers urged Vought to investigate Musk's conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government. "Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system's operator -- a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data," lawmakers argued. "Generative AI models also frequently make errors and show significant biases -- the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place."
Although Wired's report seems to confirm that DOGE did not send sensitive data from the "Fork in the Road" emails to an external source, lawmakers want much more vetting of AI systems to deter "the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers." A seeming fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access, while potentially putting that data at risk of a breach. Lawmakers hope that DOGE will be forced to unplug all its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that "agencies must remove barriers to innovation and provide the best value for the taxpayer."
"While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data," their letter said. "We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models -- the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high."
[2]
DOGE Used Meta AI Model to Review Emails From Federal Workers
Materials viewed by WIRED show that DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous "Fork in the Road" email that was sent across the government in late January. The email offered deferred resignation to anyone opposed to changes the Trump administration was making to its federal workforce, including an enforced return-to-office policy, downsizing, and a requirement to be "loyal." To leave their position, recipients merely needed to reply with the word "resign." This email closely mirrored one that Musk sent to Twitter employees shortly after he took over the company in 2022.
Records show Llama was deployed to sort through email responses from federal workers to determine how many accepted the offer. The model appears to have run locally, according to materials viewed by WIRED, meaning it's unlikely to have sent data over the internet. Meta and OPM did not respond to requests for comment from WIRED.
Meta CEO Mark Zuckerberg appeared alongside other Silicon Valley tech leaders like Musk and Amazon founder Jeff Bezos at Trump's inauguration in January, but little has been publicly known about his company's tech being used in government. Because of Llama's open-source nature, the tool can easily be used by the government to support Musk's goals without the company's explicit consent.
Soon after Trump took office in January, DOGE operatives burrowed into OPM, an independent agency that essentially serves as human resources for the entire federal government. The new administration's first big goal for the agency was to create a government-wide email service, according to current and former OPM employees. Riccardo Biasini, a former Tesla engineer, was involved in building the infrastructure for the service that would send out the original "Fork in the Road" email, according to material viewed by WIRED and reviewed by two government tech workers.
In late February, weeks after the Fork email, OPM sent out another request to all government workers and asked them to submit five bullet points outlining what they accomplished each week. These emails threw a number of agencies into chaos, with workers unsure how to manage email responses that had to be mindful of security clearances and sensitive information. (Adding to the confusion, some workers who turned on read receipts reportedly found that the responses weren't actually being opened.) In February, NBC News reported that these emails were expected to go into an AI system for analysis. While the materials seen by WIRED do not explicitly show DOGE affiliates analyzing these weekly "five points" emails with Meta's Llama models, the way they did with the Fork emails, it wouldn't be difficult for them to do so, two federal workers tell WIRED.
[3]
Exclusive: Musk's DOGE expanding his Grok AI in U.S. government, raising conflict concerns
May 23 (Reuters) - Billionaire Elon Musk's DOGE team is expanding use of his artificial intelligence chatbot Grok in the U.S. federal government to analyze data, said three people familiar with the matter, potentially violating conflict-of-interest laws and putting at risk sensitive information on millions of Americans. Such use of Grok could reinforce concerns among privacy advocates and others that Musk's Department of Government Efficiency team appears to be casting aside long-established protections over the handling of sensitive data as President Donald Trump shakes up the U.S. bureaucracy. One of the three people familiar with the matter, who has knowledge of DOGE's activities, said Musk's team was using a customized version of the Grok chatbot. The apparent aim was for DOGE to sift through data more efficiently, this person said. "They ask questions, get it to prepare reports, give data analysis." The second and third person said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department. Reuters could not determine the specific data that had been fed into the generative AI tool or how the custom system was set up. Grok was developed by xAI, a tech operation that Musk launched in 2023 on his social media platform, X. If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws, said five specialists in technology and government ethics. It could also give the Tesla and SpaceX CEO access to valuable nonpublic federal contracting data at agencies he privately does business with or be used to help train Grok, a process in which AI models analyze troves of data, the experts said. Musk could also gain an unfair competitive advantage over other AI service providers from use of Grok in the federal government, they added. Musk, the White House and xAI did not respond to requests for comment. 
A Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok. "DOGE hasn't pushed any employees to use any particular tools or products," said the spokesperson, who did not respond to further questions. "DOGE is here to find and fight waste, fraud and abuse." Musk's xAI, an industry newcomer compared to rivals OpenAI and Anthropic, says on its website that it may monitor Grok users for "specific business purposes." "AI's knowledge should be all-encompassing and as far-reaching as possible," the website says. As part of Musk's stated push to eliminate government waste and inefficiency, the billionaire and his DOGE team have accessed heavily safeguarded federal databases that store personal information on millions of Americans. Experts said that data is typically off limits to all but a handful of officials because of the risk that it could be sold, lost, leaked, violate the privacy of Americans or expose the country to security threats. Typically, data sharing within the federal government requires agency authorization and the involvement of government specialists to ensure compliance with privacy, confidentiality and other laws. Analyzing sensitive federal data with Grok would mark an important shift in the work of DOGE, a team of software engineers and others connected to Musk. They have overseen the firing of thousands of federal workers, seized control of sensitive data systems and sought to dismantle agencies in the name of combating alleged waste, fraud and abuse. "Given the scale of data that DOGE has amassed and given the numerous concerns of porting that data into software like Grok, this to me is about as serious a privacy threat as you get," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy. His concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok. 
DOGE's access to federal information could give Grok and xAI an edge over other potential AI contractors looking to provide government services, said Cary Coglianese, an expert on federal regulations and ethics at the University of Pennsylvania. "The company has a financial interest in insisting that their product be used by federal employees," he said.
"APPEARANCE OF SELF-DEALING"
In addition to using Grok for its own analysis of government data, DOGE staff told DHS officials over the last two months to use Grok even though it had not been approved for use at the sprawling agency, said the second and third person. DHS oversees border security, immigration enforcement, cybersecurity and other sensitive national security functions. If federal employees are officially given access to Grok for such use, the federal government has to pay Musk's organization for access, the people said. "They were pushing it to be used across the department," said one of the people. Reuters could not independently establish if and how much the federal government would have been charged to use Grok. Reporters also couldn't determine if DHS workers followed the directive by DOGE staff to use Grok or ignored the request. DHS, under the previous Biden administration, created policies last year allowing its staff to use specific AI platforms, including OpenAI's ChatGPT, the Claude chatbot developed by Anthropic and another AI tool developed by Grammarly. DHS also created an internal DHS chatbot. The aim was to make DHS among the first federal agencies to embrace the technology and use generative AI, which can write research reports and carry out other complex tasks in response to prompts. Under the policy, staff could use the commercial bots for non-sensitive, non-confidential data, while DHS's internal bot could be fed more sensitive data, records posted on DHS's website show.
In May, DHS officials abruptly shut down employee access to all commercial AI tools - including ChatGPT - after workers were suspected of improperly using them with sensitive data, said the second and third sources. Instead, staff can still use the internal DHS AI tool. Reuters could not determine whether this prevented DOGE from promoting Grok at DHS. DHS did not respond to questions about the matter. Musk, the world's richest person, told investors last month that he would reduce his time with DOGE to a day or two a week starting in May. As a special government employee, he can only serve for 130 days. It's unclear when that term ends. If he reduces his hours to part time, he could extend his term beyond May. He has said, however, that his DOGE team will continue with their work as he winds down his role at the White House. If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute which bars officials -- including special government employees -- from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor. "This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time. If DOGE staffers were pushing Grok's use without Musk's involvement, for instance to ingratiate themselves with the billionaire, that would be ethically problematic but not a violation of the conflict-of-interest statute, said Painter. "We can't prosecute it, but it would be the job of the White House to prevent it. It gives the appearance of self-dealing." 
The push to use Grok coincides with a larger DOGE effort led by two staffers on Musk's team, Kyle Schutt and Edward Coristine, to use AI in the federal bureaucracy, said two other people familiar with DOGE's operations. Coristine, a 19-year-old who has used the online moniker "Big Balls," is one of DOGE's highest-profile members. Schutt and Coristine did not respond to requests for comment. DOGE staffers have attempted to gain access to DHS employee emails in recent months and ordered staff to train AI to identify communications suggesting an employee is not "loyal" to Trump's political agenda, the two sources said. Reuters could not establish whether Grok was used for such surveillance. In the last few weeks, a group of roughly a dozen workers at a Department of Defense agency were told by a supervisor that an algorithmic tool was monitoring some of their computer activity, according to two additional people briefed on the conversations. Reuters also reviewed two separate text message exchanges by people who were directly involved in the conversations. The sources asked that the specific agency not be named out of concern over potential retribution. They were not aware of what tool was being used. Using AI to identify the personal political beliefs of employees could violate civil service laws aimed at shielding career civil servants from political interference, said Coglianese, the expert on federal regulations and ethics at the University of Pennsylvania. In a statement to Reuters, the Department of Defense said the department's DOGE team had not been involved in any network monitoring nor had DOGE been "directed" to use any AI tools, including Grok. "It's important to note that all government computers are inherently subject to monitoring as part of the standard user agreement," said Kingsley Wilson, a Pentagon spokesperson. The department did not respond to follow-up questions about whether any new monitoring systems had been deployed recently. 
Additional reporting by Jeffrey Dastin and Alexandra Alper. Editing by Jason Szep.
[4]
Exclusive-Musk's DOGE Expanding His Grok AI in U.S. Government, Raising Conflict Concerns
(Reuters) -Billionaire Elon Musk's DOGE team is expanding use of his artificial intelligence chatbot Grok in the U.S. federal government to analyze data, said three people familiar with the matter, potentially violating conflict-of-interest laws and putting at risk sensitive information on millions of Americans. Such use of Grok could reinforce concerns among privacy advocates and others that Musk's Department of Government Efficiency team appears to be casting aside long-established protections over the handling of sensitive data as President Donald Trump shakes up the U.S. bureaucracy. One of the three people familiar with the matter, who has knowledge of DOGE's activities, said Musk's team was using a customized version of the Grok chatbot. The apparent aim was for DOGE to sift through data more efficiently, this person said. "They ask questions, get it to prepare reports, give data analysis." The second and third person said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department. Reuters could not determine the specific data that had been fed into the generative AI tool or how the custom system was set up. Grok was developed by xAI, a tech operation that Musk launched in 2023 on his social media platform, X. If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws, said five specialists in technology and government ethics. It could also give the Tesla and SpaceX CEO access to valuable nonpublic federal contracting data at agencies he privately does business with or be used to help train Grok, a process in which AI models analyze troves of data, the experts said. Musk could also gain an unfair competitive advantage over other AI service providers from use of Grok in the federal government, they added. Musk, the White House and xAI did not respond to requests for comment. 
A Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok. "DOGE hasn't pushed any employees to use any particular tools or products," said the spokesperson, who did not respond to further questions. "DOGE is here to find and fight waste, fraud and abuse." Musk's xAI, an industry newcomer compared to rivals OpenAI and Anthropic, says on its website that it may monitor Grok users for "specific business purposes." "AI's knowledge should be all-encompassing and as far-reaching as possible," the website says. As part of Musk's stated push to eliminate government waste and inefficiency, the billionaire and his DOGE team have accessed heavily safeguarded federal databases that store personal information on millions of Americans. Experts said that data is typically off limits to all but a handful of officials because of the risk that it could be sold, lost, leaked, violate the privacy of Americans or expose the country to security threats. Typically, data sharing within the federal government requires agency authorization and the involvement of government specialists to ensure compliance with privacy, confidentiality and other laws. Analyzing sensitive federal data with Grok would mark an important shift in the work of DOGE, a team of software engineers and others connected to Musk. They have overseen the firing of thousands of federal workers, seized control of sensitive data systems and sought to dismantle agencies in the name of combating alleged waste, fraud and abuse. "Given the scale of data that DOGE has amassed and given the numerous concerns of porting that data into software like Grok, this to me is about as serious a privacy threat as you get," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy. His concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok. 
DOGE's access to federal information could give Grok and xAI an edge over other potential AI contractors looking to provide government services, said Cary Coglianese, an expert on federal regulations and ethics at the University of Pennsylvania. "The company has a financial interest in insisting that their product be used by federal employees," he said. "APPEARANCE OF SELF-DEALING" In addition to using Grok for its own analysis of government data, DOGE staff told DHS officials over the last two months to use Grok even though it had not been approved for use at the sprawling agency, said the second and third person. DHS oversees border security, immigration enforcement, cybersecurity and other sensitive national security functions. If federal employees are officially given access to Grok for such use, the federal government has to pay Musk's organization for access, the people said. "They were pushing it to be used across the department," said one of the people. Reuters could not independently establish if and how much the federal government would have been charged to use Grok. Reporters also couldn't determine if DHS workers followed the directive by DOGE staff to use Grok or ignored the request. DHS, under the previous Biden administration, created policies last year allowing its staff to use specific AI platforms, including OpenAI's ChatGPT, the Claude chatbot developed by Anthropic and another AI tool developed by Grammarly. DHS also created an internal DHS chatbot. The aim was to make DHS among the first federal agencies to embrace the technology and use generative AI, which can write research reports and carry out other complex tasks in response to prompts. Under the policy, staff could use the commercial bots for non-sensitive, non-confidential data, while DHS's internal bot could be fed more sensitive data, records posted on DHS's website show. 
In May, DHS officials abruptly shut down employee access to all commercial AI tools - including ChatGPT - after workers were suspected of improperly using them with sensitive data, said the second and third sources. Instead, staff can still use the internal DHS AI tool. Reuters could not determine whether this prevented DOGE from promoting Grok at DHS. DHS did not respond to questions about the matter. Musk, the world's richest person, told investors last month that he would reduce his time with DOGE to a day or two a week starting in May. As a special government employee, he can only serve for 130 days. It's unclear when that term ends. If he reduces his hours to part time, he could extend his term beyond May. He has said, however, that his DOGE team will continue with their work as he winds down his role at the White House. If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute which bars officials -- including special government employees -- from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor. "This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time. If DOGE staffers were pushing Grok's use without Musk's involvement, for instance to ingratiate themselves with the billionaire, that would be ethically problematic but not a violation of the conflict-of-interest statute, said Painter. "We can't prosecute it, but it would be the job of the White House to prevent it. It gives the appearance of self-dealing." 
The push to use Grok coincides with a larger DOGE effort led by two staffers on Musk's team, Kyle Schutt and Edward Coristine, to use AI in the federal bureaucracy, said two other people familiar with DOGE's operations. Coristine, a 19-year-old who has used the online moniker "Big Balls," is one of DOGE's highest-profile members. Schutt and Coristine did not respond to requests for comment. DOGE staffers have attempted to gain access to DHS employee emails in recent months and ordered staff to train AI to identify communications suggesting an employee is not "loyal" to Trump's political agenda, the two sources said. Reuters could not establish whether Grok was used for such surveillance. In the last few weeks, a group of roughly a dozen workers at a Department of Defense agency were told by a supervisor that an algorithmic tool was monitoring some of their computer activity, according to two additional people briefed on the conversations. Reuters also reviewed two separate text message exchanges by people who were directly involved in the conversations. The sources asked that the specific agency not be named out of concern over potential retribution. They were not aware of what tool was being used. Using AI to identify the personal political beliefs of employees could violate civil service laws aimed at shielding career civil servants from political interference, said Coglianese, the expert on federal regulations and ethics at the University of Pennsylvania. In a statement to Reuters, the Department of Defense said the department's DOGE team had not been involved in any network monitoring nor had DOGE been "directed" to use any AI tools, including Grok. "It's important to note that all government computers are inherently subject to monitoring as part of the standard user agreement," said Kingsley Wilson, a Pentagon spokesperson. The department did not respond to follow-up questions about whether any new monitoring systems had been deployed recently. 
(Additional reporting by Jeffrey Dastin and Alexandra Alper. Editing by Jason Szep)
[5]
Elon Musk's DOGE expanding his Grok AI in U.S. government, raising conflict concerns
Elon Musk's DOGE team is reportedly expanding the use of his AI chatbot Grok within the U.S. federal government for data analysis, raising concerns about potential conflicts of interest and the security of sensitive information. DOGE staff allegedly encouraged Homeland Security officials to use Grok, even without departmental approval, sparking worries about privacy violations and unfair advantages for Musk's xAI.
Billionaire Elon Musk's DOGE team is expanding use of his artificial intelligence chatbot Grok in the U.S. federal government to analyze data, said three people familiar with the matter, potentially violating conflict-of-interest laws and putting at risk sensitive information on millions of Americans. Such use of Grok could reinforce concerns among privacy advocates and others that Musk's Department of Government Efficiency team appears to be casting aside long-established protections over the handling of sensitive data as President Donald Trump shakes up the U.S. bureaucracy. One of the three people familiar with the matter, who has knowledge of DOGE's activities, said Musk's team was using a customized version of the Grok chatbot. The apparent aim was for DOGE to sift through data more efficiently, this person said. "They ask questions, get it to prepare reports, give data analysis." The second and third person said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department. Reuters could not determine the specific data that had been fed into the generative AI tool or how the custom system was set up. Grok was developed by xAI, a tech operation that Musk launched in 2023 on his social media platform, X. If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws, said five specialists in technology and government ethics.
It could also give the Tesla and SpaceX CEO access to valuable nonpublic federal contracting data at agencies he privately does business with or be used to help train Grok, a process in which AI models analyze troves of data, the experts said. Musk could also gain an unfair competitive advantage over other AI service providers from use of Grok in the federal government, they added.

Musk, the White House and xAI did not respond to requests for comment.

A Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok. "DOGE hasn't pushed any employees to use any particular tools or products," said the spokesperson, who did not respond to further questions. "DOGE is here to find and fight waste, fraud and abuse."

Musk's xAI, an industry newcomer compared to rivals OpenAI and Anthropic, says on its website that it may monitor Grok users for "specific business purposes." "AI's knowledge should be all-encompassing and as far-reaching as possible," the website says.

As part of Musk's stated push to eliminate government waste and inefficiency, the billionaire and his DOGE team have accessed heavily safeguarded federal databases that store personal information on millions of Americans. Experts said that data is typically off limits to all but a handful of officials because of the risk that it could be sold, lost, leaked, violate the privacy of Americans or expose the country to security threats. Typically, data sharing within the federal government requires agency authorization and the involvement of government specialists to ensure compliance with privacy, confidentiality and other laws.

Analyzing sensitive federal data with Grok would mark an important shift in the work of DOGE, a team of software engineers and others connected to Musk. They have overseen the firing of thousands of federal workers, seized control of sensitive data systems and sought to dismantle agencies in the name of combating alleged waste, fraud and abuse.

"Given the scale of data that DOGE has amassed and given the numerous concerns of porting that data into software like Grok, this to me is about as serious a privacy threat as you get," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy. His concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok.

DOGE's access to federal information could give Grok and xAI an edge over other potential AI contractors looking to provide government services, said Cary Coglianese, an expert on federal regulations and ethics at the University of Pennsylvania. "The company has a financial interest in insisting that their product be used by federal employees," he said.

In addition to using Grok for its own analysis of government data, DOGE staff told DHS officials over the last two months to use Grok even though it had not been approved for use at the sprawling agency, said the second and third person. DHS oversees border security, immigration enforcement, cybersecurity and other sensitive national security functions. If federal employees are officially given access to Grok for such use, the federal government has to pay Musk's organization for access, the people said. "They were pushing it to be used across the department," said one of the people.

Reuters could not independently establish if and how much the federal government would have been charged to use Grok. Reporters also couldn't determine if DHS workers followed the directive by DOGE staff to use Grok or ignored the request.

DHS, under the previous Biden administration, created policies last year allowing its staff to use specific AI platforms, including OpenAI's ChatGPT, the Claude chatbot developed by Anthropic and another AI tool developed by Grammarly. DHS also created an internal DHS chatbot. The aim was to make DHS among the first federal agencies to embrace the technology and use generative AI, which can write research reports and carry out other complex tasks in response to prompts. Under the policy, staff could use the commercial bots for non-sensitive, non-confidential data, while DHS's internal bot could be fed more sensitive data, records posted on DHS's website show.

In May, DHS officials abruptly shut down employee access to all commercial AI tools - including ChatGPT - after workers were suspected of improperly using them with sensitive data, said the second and third sources. Instead, staff can still use the internal DHS AI tool. Reuters could not determine whether this prevented DOGE from promoting Grok at DHS. DHS did not respond to questions about the matter.

Musk, the world's richest person, told investors last month that he would reduce his time with DOGE to a day or two a week starting in May. As a special government employee, he can only serve for 130 days. It's unclear when that term ends. If he reduces his hours to part time, he could extend his term beyond May. He has said, however, that his DOGE team will continue with their work as he winds down his role at the White House.

If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute which bars officials -- including special government employees -- from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor. "This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time.

If DOGE staffers were pushing Grok's use without Musk's involvement, for instance to ingratiate themselves with the billionaire, that would be ethically problematic but not a violation of the conflict-of-interest statute, said Painter. "We can't prosecute it, but it would be the job of the White House to prevent it. It gives the appearance of self-dealing."

The push to use Grok coincides with a larger DOGE effort led by two staffers on Musk's team, Kyle Schutt and Edward Coristine, to use AI in the federal bureaucracy, said two other people familiar with DOGE's operations. Coristine, a 19-year-old who has used the online moniker "Big Balls," is one of DOGE's highest-profile members. Schutt and Coristine did not respond to requests for comment.

DOGE staffers have attempted to gain access to DHS employee emails in recent months and ordered staff to train AI to identify communications suggesting an employee is not "loyal" to Trump's political agenda, the two sources said. Reuters could not establish whether Grok was used for such surveillance.

In the last few weeks, a group of roughly a dozen workers at a Department of Defense agency were told by a supervisor that an algorithmic tool was monitoring some of their computer activity, according to two additional people briefed on the conversations. Reuters also reviewed two separate text message exchanges by people who were directly involved in the conversations. The sources asked that the specific agency not be named out of concern over potential retribution. They were not aware of what tool was being used.

Using AI to identify the personal political beliefs of employees could violate civil service laws aimed at shielding career civil servants from political interference, said Coglianese, the expert on federal regulations and ethics at the University of Pennsylvania.

In a statement to Reuters, the Department of Defense said the department's DOGE team had not been involved in any network monitoring nor had DOGE been "directed" to use any AI tools, including Grok. "It's important to note that all government computers are inherently subject to monitoring as part of the standard user agreement," said Kingsley Wilson, a Pentagon spokesperson. The department did not respond to follow-up questions about whether any new monitoring systems had been deployed recently.
By Marisa Taylor and Alexandra Ulmer, Reuters
Elon Musk's Department of Government Efficiency (DOGE) team is expanding the use of AI, including his Grok chatbot and Meta's Llama 2, in federal agencies. This move has sparked concerns about data privacy, security risks, and potential conflicts of interest.
Elon Musk's Department of Government Efficiency (DOGE) team is reportedly expanding the use of artificial intelligence within the U.S. federal government, raising significant concerns about data privacy, security risks, and potential conflicts of interest. The team has been utilizing both Musk's own Grok AI chatbot and Meta's Llama 2 model for various data analysis tasks [1][2][3].
According to materials reviewed by Wired, DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta's Llama 2 model to review and classify responses from federal workers to the controversial "Fork in the Road" email sent across the government in late January [2]. This email, reminiscent of one Musk sent to Twitter employees, offered deferred resignation to those opposed to changes in the federal workforce, including enforced return-to-office policies.
More recently, DOGE has been expanding the use of Musk's Grok AI chatbot across various government agencies. A customized version of Grok is reportedly being used to sift through data more efficiently, prepare reports, and conduct data analysis [3][4]. There are also allegations that DOGE staff have encouraged Department of Homeland Security officials to use Grok, despite the lack of departmental approval [3][4].
The use of these AI tools to analyze sensitive federal data has sparked serious privacy and security concerns. Experts warn that this could potentially violate security and privacy laws, especially if the data includes confidential government information [3][4]. Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, described this as "about as serious a privacy threat as you get" [3].
The situation also raises questions about potential conflicts of interest. There are concerns that Musk could gain access to valuable nonpublic federal contracting data at agencies he privately does business with, or that the data could be used to train Grok, potentially giving Musk's xAI an unfair competitive advantage [3][4].
In response to these developments, more than 40 lawmakers have demanded a probe into DOGE's AI use. In an April letter to Russell Vought, director of the Office of Management and Budget, they expressed concerns about "serious security risks" and the potential to "undermine successful and appropriate AI adoption" [1].
This situation highlights the broader challenges of integrating AI technologies into government operations. While there's a push for innovation and efficiency, as evidenced by DHS's earlier policies allowing staff to use specific AI platforms like ChatGPT and Claude [3][4], the rapid adoption of these technologies without proper vetting and oversight raises significant concerns.
As the use of AI in government continues to expand, it's clear that robust policies, transparency, and oversight will be crucial to ensure the protection of sensitive data and maintain public trust in government operations.
Summarized by Navi