11 Sources
[1]
Musk's DOGE used Meta's Llama 2 -- not Grok -- for gov't slashing, report says
An outdated Meta AI model was apparently at the center of the Department of Government Efficiency's initial ploy to purge parts of the federal government. Wired reviewed materials showing that affiliates of Elon Musk's DOGE working in the Office of Personnel Management "tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January." The "Fork in the Road" memo seemed to copy a memo that Musk sent to Twitter employees, giving federal workers the choice to be "loyal" -- and accept the government's return-to-office policy -- or else resign. At the time, it was rumored that DOGE was feeding government employee data into AI, and Wired confirmed that records indicate Llama 2 was used to sort through responses and see how many employees had resigned. Llama 2 is perhaps best known for being part of another scandal. In November, Chinese researchers used Llama 2 as the foundation for an AI model used by the Chinese military, Reuters reported. Responding to the backlash, Meta told Reuters that the researchers' reliance on a "single" and "outdated" model was "unauthorized," then promptly reversed policies banning military uses and opened up its AI models for US national security applications, TechCrunch reported. "We are pleased to confirm that we're making Llama available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," a Meta blog said. "We're partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies." Because Meta's models are open-source, they "can easily be used by the government to support Musk's goals without the company's explicit consent," Wired suggested.
It's hard to track where Meta's models may have been deployed in government so far, and it's unclear why DOGE relied on Llama 2 when Meta has made advancements with Llama 3 and 4. Not much is known about DOGE's use of Llama 2. Wired's review of records showed that DOGE deployed the model locally, "meaning it's unlikely to have sent data over the Internet," allaying a privacy concern that many government workers had expressed. In an April letter sent to Russell Vought, director of the Office of Management and Budget, more than 40 lawmakers demanded a probe into DOGE's AI use, which, they warned -- alongside "serious security risks" -- could "have the potential to undermine successful and appropriate AI adoption." That letter called out a DOGE staffer and former SpaceX employee who supposedly used Musk's xAI Grok-2 model to create an "AI assistant," as well as the use of a chatbot named "GSAi" -- "based on Anthropic and Meta models" -- to analyze contract and procurement data. DOGE has also been linked to software called AutoRIF that supercharges mass firings across the government. In particular, the letter emphasized the "major concerns about security" swirling around DOGE's use of "AI systems to analyze emails from a large portion of the two million person federal workforce describing their previous week's accomplishments," which they said lacked transparency. Those emails came weeks after the "Fork in the Road" emails, Wired noted, asking workers to outline weekly accomplishments in five bullet points. Workers fretted over responses, worried that DOGE might be asking for sensitive information without security clearances, Wired reported. Wired could not confirm if Llama 2 was also used to parse these email responses, but federal workers told Wired that if DOGE was "smart," then they'd likely "reuse their code" from the "Fork in the Road" email experiment.
Why didn't DOGE use Grok?
Grok, Musk's AI model, apparently wasn't an option for DOGE's task because it was still a proprietary model in January. Moving forward, DOGE may rely more frequently on Grok, Wired reported: this week, Microsoft announced it would start hosting xAI's Grok 3 models in its Azure AI Foundry, The Verge reported, opening the models up for more uses. In their letter, lawmakers urged Vought to investigate Musk's conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government. "Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system's operator -- a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data," lawmakers argued. "Generative AI models also frequently make errors and show significant biases -- the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place." Although Wired's report seems to confirm that DOGE did not send sensitive data from the "Fork in the Road" emails to an external source, lawmakers want much more vetting of AI systems to deter "the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers." An apparent fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access, while potentially putting that data at risk of a breach. Lawmakers are hoping that DOGE will be forced to unplug all its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that "agencies must remove barriers to innovation and provide the best value for the taxpayer."
"While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data," their letter said. "We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models -- the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high."
[2]
DOGE Used Meta AI Model to Review Emails From Federal Workers
Materials viewed by WIRED show that DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous "Fork in the Road" email that was sent across the government in late January. The email offered deferred resignation to anyone opposed to changes the Trump administration was making to the federal workforce, including an enforced return-to-office policy, downsizing, and a requirement to be "loyal." To leave their position, recipients merely needed to reply with the word "resign." This email closely mirrored one that Musk sent to Twitter employees shortly after he took over the company in 2022. Records show Llama was deployed to sort through email responses from federal workers to determine how many accepted the offer. The model appears to have run locally, according to materials viewed by WIRED, meaning it's unlikely to have sent data over the internet. Meta and OPM did not respond to requests for comment from WIRED. Meta CEO Mark Zuckerberg appeared alongside other Silicon Valley tech leaders like Musk and Amazon founder Jeff Bezos at Trump's inauguration in January, but little has been publicly known about his company's tech being used in government. Because of Llama's open-source nature, the tool can easily be used by the government to support Musk's goals without the company's explicit consent. Soon after Trump took office in January, DOGE operatives burrowed into OPM, an independent agency that essentially serves as human resources for the entire federal government. The new administration's first big goal for the agency was to create a government-wide email service, according to current and former OPM employees. Riccardo Biasini, a former Tesla engineer, was involved in building the infrastructure for the service that would send out the original "Fork in the Road" email, according to material viewed by WIRED and reviewed by two government tech workers.
In late February, weeks after the Fork email, OPM sent out another request to all government workers and asked them to submit five bullet points outlining what they accomplished each week. These emails threw a number of agencies into chaos, with workers unsure how to manage email responses that had to be mindful of security clearances and sensitive information. (Adding to the confusion, some workers who turned on read receipts reportedly found that the responses weren't actually being opened.) In February, NBC News reported that these emails were expected to go into an AI system for analysis. While the materials seen by WIRED do not explicitly show DOGE affiliates analyzing these weekly "five points" emails with Meta's Llama models, the way they did with the Fork emails, it wouldn't be difficult for them to do so, two federal workers tell WIRED.
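None of the reporting describes how DOGE's classification actually worked in code. Purely as an illustration of the kind of task described -- a locally hosted model labeling each reply and tallying how many workers accepted the offer -- here is a minimal sketch. Everything in it (the prompt wording, the ACCEPT/DECLINE labels, the `generate` callable standing in for a local Llama 2 call via something like llama.cpp) is a hypothetical construction, not DOGE's actual pipeline.

```python
# Illustrative sketch only: no reporting describes DOGE's actual code.
# `generate` stands in for a call to a locally hosted model (e.g. Llama 2
# served by llama.cpp), so no reply text would leave the machine.
from collections import Counter

PROMPT = (
    "An employee replied to the 'Fork in the Road' offer.\n"
    "Reply text: {reply!r}\n"
    "Answer with exactly one word, ACCEPT or DECLINE."
)

def classify_replies(replies, generate):
    """Label each reply via the model callable and tally the results."""
    tally = Counter()
    for reply in replies:
        answer = generate(PROMPT.format(reply=reply)).strip().upper()
        label = "ACCEPT" if answer.startswith("ACCEPT") else "DECLINE"
        tally[label] += 1
    return tally

if __name__ == "__main__":
    # Stub model for demonstration; a real deployment would prompt the
    # local LLM instead of keyword-matching.
    stub = lambda prompt: "ACCEPT" if "resign" in prompt.lower() else "DECLINE"
    print(classify_replies(["Resign", "I am staying."], stub))
```

The point of the structure is the privacy property the records suggest: because `generate` wraps a local process rather than an API client, the replies never traverse the internet.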
[3]
Exclusive: Musk's DOGE expanding his Grok AI in U.S. government, raising conflict concerns
May 23 (Reuters) - Billionaire Elon Musk's DOGE team is expanding use of his artificial intelligence chatbot Grok in the U.S. federal government to analyze data, said three people familiar with the matter, potentially violating conflict-of-interest laws and putting at risk sensitive information on millions of Americans. Such use of Grok could reinforce concerns among privacy advocates and others that Musk's Department of Government Efficiency team appears to be casting aside long-established protections over the handling of sensitive data as President Donald Trump shakes up the U.S. bureaucracy. One of the three people familiar with the matter, who has knowledge of DOGE's activities, said Musk's team was using a customized version of the Grok chatbot. The apparent aim was for DOGE to sift through data more efficiently, this person said. "They ask questions, get it to prepare reports, give data analysis." The second and third person said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department. Reuters could not determine the specific data that had been fed into the generative AI tool or how the custom system was set up. Grok was developed by xAI, a tech operation that Musk launched in 2023 on his social media platform, X. If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws, said five specialists in technology and government ethics. It could also give the Tesla and SpaceX CEO access to valuable nonpublic federal contracting data at agencies he privately does business with or be used to help train Grok, a process in which AI models analyze troves of data, the experts said. Musk could also gain an unfair competitive advantage over other AI service providers from use of Grok in the federal government, they added. Musk, the White House and xAI did not respond to requests for comment. 
A Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok. "DOGE hasn't pushed any employees to use any particular tools or products," said the spokesperson, who did not respond to further questions. "DOGE is here to find and fight waste, fraud and abuse." Musk's xAI, an industry newcomer compared to rivals OpenAI and Anthropic, says on its website that it may monitor Grok users for "specific business purposes." "AI's knowledge should be all-encompassing and as far-reaching as possible," the website says. As part of Musk's stated push to eliminate government waste and inefficiency, the billionaire and his DOGE team have accessed heavily safeguarded federal databases that store personal information on millions of Americans. Experts said that data is typically off limits to all but a handful of officials because of the risk that it could be sold, lost, leaked, violate the privacy of Americans or expose the country to security threats. Typically, data sharing within the federal government requires agency authorization and the involvement of government specialists to ensure compliance with privacy, confidentiality and other laws. Analyzing sensitive federal data with Grok would mark an important shift in the work of DOGE, a team of software engineers and others connected to Musk. They have overseen the firing of thousands of federal workers, seized control of sensitive data systems and sought to dismantle agencies in the name of combating alleged waste, fraud and abuse. "Given the scale of data that DOGE has amassed and given the numerous concerns of porting that data into software like Grok, this to me is about as serious a privacy threat as you get," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy. His concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok. 
DOGE's access to federal information could give Grok and xAI an edge over other potential AI contractors looking to provide government services, said Cary Coglianese, an expert on federal regulations and ethics at the University of Pennsylvania. "The company has a financial interest in insisting that their product be used by federal employees," he said.
"APPEARANCE OF SELF-DEALING"
In addition to using Grok for its own analysis of government data, DOGE staff told DHS officials over the last two months to use Grok even though it had not been approved for use at the sprawling agency, said the second and third person. DHS oversees border security, immigration enforcement, cybersecurity and other sensitive national security functions. If federal employees are officially given access to Grok for such use, the federal government has to pay Musk's organization for access, the people said. "They were pushing it to be used across the department," said one of the people. Reuters could not independently establish if and how much the federal government would have been charged to use Grok. Reporters also couldn't determine if DHS workers followed the directive by DOGE staff to use Grok or ignored the request. DHS, under the previous Biden administration, created policies last year allowing its staff to use specific AI platforms, including OpenAI's ChatGPT, the Claude chatbot developed by Anthropic and another AI tool developed by Grammarly. DHS also created an internal DHS chatbot. The aim was to make DHS among the first federal agencies to embrace the technology and use generative AI, which can write research reports and carry out other complex tasks in response to prompts. Under the policy, staff could use the commercial bots for non-sensitive, non-confidential data, while DHS's internal bot could be fed more sensitive data, records posted on DHS's website show.
In May, DHS officials abruptly shut down employee access to all commercial AI tools - including ChatGPT - after workers were suspected of improperly using them with sensitive data, said the second and third sources. Instead, staff can still use the internal DHS AI tool. Reuters could not determine whether this prevented DOGE from promoting Grok at DHS. DHS did not respond to questions about the matter. Musk, the world's richest person, told investors last month that he would reduce his time with DOGE to a day or two a week starting in May. As a special government employee, he can only serve for 130 days. It's unclear when that term ends. If he reduces his hours to part time, he could extend his term beyond May. He has said, however, that his DOGE team will continue with their work as he winds down his role at the White House. If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute which bars officials -- including special government employees -- from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor. "This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time. If DOGE staffers were pushing Grok's use without Musk's involvement, for instance to ingratiate themselves with the billionaire, that would be ethically problematic but not a violation of the conflict-of-interest statute, said Painter. "We can't prosecute it, but it would be the job of the White House to prevent it. It gives the appearance of self-dealing." 
The push to use Grok coincides with a larger DOGE effort led by two staffers on Musk's team, Kyle Schutt and Edward Coristine, to use AI in the federal bureaucracy, said two other people familiar with DOGE's operations. Coristine, a 19-year-old who has used the online moniker "Big Balls," is one of DOGE's highest-profile members. Schutt and Coristine did not respond to requests for comment. DOGE staffers have attempted to gain access to DHS employee emails in recent months and ordered staff to train AI to identify communications suggesting an employee is not "loyal" to Trump's political agenda, the two sources said. Reuters could not establish whether Grok was used for such surveillance. In the last few weeks, a group of roughly a dozen workers at a Department of Defense agency were told by a supervisor that an algorithmic tool was monitoring some of their computer activity, according to two additional people briefed on the conversations. Reuters also reviewed two separate text message exchanges by people who were directly involved in the conversations. The sources asked that the specific agency not be named out of concern over potential retribution. They were not aware of what tool was being used. Using AI to identify the personal political beliefs of employees could violate civil service laws aimed at shielding career civil servants from political interference, said Coglianese, the expert on federal regulations and ethics at the University of Pennsylvania. In a statement to Reuters, the Department of Defense said the department's DOGE team had not been involved in any network monitoring nor had DOGE been "directed" to use any AI tools, including Grok. "It's important to note that all government computers are inherently subject to monitoring as part of the standard user agreement," said Kingsley Wilson, a Pentagon spokesperson. The department did not respond to follow-up questions about whether any new monitoring systems had been deployed recently. 
Additional reporting by Jeffrey Dastin and Alexandra Alper. Editing by Jason Szep.
[4]
Musk's DOGE expanding his Grok AI in U.S. government, raising conflict concerns: Reuters
[5]
Elon's DOGE Is Reportedly Using Grok AI With Government Data
Reuters reports that Elon Musk's annoying chatbot, Grok, is now being used by the U.S. government. While the extent and nature of that usage is unclear, sources interviewed by the news outlet have expressed alarm at the implications of the chatbot's access to government data. Grok was launched by xAI, an AI company founded by Musk in 2023, and has since become integrated into Musk's social media platform, X. The chatbot is known to summarize information in the most cringe-inducing manner possible, and was originally fashioned as an "anti-woke" antidote to ChatGPT and other more politically correct applications (though it's turned out to be too woke for conservatives anyway). Musk's Department of Government Efficiency team is now using a customized version of Grok, with the apparent goal of sorting and analyzing tranches of data. The team may also be using the chatbot to prepare reports, sources told the outlet. Aside from the very obvious data privacy concerns raised by Grok's integration with government data, it appears that, once again, Musk is at the center of a conflict-of-interest violation. In fact, Reuters characterizes the promotion of Grok as a potentially criminal transgression of federal regulations. The outlet writes: If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute which bars officials -- including special government employees -- from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor. "This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time. Yes, but how many times have we heard that one before? Elon has conflicts of interest up the wazoo.
He is a walking conflict of interest, at this point. To my knowledge, he's never seen the interior of a courtroom and, unless he gets caught with a dead body or something, it seems doubtful he ever will. Ever since Musk helped Trump get re-elected with hundreds of millions from his own piggybank, he has been treating the U.S. government like his personal plaything to destroy. Everywhere you look, the billionaire appears to be benefiting from his work with the government, whether it's the White House bullying tariffed countries to adopt services from the billionaire's satellite internet company, Starlink, or a new report that shows the billionaire's companies may have saved nearly $2.37 billion from federal fines and penalties that were active under Biden but have since been "neutralized" in the Trump era. As far as DOGE's mandate goes, the organization has been an unmitigated failure. It has barely saved a fraction of the money that Musk initially claimed that it would and, in the long term, the cuts are likely to cost Americans money, since many of them have been to important agencies that dispense key services to Americans.
[6]
Musk pushes Grok AI on US government, raising ethics issues
[7]
Exclusive-Musk's DOGE Expanding His Grok AI in U.S. Government, Raising Conflict Concerns
(Reuters) -Billionaire Elon Musk's DOGE team is expanding use of his artificial intelligence chatbot Grok in the U.S. federal government to analyze data, said three people familiar with the matter, potentially violating conflict-of-interest laws and putting at risk sensitive information on millions of Americans. Such use of Grok could reinforce concerns among privacy advocates and others that Musk's Department of Government Efficiency team appears to be casting aside long-established protections over the handling of sensitive data as President Donald Trump shakes up the U.S. bureaucracy. One of the three people familiar with the matter, who has knowledge of DOGE's activities, said Musk's team was using a customized version of the Grok chatbot. The apparent aim was for DOGE to sift through data more efficiently, this person said. "They ask questions, get it to prepare reports, give data analysis." The second and third person said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department. Reuters could not determine the specific data that had been fed into the generative AI tool or how the custom system was set up. Grok was developed by xAI, a tech operation that Musk launched in 2023 on his social media platform, X. If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws, said five specialists in technology and government ethics. It could also give the Tesla and SpaceX CEO access to valuable nonpublic federal contracting data at agencies he privately does business with or be used to help train Grok, a process in which AI models analyze troves of data, the experts said. Musk could also gain an unfair competitive advantage over other AI service providers from use of Grok in the federal government, they added. Musk, the White House and xAI did not respond to requests for comment. 
A Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok. "DOGE hasn't pushed any employees to use any particular tools or products," said the spokesperson, who did not respond to further questions. "DOGE is here to find and fight waste, fraud and abuse." Musk's xAI, an industry newcomer compared to rivals OpenAI and Anthropic, says on its website that it may monitor Grok users for "specific business purposes." "AI's knowledge should be all-encompassing and as far-reaching as possible," the website says. As part of Musk's stated push to eliminate government waste and inefficiency, the billionaire and his DOGE team have accessed heavily safeguarded federal databases that store personal information on millions of Americans. Experts said that data is typically off limits to all but a handful of officials because of the risk that it could be sold, lost, leaked, violate the privacy of Americans or expose the country to security threats. Typically, data sharing within the federal government requires agency authorization and the involvement of government specialists to ensure compliance with privacy, confidentiality and other laws. Analyzing sensitive federal data with Grok would mark an important shift in the work of DOGE, a team of software engineers and others connected to Musk. They have overseen the firing of thousands of federal workers, seized control of sensitive data systems and sought to dismantle agencies in the name of combating alleged waste, fraud and abuse. "Given the scale of data that DOGE has amassed and given the numerous concerns of porting that data into software like Grok, this to me is about as serious a privacy threat as you get," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy. His concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok. 
DOGE's access to federal information could give Grok and xAI an edge over other potential AI contractors looking to provide government services, said Cary Coglianese, an expert on federal regulations and ethics at the University of Pennsylvania. "The company has a financial interest in insisting that their product be used by federal employees," he said.

"APPEARANCE OF SELF-DEALING"

In addition to using Grok for its own analysis of government data, DOGE staff told DHS officials over the last two months to use Grok even though it had not been approved for use at the sprawling agency, said the second and third person. DHS oversees border security, immigration enforcement, cybersecurity and other sensitive national security functions. If federal employees are officially given access to Grok for such use, the federal government has to pay Musk's organization for access, the people said. "They were pushing it to be used across the department," said one of the people. Reuters could not independently establish if and how much the federal government would have been charged to use Grok. Reporters also couldn't determine if DHS workers followed the directive by DOGE staff to use Grok or ignored the request. DHS, under the previous Biden administration, created policies last year allowing its staff to use specific AI platforms, including OpenAI's ChatGPT, the Claude chatbot developed by Anthropic and another AI tool developed by Grammarly. DHS also created an internal DHS chatbot. The aim was to make DHS among the first federal agencies to embrace the technology and use generative AI, which can write research reports and carry out other complex tasks in response to prompts. Under the policy, staff could use the commercial bots for non-sensitive, non-confidential data, while DHS's internal bot could be fed more sensitive data, records posted on DHS's website show.
In May, DHS officials abruptly shut down employee access to all commercial AI tools - including ChatGPT - after workers were suspected of improperly using them with sensitive data, said the second and third sources. Instead, staff can still use the internal DHS AI tool. Reuters could not determine whether this prevented DOGE from promoting Grok at DHS. DHS did not respond to questions about the matter. Musk, the world's richest person, told investors last month that he would reduce his time with DOGE to a day or two a week starting in May. As a special government employee, he can only serve for 130 days. It's unclear when that term ends. If he reduces his hours to part time, he could extend his term beyond May. He has said, however, that his DOGE team will continue with their work as he winds down his role at the White House. If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute which bars officials -- including special government employees -- from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor. "This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time. If DOGE staffers were pushing Grok's use without Musk's involvement, for instance to ingratiate themselves with the billionaire, that would be ethically problematic but not a violation of the conflict-of-interest statute, said Painter. "We can't prosecute it, but it would be the job of the White House to prevent it. It gives the appearance of self-dealing." 
The push to use Grok coincides with a larger DOGE effort led by two staffers on Musk's team, Kyle Schutt and Edward Coristine, to use AI in the federal bureaucracy, said two other people familiar with DOGE's operations. Coristine, a 19-year-old who has used the online moniker "Big Balls," is one of DOGE's highest-profile members. Schutt and Coristine did not respond to requests for comment. DOGE staffers have attempted to gain access to DHS employee emails in recent months and ordered staff to train AI to identify communications suggesting an employee is not "loyal" to Trump's political agenda, the two sources said. Reuters could not establish whether Grok was used for such surveillance. In the last few weeks, a group of roughly a dozen workers at a Department of Defense agency were told by a supervisor that an algorithmic tool was monitoring some of their computer activity, according to two additional people briefed on the conversations. Reuters also reviewed two separate text message exchanges by people who were directly involved in the conversations. The sources asked that the specific agency not be named out of concern over potential retribution. They were not aware of what tool was being used. Using AI to identify the personal political beliefs of employees could violate civil service laws aimed at shielding career civil servants from political interference, said Coglianese, the expert on federal regulations and ethics at the University of Pennsylvania. In a statement to Reuters, the Department of Defense said the department's DOGE team had not been involved in any network monitoring nor had DOGE been "directed" to use any AI tools, including Grok. "It's important to note that all government computers are inherently subject to monitoring as part of the standard user agreement," said Kingsley Wilson, a Pentagon spokesperson. The department did not respond to follow-up questions about whether any new monitoring systems had been deployed recently. 
(Additional reporting by Jeffrey Dastin and Alexandra Alper. Editing by Jason Szep)
[8]
Elon Musk's DOGE expanding his Grok AI in U.S. government, raising conflict concerns
Elon Musk's DOGE team is reportedly expanding the use of his AI chatbot Grok within the U.S. federal government for data analysis, raising concerns about potential conflicts of interest and the security of sensitive information. DOGE staff allegedly encouraged Homeland Security officials to use Grok, even without departmental approval, sparking worries about privacy violations and unfair advantages for Musk's xAI.
[9]
Elon Musk's Grok AI Is Quietly Being Deployed Across US Government Agencies -- Experts Warn It Could Breach Privacy Laws And Create Serious Conflicts Of Interest: Report - Tesla (NASDAQ:TSLA)
Elon Musk's artificial intelligence chatbot, Grok, is reportedly being quietly used within U.S. government agencies under the direction of his Department of Government Efficiency (DOGE) team. What Happened: The AI tool, developed by Musk's company xAI, is being used to analyze federal data, prepare reports, and assist agency operations, including at the Department of Homeland Security (DHS), despite lacking formal approval for such use, reported Reuters, citing multiple sources familiar with the matter. DOGE staff reportedly also promoted Grok to DHS officials over the past two months. A spokesperson for the DHS denied claims that DOGE had pressured DHS employees to adopt Grok. "DOGE is here to find and fight waste, fraud and abuse," the spokesperson told the publication. The AI system's deployment has sparked warnings from legal and ethics experts who argue that using Grok without proper clearance could expose protected government data and violate conflict-of-interest laws, the report added. "This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," Richard Painter, ethics counsel to former President George W. Bush, told the outlet. Privacy experts also raised concerns that federal data could leak back into xAI's systems or be used to give Musk a competitive edge, like giving the Tesla Inc. and SpaceX CEO access to sensitive, nonpublic federal contracting information from agencies he does business with privately, the report said. Why It's Important: Musk's role with DOGE has sparked debate since his appointment. While Musk has pushed for cost-cutting measures, Treasury data reportedly showed federal spending has risen by $154 billion since President Donald Trump returned to office. Earlier this month, Musk reflected on his first 100 days at DOGE, expressing mixed feelings.
He acknowledged some progress but admitted, "We haven't been as effective as I'd like." In February, privacy concerns emerged when Musk's DOGE team sought access to taxpayer data, sparking apprehensions among lawmakers. In Tesla's first-quarter earnings call last month, Musk revealed plans to scale back his involvement with DOGE amid growing investor concerns that his political engagement is diverting attention from Tesla's core business. Meanwhile, on Wednesday, the Trump administration attempted to block the release of DOGE records, appealing to the Supreme Court to prevent the disclosure of operational documents to a federal watchdog group. Price Action: Tesla shares rose 0.15% in after-hours trading, reaching $339.86, at the time of writing, according to Benzinga Pro.
[10]
Musk's DOGE expanding his Grok AI in U.S. government, raising conflict concerns: Reuters exclusive
By Marisa Taylor and Alexandra Ulmer, Reuters
[11]
Exclusive-Musk's DOGE expanding his Grok AI in U.S. government, raising conflict concerns
(Reuters) -Billionaire Elon Musk's DOGE team is expanding use of his artificial intelligence chatbot Grok in the U.S. federal government to analyze data, said three people familiar with the matter, potentially violating conflict-of-interest laws and putting at risk sensitive information on millions of Americans. Such use of Grok could reinforce concerns among privacy advocates and others that Musk's Department of Government Efficiency team appears to be casting aside long-established protections over the handling of sensitive data as President Donald Trump shakes up the U.S. bureaucracy. One of the three people familiar with the matter, who has knowledge of DOGE's activities, said Musk's team was using a customized version of the Grok chatbot. The apparent aim was for DOGE to sift through data more efficiently, this person said. "They ask questions, get it to prepare reports, give data analysis." The second and third person said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department. Reuters could not determine the specific data that had been fed into the generative AI tool or how the custom system was set up. Grok was developed by xAI, a tech operation that Musk launched in 2023 on his social media platform, X. If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws, said five specialists in technology and government ethics. It could also give the Tesla and SpaceX CEO access to valuable nonpublic federal contracting data at agencies he privately does business with or be used to help train Grok, a process in which AI models analyze troves of data, the experts said. Musk could also gain an unfair competitive advantage over other AI service providers from use of Grok in the federal government, they added. Musk, the White House and xAI did not respond to requests for comment. 
A Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok. "DOGE hasn't pushed any employees to use any particular tools or products," said the spokesperson, who did not respond to further questions. "DOGE is here to find and fight waste, fraud and abuse." Musk's xAI, an industry newcomer compared to rivals OpenAI and Anthropic, says on its website that it may monitor Grok users for "specific business purposes." "AI's knowledge should be all-encompassing and as far-reaching as possible," the website says. As part of Musk's stated push to eliminate government waste and inefficiency, the billionaire and his DOGE team have accessed heavily safeguarded federal databases that store personal information on millions of Americans. Experts said that data is typically off limits to all but a handful of officials because of the risk that it could be sold, lost, leaked, violate the privacy of Americans or expose the country to security threats. Typically, data sharing within the federal government requires agency authorization and the involvement of government specialists to ensure compliance with privacy, confidentiality and other laws. Analyzing sensitive federal data with Grok would mark an important shift in the work of DOGE, a team of software engineers and others connected to Musk. They have overseen the firing of thousands of federal workers, seized control of sensitive data systems and sought to dismantle agencies in the name of combating alleged waste, fraud and abuse. "Given the scale of data that DOGE has amassed and given the numerous concerns of porting that data into software like Grok, this to me is about as serious a privacy threat as you get," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy. His concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok. 
DOGE's access to federal information could give Grok and xAI an edge over other potential AI contractors looking to provide government services, said Cary Coglianese, an expert on federal regulations and ethics at the University of Pennsylvania. "The company has a financial interest in insisting that their product be used by federal employees," he said.

"APPEARANCE OF SELF-DEALING"

In addition to using Grok for its own analysis of government data, DOGE staff told DHS officials over the last two months to use Grok even though it had not been approved for use at the sprawling agency, said the second and third person. DHS oversees border security, immigration enforcement, cybersecurity and other sensitive national security functions.

If federal employees are officially given access to Grok for such use, the federal government has to pay Musk's organization for access, the people said. "They were pushing it to be used across the department," said one of the people. Reuters could not independently establish if and how much the federal government would have been charged to use Grok. Reporters also couldn't determine if DHS workers followed the directive by DOGE staff to use Grok or ignored the request.

DHS, under the previous Biden administration, created policies last year allowing its staff to use specific AI platforms, including OpenAI's ChatGPT, the Claude chatbot developed by Anthropic and another AI tool developed by Grammarly. DHS also created an internal DHS chatbot. The aim was to make DHS among the first federal agencies to embrace the technology and use generative AI, which can write research reports and carry out other complex tasks in response to prompts.

Under the policy, staff could use the commercial bots for non-sensitive, non-confidential data, while DHS's internal bot could be fed more sensitive data, records posted on DHS's website show.
In May, DHS officials abruptly shut down employee access to all commercial AI tools - including ChatGPT - after workers were suspected of improperly using them with sensitive data, said the second and third sources. Instead, staff can still use the internal DHS AI tool. Reuters could not determine whether this prevented DOGE from promoting Grok at DHS. DHS did not respond to questions about the matter.

Musk, the world's richest person, told investors last month that he would reduce his time with DOGE to a day or two a week starting in May. As a special government employee, he can only serve for 130 days. It's unclear when that term ends. If he reduces his hours to part time, he could extend his term beyond May. He has said, however, that his DOGE team will continue with their work as he winds down his role at the White House.

If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute which bars officials -- including special government employees -- from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor. "This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time.

If DOGE staffers were pushing Grok's use without Musk's involvement, for instance to ingratiate themselves with the billionaire, that would be ethically problematic but not a violation of the conflict-of-interest statute, said Painter. "We can't prosecute it, but it would be the job of the White House to prevent it. It gives the appearance of self-dealing."
The push to use Grok coincides with a larger DOGE effort led by two staffers on Musk's team, Kyle Schutt and Edward Coristine, to use AI in the federal bureaucracy, said two other people familiar with DOGE's operations. Coristine, a 19-year-old who has used the online moniker "Big Balls," is one of DOGE's highest-profile members. Schutt and Coristine did not respond to requests for comment.

DOGE staffers have attempted to gain access to DHS employee emails in recent months and ordered staff to train AI to identify communications suggesting an employee is not "loyal" to Trump's political agenda, the two sources said. Reuters could not establish whether Grok was used for such surveillance.

In the last few weeks, a group of roughly a dozen workers at a Department of Defense agency were told by a supervisor that an algorithmic tool was monitoring some of their computer activity, according to two additional people briefed on the conversations. Reuters also reviewed two separate text message exchanges by people who were directly involved in the conversations. The sources asked that the specific agency not be named out of concern over potential retribution. They were not aware of what tool was being used.

Using AI to identify the personal political beliefs of employees could violate civil service laws aimed at shielding career civil servants from political interference, said Coglianese, the expert on federal regulations and ethics at the University of Pennsylvania.

In a statement to Reuters, the Department of Defense said the department's DOGE team had not been involved in any network monitoring nor had DOGE been "directed" to use any AI tools, including Grok. "It's important to note that all government computers are inherently subject to monitoring as part of the standard user agreement," said Kingsley Wilson, a Pentagon spokesperson. The department did not respond to follow-up questions about whether any new monitoring systems had been deployed recently.
(Additional reporting by Jeffrey Dastin and Alexandra Alper. Editing by Jason Szep)
Elon Musk's Department of Government Efficiency (DOGE) team is under scrutiny for using AI models, including Meta's Llama 2 and Musk's own Grok, to analyze sensitive government data, raising concerns about privacy, security, and potential conflicts of interest.
The Department of Government Efficiency (DOGE), led by Elon Musk, has come under scrutiny for its use of artificial intelligence models to analyze sensitive government data. Initially, DOGE employed Meta's Llama 2 model to process responses from federal workers to the controversial "Fork in the Road" email [1]. This email, reminiscent of Musk's approach at Twitter, offered employees a choice between accepting new work policies or resigning [2].
Source: Wired
While Llama 2 was used for the initial analysis, recent reports suggest that DOGE is expanding its use of Musk's own AI chatbot, Grok, developed by xAI [3]. This shift raises concerns about potential conflicts of interest and the handling of sensitive government information.
Source: Economic Times
The use of AI models to analyze government data has alarmed privacy advocates and ethics experts. Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, described it as "about as serious a privacy threat as you get" [4]. Concerns include the risk of data leakage to private companies and unclear access controls for these custom AI versions.
Experts warn that if sensitive or confidential government information is being fed into these AI tools, it could violate security and privacy laws [3]. Additionally, the arrangement could give Musk access to valuable nonpublic federal contracting data, potentially providing an unfair advantage to his companies in government dealings [4].
The use of Grok in federal agencies could give xAI an edge over other AI service providers seeking government contracts. Cary Coglianese, an expert on federal regulations and ethics, noted that "The company has a financial interest in insisting that their product be used by federal employees" [4].
While a Homeland Security spokesperson denied that DOGE had pressured employees to use specific tools, reports suggest that DOGE staff have encouraged the use of Grok even in departments where it hasn't been officially approved [3][5].
This controversy highlights the challenges of integrating AI into government operations. The Department of Homeland Security had previously created policies allowing the use of specific AI platforms like ChatGPT and Claude for non-sensitive data [3]. However, DOGE's approach appears to bypass established protocols for data handling and AI deployment in government settings.
Source: Reuters
Since Musk helped Trump win re-election, his influence on government operations has grown significantly. DOGE's activities, including mass firings and control over sensitive data systems, have been controversial. Critics argue that the organization has failed to achieve its stated goal of eliminating government waste and inefficiency [5].
As this situation unfolds, it raises critical questions about the balance between technological innovation in government and the protection of sensitive data and ethical standards. The use of AI in government operations, particularly when tied to influential private sector figures like Musk, will likely continue to be a subject of intense debate and scrutiny.