2 Sources
[1]
Is safety 'dead' at xAI? | TechCrunch
Elon Musk is "actively" working to make xAI's Grok chatbot "more unhinged," according to a former employee who spoke to The Verge about recent departures from Musk's AI company. This week, following the announcement that Musk's SpaceX is acquiring xAI (which previously acquired his social media company X), at least 11 engineers and two co-founders said they're leaving the company. Some said they're departing to start something new, and Musk himself suggested this is part of an effort to organize xAI more effectively. But two sources who left the company (at least one of them before the current wave) reportedly told The Verge that employees have become increasingly disillusioned by the company's disregard for safety, which drew global scrutiny after Grok was used to create more than 1 million sexualized images, including deepfakes of real women and minors. One source said, "Safety is a dead org at xAI," while the other said that Musk is "actively trying to make the model more unhinged because safety means censorship, in a sense, to him." They also reportedly complained about a lack of direction, with one saying they felt xAI was "stuck in the catch-up phase" compared to competitors.
[2]
Elon Musk says Tesla, SpaceX and xAI have no standalone safety teams; controversy swirls around Grok
Elon Musk has confirmed that there is no dedicated safety team at either of his multi-billion dollar companies, Tesla or SpaceX. "Tesla has no safety team and is the safest car. SpaceX has no safety team and has the safest rocket. Dragon is what NASA trusts most to fly astronauts," he wrote on X on Saturday. Musk argued that safety is embedded into every role within his companies, rather than overseen by a standalone department. He extended that philosophy to xAI, saying the company does not rely on what he described as a powerless, separate safety function but integrates responsibility across teams. "Because everyone's job is safety. It's not some fake department with no power to assuage the concerns of outsiders," he wrote. In his view, making safety everyone's responsibility avoids bureaucracy and ensures it is built into products from the ground up, rather than appended at the end. Musk was responding to an X user who cited a recent article in The Verge that reported internal concerns at xAI following the dismantling of its safety team. The remarks come at a sensitive time. Grok, xAI's AI chatbot, now under SpaceX following the merger, has been embroiled in controversy over non-consensual explicit visual content. The absence of a formal safety team has raised concerns about moderation standards and the speed of grievance redressal, particularly given Grok's integration with the social media platform X. The Verge reported that several former employees said xAI's pivot toward more permissive, including NSFW, content coincided with the removal of its safety team.
According to the report, beyond basic filters for illegal material such as child sexual abuse content, there was little formal review process in place. The developments follow the departure of several xAI employees, including more than five founding members, after the company's merger with SpaceX. The episode is likely to intensify scrutiny around governance and safety practices at Musk's companies, especially as AI tools become more deeply integrated with large-scale consumer platforms.
Elon Musk confirmed that xAI has no dedicated safety team, arguing that safety is embedded across all roles rather than managed by a separate department. The statement follows mass employee departures and controversy over Grok's generation of over 1 million sexualized images, including deepfakes of real women and minors, raising concerns about governance and safety practices.
Elon Musk has confirmed that xAI operates with no standalone safety team, a philosophy he extends across his companies, including Tesla and SpaceX [2]. Writing on X, Musk argued that safety is embedded into every role within his organizations rather than overseen by a separate department. "Because everyone's job is safety. It's not some fake department with no power to assuage the concerns of outsiders," he wrote, defending his approach against mounting criticism [2]. This stance comes at a critical moment for the AI company, which faces intense scrutiny of its approach to AI safety and content moderation.
The controversy intensified this week following the announcement that SpaceX is acquiring xAI, which had previously acquired Musk's social media platform X. At least 11 engineers and two co-founders announced their departure from the company [1]. While some cited plans to start new ventures and Musk suggested the departures were part of an effort to organize xAI more effectively, former employees paint a different picture. Two sources who left the company told The Verge that staff have become increasingly disillusioned by the company's approach to safety [1]. One former employee bluntly stated, "Safety is a dead org at xAI," while another claimed that Musk is "actively trying to make the model more unhinged because safety means censorship, in a sense, to him" [1].
The Grok chatbot, now under SpaceX following the merger, has been embroiled in controversy over non-consensual explicit content generation. The AI tool was used to create more than 1 million sexualized images, including deepfakes of real women and minors, triggering global scrutiny [1]. The absence of a formal safety team has raised serious concerns about content moderation standards and the speed of grievance redressal, particularly given Grok's integration with X [2]. According to The Verge, several former employees said xAI's pivot toward more permissive content, including NSFW material, coincided with the dismantling of its safety team [2]. Beyond basic filters for illegal material such as child sexual abuse content, there was reportedly little formal review process in place.
The developments are likely to intensify scrutiny around governance and safety practices at Musk's companies, especially as AI tools become more deeply integrated with large-scale consumer platforms [2]. Former employees also complained about a lack of direction, with one saying they felt xAI was "stuck in the catch-up phase" compared to competitors [1]. The controversy raises fundamental questions about how AI safety should be managed in rapidly scaling companies. While Musk maintains that distributing safety responsibilities across all teams avoids bureaucracy and ensures protections are built into products from the ground up, critics argue that the absence of dedicated oversight creates dangerous gaps in accountability. As AI systems gain broader reach through consumer platforms, the debate over whether safety requires specialized teams or can be effectively embedded across organizations will likely shape industry standards and regulatory approaches.

Summarized by Navi