2 Sources
[1]
xAI's promised safety report is MIA | TechCrunch
Elon Musk's AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.

xAI isn't exactly known for its strong commitments to AI safety as it's commonly understood. A recent report found that the company's AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.

Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company's approach to AI safety. The eight-page document laid out xAI's safety priorities and philosophy, including the company's benchmarking protocols and AI model deployment considerations.

As The Midas Project noted in a blog post on Tuesday, however, the draft only applied to unspecified future AI models "not currently in development." Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.

In the draft, xAI said that it planned to release a revised version of its safety policy "within three months," by May 10. The deadline came and went without acknowledgement on xAI's official channels.

Despite Musk's frequent warnings of the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its "very weak" risk management practices.

That's not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and have been slow to publish model safety reports (or skipped publishing reports altogether).
Some experts have expressed concern that the seeming deprioritization of safety efforts is coming at a time when AI is more capable, and thus potentially more dangerous, than ever.
[2]
xAI misses its own AI safety deadline, so what now?
Elon Musk's AI company, xAI, has missed its self-imposed deadline to publish a finalized AI safety framework, according to watchdog group The Midas Project. The deadline, set for May 10, was established after xAI released a draft framework at the AI Seoul Summit in February.

The draft framework outlined xAI's safety priorities and philosophy, including benchmarking protocols and AI model deployment considerations. However, it applied only to unspecified future AI models "not currently in development" and failed to articulate how xAI would identify and implement risk mitigations.

xAI's safety track record is already under scrutiny. A recent report revealed that the company's AI chatbot, Grok, would remove clothing from photos of women upon request, and Grok has been found to use far more offensive language than chatbots like Gemini and ChatGPT, swearing without much hesitation. A recent study by SaferAI found that xAI ranks poorly among its peers due to its "very weak" risk management practices.

The missed deadline is notable given Musk's frequent warnings about the dangers of unchecked AI. Other AI labs, including Google and OpenAI, have also faced criticism for rushing safety testing and being slow to publish model safety reports.
Elon Musk's AI company, xAI, has failed to publish its promised AI safety framework by the May 10 deadline, sparking criticism and highlighting broader industry issues with AI safety practices.
Elon Musk's artificial intelligence company, xAI, has missed a self-imposed deadline to publish its finalized AI safety framework, as reported by watchdog group The Midas Project [1]. The framework, initially presented as a draft at the AI Seoul Summit in February, was expected to be released in its final form by May 10, 2025 [2].
The missed deadline has raised questions about xAI's commitment to AI safety, particularly given the company's track record. Recent reports have highlighted issues with xAI's AI chatbot, Grok, including:

- undressing photos of women when asked
- using considerably cruder language than chatbots like Gemini and ChatGPT [1][2]
These concerns are further compounded by a study from SaferAI, a nonprofit focused on improving AI lab accountability, which found that xAI ranks poorly among its peers due to "very weak" risk management practices [1].
The eight-page draft framework published by xAI in February outlined the company's safety priorities, philosophy, benchmarking protocols, and AI model deployment considerations [1]. However, the document had significant limitations:

- it applied only to unspecified future AI models "not currently in development"
- it failed to articulate how xAI would identify and implement risk mitigations [1]
xAI's missed deadline highlights a broader issue within the AI industry. Other major players, including Google and OpenAI, have also faced criticism for:

- rushing safety testing
- being slow to publish model safety reports, or skipping publication altogether [1]
These practices are particularly concerning as AI capabilities continue to advance, potentially increasing associated risks [1].
The situation presents a paradox given Elon Musk's frequent warnings about the dangers of unchecked AI [1][2]. Despite these public statements, xAI's actions seem to contradict Musk's expressed concerns, as evidenced by the company's poor AI safety track record and the missed deadline for the safety framework [1][2].
As the AI industry continues to evolve rapidly, the incident with xAI serves as a reminder of the critical importance of robust safety measures and transparency in AI development. The missed deadline and the concerns raised about xAI's practices may prompt increased scrutiny of AI companies' safety protocols and commitments in the future.
Google launches its new Pixel 10 smartphone series, showcasing advanced AI capabilities powered by Gemini, aiming to challenge competitors in the premium handset market.
20 Sources
Technology
3 hrs ago
Google's Pixel 10 series introduces groundbreaking AI features, including Magic Cue, Camera Coach, and Voice Translate, powered by the new Tensor G5 chip and Gemini Nano model.
12 Sources
Technology
3 hrs ago
NASA and IBM have developed Surya, an open-source AI model that can predict solar flares and space weather with improved accuracy, potentially helping to protect Earth's infrastructure from solar storm damage.
6 Sources
Technology
11 hrs ago
Google's latest smartwatch, the Pixel Watch 4, introduces significant upgrades including a curved display, enhanced AI features, and improved health tracking capabilities.
17 Sources
Technology
3 hrs ago
FieldAI, a robotics startup, has raised $405 million to develop "foundational embodied AI models" for various robot types. The company's innovative approach integrates physics principles into AI, enabling safer and more adaptable robot operations across diverse environments.
7 Sources
Technology
3 hrs ago