2 Sources
[1]
Musk's xAI to sign chapter on safety and security in EU's AI code of practice
July 31 (Reuters) - Elon Musk's xAI on Thursday said it will sign a chapter on safety and security from the European Union's code of practice, which aims to help companies comply with the bloc's landmark artificial intelligence rules. Signing up to the code, which was drawn up by 13 independent experts, is voluntary, and companies that decline to do so will not benefit from the legal certainty provided to a signatory.
The EU's code has three chapters - transparency, copyright, and safety and security. While the guidance on transparency and copyright will apply to all general-purpose AI providers, the chapter on safety and security targets providers of the most advanced models.
"xAI supports AI safety and will be signing the EU AI Act's Code of Practice Chapter on Safety and Security. While the AI Act and the Code have a portion that promotes AI safety, its other parts contain requirements that are profoundly detrimental to innovation and its copyright provisions are clearly (an) over-reach," xAI said in a post on X. The company did not respond to a request for comment, made outside regular business hours, on whether it plans to sign the other two chapters of the code.
Alphabet's (GOOGL.O) Google has previously said it would sign the code of practice, while Microsoft (MSFT.O) President Brad Smith has said the company would likely sign it. Facebook owner Meta (META.O) has said it will not be signing the code, arguing that it introduces a number of legal uncertainties for model developers, as well as measures that go far beyond the scope of the AI Act.
Reporting by Chandni Shah in Bengaluru; Editing by Mrigank Dhaniwala
[2]
Elon Musk's xAI Signs EU's AI Code of Practice, But There's a Catch
xAI Embraces AI Regulation on Safety, Challenges Copyright and Transparency Rules
Elon Musk's startup xAI has confirmed it is signing the Safety and Security Chapter of the European Union's AI Code of Practice, aligning part of the company with the EU's evolving artificial intelligence policies, while declining to endorse other parts of the framework. The Code of Practice is a non-binding guide for artificial intelligence regulation in Europe built around three pillars: transparency, copyright, and safety. Developers of general-purpose artificial intelligence are encouraged to adopt the whole code, but only developers of the most advanced systems are urged to commit to the safety-specific chapter.
Elon Musk's xAI agrees to sign the safety and security chapter of the EU's AI code of practice, while expressing concerns over other aspects of the regulation. This move underscores the ongoing debate in the AI industry about balancing innovation with regulation.
Elon Musk's artificial intelligence company, xAI, has announced its intention to sign the safety and security chapter of the European Union's AI code of practice. This voluntary code, developed by 13 independent experts, aims to guide companies in complying with the EU's landmark AI regulations [1]. The move marks a significant step in the ongoing dialogue between AI companies and regulatory bodies.
The EU's code comprises three main chapters: transparency, copyright, and safety and security. While the guidance on transparency and copyright applies to all general-purpose AI providers, the safety and security chapter specifically targets providers of more advanced AI models [1].
In a statement posted on X (formerly Twitter), xAI expressed its support for AI safety and confirmed its intention to sign the safety and security chapter. However, the company also voiced concerns about other aspects of the code, stating, "While the AI Act and the Code have a portion that promotes AI safety, its other parts contain requirements that are profoundly detrimental to innovation and its copyright provisions are clearly (an) over-reach" [1].
xAI's selective endorsement of the EU code highlights the diverse approaches taken by major tech companies in response to AI regulation:
Google (Alphabet) has previously stated its intention to sign the code of practice [1].
Microsoft President Brad Smith has indicated that the company would likely sign the code [1].
Meta (Facebook) has declined to sign the code, citing concerns about legal uncertainties for model developers and measures that exceed the scope of the AI Act [1].
Companies that choose to sign the code stand to benefit from increased legal certainty. The voluntary nature of the code allows companies to align themselves with EU regulations while potentially influencing the development of future AI policies [2].
xAI's decision to sign only the safety and security chapter while criticizing other aspects of the code underscores the ongoing challenge of balancing innovation with regulation in the rapidly evolving field of AI. This selective approach may set a precedent for how AI companies engage with regulatory frameworks, potentially leading to more nuanced discussions about the impact of regulations on AI development and deployment.
Summarized by Navi