Elon Musk's xAI Selectively Signs EU's AI Code of Practice, Highlighting Industry Divide

Elon Musk's xAI agrees to sign the safety and security chapter of the EU's AI code of practice, while expressing concerns over other aspects of the regulation. This move underscores the ongoing debate in the AI industry about balancing innovation with regulation.

xAI's Selective Endorsement of EU AI Code

Elon Musk's artificial intelligence company, xAI, has announced its intention to sign the safety and security chapter of the European Union's AI code of practice. This voluntary code, developed by 13 independent experts, aims to guide companies in complying with the EU's landmark AI regulations [1]. The move marks a significant step in the ongoing dialogue between AI companies and regulatory bodies.

Source: Analytics Insight

The EU AI Code of Practice

The EU's code comprises three main chapters: transparency, copyright, and safety and security. While the guidance on transparency and copyright applies to all general-purpose AI providers, the safety and security chapter specifically targets providers of more advanced AI models [1].

xAI's Stance on the Code

In a statement posted on X (formerly Twitter), xAI expressed its support for AI safety and confirmed its intention to sign the safety and security chapter. However, the company also voiced concerns about other aspects of the code, stating, "While the AI Act and the Code have a portion that promotes AI safety, its other parts contain requirements that are profoundly detrimental to innovation and its copyright provisions are clearly (an) over-reach" [1].

Industry Responses to the EU Code

xAI's selective endorsement of the EU code highlights the diverse approaches taken by major tech companies in response to AI regulation:

  1. Google (Alphabet) has previously stated its intention to sign the code of practice [1].

  2. Microsoft's President, Brad Smith, has indicated that the company would likely sign the code [1].

  3. Meta (Facebook) has declined to sign the code, citing concerns about legal uncertainties for model developers and measures that exceed the scope of the AI Act [1].

Implications of Signing the Code

Companies that choose to sign the code stand to benefit from increased legal certainty. The voluntary nature of the code allows companies to align themselves with EU regulations while potentially influencing the development of future AI policies [2].

Source: Reuters

Balancing Innovation and Regulation

xAI's decision to sign only the safety and security chapter while criticizing other aspects of the code underscores the ongoing challenge of balancing innovation with regulation in the rapidly evolving field of AI. This selective approach may set a precedent for how AI companies engage with regulatory frameworks, potentially leading to more nuanced discussions about the impact of regulations on AI development and deployment.
