US, China skip AI declaration as 35 nations commit to governing AI use in warfare

Reviewed by Nidhi Govil


At a military AI summit in Spain, only 35 of 85 attending nations signed a declaration on governing AI use in warfare, with the US and China notably absent. The non-binding agreement outlines 20 principles including human oversight over AI-powered weapons, but global tensions and a strategic prisoner's dilemma kept major powers from committing.

Military Heavyweights Decline AI Declaration at Spain Summit

Only 35 of the 85 countries attending the Responsible AI in the Military Domain (REAIM) summit in A Coruña, Spain signed a declaration on governing AI use in warfare, with military heavyweights the US and China conspicuously opting out. The non-binding declaration marks a significant moment in efforts to establish regulatory frameworks for military AI, yet the absence of the world's leading military powers raises questions about the effectiveness of international agreements in this rapidly evolving domain.

Source: Reuters

Tensions in relations between the United States and European allies, coupled with uncertainty over transatlantic ties in the coming months and years, made some countries hesitant to sign joint agreements, several attendees and delegates said [2]. This reluctance reflects deeper global tensions that complicate efforts to establish unified approaches to military technology governance.

The Prisoner's Dilemma Facing Nations

Governments face a classic prisoner's dilemma when it comes to AI use in military applications, caught between implementing responsible restrictions and avoiding self-imposed limitations that adversaries might exploit. Dutch Defence Minister Ruben Brekelmans captured this tension succinctly: "Russia and China are moving very fast. That creates urgency to make progress in developing AI. But seeing it going fast also increases the urgency to keep working on its responsible use. The two go hand-in-hand".

The pledge underscores growing concern that rapid advances in artificial intelligence could outpace rules around its military use, raising the risk of accidents, escalation, miscalculation, and unintended conflict [3]. This represents a fundamental challenge for nations seeking to maintain strategic advantages while promoting stability.

What the 20 Principles Cover

The declaration commits signatories to 20 principles addressing key concerns around autonomous weapons and human oversight. These include affirming human responsibility over AI-powered weapons, encouraging clear chains of command and control, and sharing information on national oversight arrangements "where consistent with national security" [2]. The document also outlines the importance of risk assessments and testing, along with training and education for personnel operating military AI capabilities.

Major signatories included Canada, Germany, France, Britain, the Netherlands, South Korea and Ukraine. The participation of Ukraine, currently engaged in active conflict where military technology plays a critical role, signals the practical urgency some nations feel about establishing weapons control guidelines.

A Step Back from Previous Summits

At two prior military AI summits, in The Hague and Seoul in 2023 and 2024 respectively, around 60 nations—excluding China but including the United States—endorsed a modest "blueprint for action" without legal commitment. The drop from roughly 60 signatories to 35, combined with the US declining to sign this time, suggests growing divergence in how nations approach military AI governance.

While this year's document was also non-binding, some were still uncomfortable endorsing more concrete policies, said Yasmin Afina, a researcher at the U.N. Institute for Disarmament Research and an adviser on the process [2]. This discomfort points to the delicate balance nations must strike between transparency and strategic advantage in an increasingly competitive military technology landscape. The absence of binding commitments means that even signatories retain flexibility in how they interpret and implement these principles, raising questions about whether voluntary frameworks can effectively govern technologies with such profound security implications.
