US and China to discuss AI safety guardrails as two superpowers seek to prevent misuse


The United States and China will begin formal talks on AI safety, marking the first such discussions during President Trump's second term. Treasury Secretary Scott Bessent announced plans to establish protocols preventing non-state actors from accessing powerful AI models, though fierce competition for AI supremacy remains a central concern for both nations.

US-China AI Discussions Mark New Chapter in Tech Diplomacy

The United States and China will start formal discussions on AI safety, Treasury Secretary Scott Bessent announced from Beijing on Thursday. Speaking in an interview with CNBC, Bessent revealed that the two nations plan to establish protocols aimed at keeping powerful AI models away from non-state actors such as hackers and terrorists [1]. These talks represent the first time the two countries will formally address artificial intelligence during President Trump's second term, as concerns grow about the technology's rapid advancement and potential weaponization [1].

"The two A.I. superpowers are going to start talking," Bessent said, emphasizing the need to prevent non-state actors from accessing powerful AI models through established best practices [1]. The discussions were expected to take place during the summit between Xi Jinping and President Trump in the Chinese capital, though specific timing was not disclosed [1].

AI Guardrails Proposed Amid Fierce Competition

The push for AI guardrails comes as both nations race for AI supremacy, a competition that has historically hindered cooperation on safety measures. Bessent made clear that the United States views its technological lead as crucial leverage in these negotiations. "The Chinese are substantially behind us" in AI development, he stated, adding that Washington would not be pursuing these discussions if the situation were reversed [1]. Experts suggest China's AI models may lag behind leading U.S. models by just a few months [1].

Bessent told CNBC it was "of utmost importance" that the U.S. maintain its lead over China in AI, noting this competitive dynamic is precisely why Beijing has shown interest in discussing guardrails [2]. The Treasury Secretary emphasized that U.S. best practices and values would shape the framework, which would then be rolled out globally [1].

Divergent AI Risk Perceptions Complicate Cooperation

A significant challenge in US-China discussions on AI safety stems from fundamentally different threat assessments. American experts typically focus on existential risks, including the possibility of artificial general intelligence surpassing human capabilities and spiraling beyond human control [1]. Chinese researchers and officials, meanwhile, prioritize concerns related to social stability and information control, such as chatbots generating content that challenges China's leadership and policies [1].

Despite these differences, both nations recognize shared threats. Researchers in both countries have highlighted concerns about AI being weaponized to develop new biological weapons, representing common ground for cooperation [1]. The ability to safeguard the most powerful models from malicious actors addresses security concerns that transcend ideological boundaries.

Balancing Innovation With Safety as the Primary Goal

Bessent emphasized that any protocols established must balance innovation with safety. "What we don't want to do is stifle innovation. So our responsibility is to come up with the highest performance calculus where we can get the most innovation and the highest level of safety," he stated [2]. This concern reflects a broader tension in both countries, where officials and experts argue they cannot afford to slow technological development and risk losing ground to their rivals [1].

The coming discussions will test whether the world's two largest economies can cooperate on AI safety while maintaining their competitive edge. As AI capabilities and adoption have grown rapidly, so have concerns about the technology falling into the wrong hands or exceeding human control [1]. Whether these talks yield concrete safeguards or remain aspirational will shape the global AI landscape for years to come.
