2 Sources
[1]
The United States and China will start discussing A.I. safety, Bessent says.
The United States and China will discuss guardrails on artificial intelligence, including establishing a protocol for keeping powerful A.I. models out of the hands of nonstate actors, Treasury Secretary Scott Bessent said on Thursday. Mr. Bessent, who was speaking from Beijing in an interview with CNBC, did not give more details, including when these discussions would take place. But Xi Jinping, China's leader, and President Trump had been expected to discuss A.I. during their summit in the Chinese capital. If these talks happen, it would be the first time the two countries formally take up the issue during Mr. Trump's second term. The capabilities and usage of A.I. have grown rapidly, and so have concerns that this technology could be weaponized by hackers and terrorists, or spiral out of human control. "The two A.I. superpowers are going to start talking," Mr. Bessent said. "We're going to set up a protocol in terms of, how do we go forward with best practices for A.I. to make sure nonstate actors don't get ahold of these models." Still, Mr. Bessent made clear that the fierce competition between the United States and China for supremacy in A.I. -- which has been a major hurdle to cooperation on safety -- remained front of mind for U.S. policymakers. Officials and experts in both countries have argued that they cannot slow technological development and risk losing out to their rivals. Mr. Bessent said that the United States was willing to cooperate with China on A.I. safety because "the Chinese are substantially behind us" in terms of the technology's development. "I do not think we would be having the same discussions if they were this far ahead of us. So we're going to put in U.S. best practices, U.S. values, on this, and then roll those out to the world," Mr. Bessent said. Experts have suggested that China's A.I. models may be a few months behind the leading U.S. models. Another hurdle to the United States and China working together on A.I. 
safety is that they have generally focused on different potential threats. American experts have generally highlighted existential risks, such as the possibility of artificial general intelligence, or super-intelligence that exceeds that of humans. Chinese researchers and officials have more often highlighted risks related to social stability and information control, such as the possibility of chatbots producing content that challenges China's leadership and policies. Still, researchers in both countries have highlighted some shared risks, such as the possibility of A.I. being used to develop new biological weapons.
[2]
US, China are discussing AI guardrails to safeguard most powerful models, Bessent says
WASHINGTON, May 14 (Reuters) - U.S. and Chinese delegations will discuss artificial intelligence guardrails at their Beijing summit and will set up a protocol for best practices to keep non-state actors from getting the most powerful AI models, Treasury Secretary Scott Bessent said. Bessent told CNBC in a pre-recorded interview on Thursday that it was "of utmost importance" that the U.S. maintain its lead over China in AI, adding that this is why Beijing is interested in discussing guardrails. "What we don't want to do is stifle innovation. So our responsibility is to come up with the highest performance calculus where we can get the most innovation and the highest level of safety," Bessent said. (Reporting by David Lawder; editing by Susan Heavey)
The United States and China will begin formal talks on AI safety, marking the first such discussions during President Trump's second term. Treasury Secretary Scott Bessent announced plans to establish protocols preventing non-state actors from accessing powerful AI models, though fierce competition for AI supremacy remains a central concern for both nations.
The United States and China will start formal discussions on AI safety, Treasury Secretary Scott Bessent announced from Beijing on Thursday. Speaking in an interview with CNBC, Bessent revealed that the two nations plan to establish protocols aimed at keeping powerful AI models away from non-state actors such as hackers and terrorists [1]. These talks represent the first time the two countries will formally address artificial intelligence during President Trump's second term, as concerns grow about the technology's rapid advancement and potential weaponization [1].

"The two A.I. superpowers are going to start talking," Bessent said, emphasizing the need to prevent non-state actors from accessing powerful AI models through established best practices [1]. The discussions were expected to take place during the summit between Xi Jinping and President Trump in the Chinese capital, though specific timing was not disclosed [1].

The push for AI guardrails comes as both nations race for AI supremacy, a competition that has historically hindered cooperation on safety measures. Bessent made clear that the United States views its technological lead as crucial leverage in these negotiations. "The Chinese are substantially behind us" in AI development, he stated, adding that Washington would not be pursuing these discussions if the situation were reversed [1]. Experts suggest China's AI models may lag behind leading U.S. models by just a few months [1].

Bessent told CNBC it was "of utmost importance" that the U.S. maintain its lead over China in AI, noting that this competitive dynamic is precisely why Beijing has shown interest in discussing guardrails [2]. The Treasury Secretary emphasized that U.S. best practices and values would shape the framework, which would then be rolled out globally [1].

A significant challenge in U.S.-China discussions on AI safety stems from fundamentally different threat assessments. American experts typically focus on existential risks, including the possibility of artificial general intelligence surpassing human capabilities and spiraling beyond human control [1]. Chinese researchers and officials, meanwhile, prioritize concerns related to social stability and information control, such as chatbots generating content that challenges China's leadership and policies [1].

Despite these differences, both nations recognize shared threats. Researchers in both countries have highlighted concerns about AI being used to develop new biological weapons, representing common ground for cooperation [1]. Safeguarding the most powerful models from malicious actors addresses security concerns that transcend ideological boundaries.

Bessent emphasized that any protocols established must balance innovation with safety. "What we don't want to do is stifle innovation. So our responsibility is to come up with the highest performance calculus where we can get the most innovation and the highest level of safety," he stated [2]. This concern reflects a broader tension in both countries, where officials and experts argue they cannot afford to slow technological development and risk losing ground to their rivals [1].

The coming discussions will test whether the world's two largest economies can cooperate on AI safety while maintaining their competitive edge. As the capabilities and usage of AI have grown rapidly, so have concerns about the technology falling into the wrong hands or exceeding human control [1]. Whether these talks yield concrete safeguards or remain aspirational will shape the global AI landscape for years to come.

Summarized by Navi