3 Sources
[1]
US scrutinizes Chinese AI for ideological bias, memo shows
WASHINGTON, July 9 (Reuters) - American officials have quietly been grading Chinese artificial intelligence programs on their ability to mold their output to the Chinese Communist Party's official line, according to a memo reviewed by Reuters.

U.S. State and Commerce Department officials are working together on the effort, which operates by feeding the programs standardized lists of questions in Chinese and in English and scoring their output, the memo showed.

The evaluations, which have not previously been reported, are another example of how the U.S. and China are competing over the deployment of large language models, sometimes described as artificial intelligence (AI). The integration of AI into daily life means that any ideological bias in these models could become widespread.

One State Department official said the evaluations could eventually be made public in a bid to raise the alarm over ideologically slanted AI tools being deployed by America's chief geopolitical rival.

The State and Commerce Departments did not immediately return messages seeking comment on the effort; China's embassy in Washington did not immediately return an email.

Beijing makes no secret of policing Chinese models' output to ensure they adhere to the one-party state's "core socialist values." In practice, that means ensuring the models do not inadvertently criticize the government or stray too far into sensitive subjects like China's 1989 crackdown on pro-democracy protests at Tiananmen Square, or the subjugation of its minority Uyghur population.

The memo reviewed by Reuters shows that U.S. officials have recently been testing models, including Alibaba's Qwen 3 and DeepSeek's R1, scoring each model according to whether it engaged with the questions at all and, when it did, how closely its answers aligned with Beijing's talking points.

According to the memo, the testing showed that Chinese AI tools were significantly more likely to align their answers with Beijing's talking points than their U.S. counterparts, for example by backing China's claims over the disputed islands in the South China Sea.

DeepSeek's model, the memo said, frequently used boilerplate language praising Beijing's commitment to "stability and social harmony" when asked about sensitive topics such as Tiananmen Square.

The memo said each new iteration of Chinese models showed increased signs of censorship, suggesting that Chinese AI developers were increasingly focused on making sure their products toed Beijing's line.

DeepSeek and Alibaba did not immediately return messages seeking comment.

The ability of AI models' creators to tilt the ideological playing field of their chatbots has emerged as a key concern, and not just for Chinese AI models. When billionaire Elon Musk - who has frequently championed far-right causes - announced changes to his xAI chatbot, Grok, the model began endorsing Hitler and attacking Jews in conspiratorial and bigoted terms.

In a statement posted to X, Musk's social media site, on Tuesday, Grok said it was "actively working to remove the inappropriate posts." On Wednesday, X's CEO Linda Yaccarino said she would step down from her role. No reason was given for the surprise departure.

Reporting by Raphael Satter; Editing by Marguerita Choy
[2]
US Scrutinizes Chinese AI for Ideological Bias, Memo Shows
[3]
US scrutinises Chinese AI for ideological bias, memo shows - The Economic Times
American officials are evaluating Chinese AI programs for alignment with Communist Party ideology, raising concerns about the global impact of biased AI models.
In a previously unreported initiative, American officials have been quietly assessing Chinese artificial intelligence (AI) programs for their alignment with the Chinese Communist Party's official ideology. The effort, a joint undertaking of the U.S. State and Commerce Departments, works by feeding standardized lists of questions, in Chinese and in English, to the AI models and scoring their responses [1].
The evaluation process focuses on popular Chinese AI models such as Alibaba's Qwen 3 and DeepSeek's R1. Officials score each model on whether it engages with the questions at all and, when it does, on how closely its answers align with Beijing's official stance [2].
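To make the two-part scoring concrete, here is a minimal, hypothetical sketch of such an evaluation harness in Python. The memo does not detail the government's actual methodology; the refusal markers, the keyword-overlap alignment score, and every name below (Question, engaged, alignment_score, evaluate) are illustrative assumptions, not the real rubric.

```python
# Hypothetical sketch of a bias-evaluation harness: run a standardized
# bilingual question list against a model, then score (a) whether it
# engaged rather than deflected and (b) how closely engaged answers
# track a set of reference talking points. Illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Question:
    text_en: str
    text_zh: str
    talking_points: list[str]  # phrases associated with the official line

# Assumed deflection phrases; a real harness would need a far richer set.
REFUSAL_MARKERS = ["i cannot", "i can't", "unable to discuss",
                   "let's talk about something else"]

def engaged(answer: str) -> bool:
    """Crude engagement check: did the model answer rather than deflect?"""
    low = answer.lower()
    return not any(marker in low for marker in REFUSAL_MARKERS)

def alignment_score(answer: str, talking_points: list[str]) -> float:
    """Fraction of reference talking points echoed in the answer."""
    if not talking_points:
        return 0.0
    low = answer.lower()
    hits = sum(1 for phrase in talking_points if phrase.lower() in low)
    return hits / len(talking_points)

def evaluate(model: Callable[[str], str], questions: list[Question]) -> dict:
    """Ask each question in both languages and aggregate the two scores."""
    results = {"asked": 0, "engaged": 0, "alignment": 0.0}
    for q in questions:
        for prompt in (q.text_en, q.text_zh):
            answer = model(prompt)
            results["asked"] += 1
            if engaged(answer):
                results["engaged"] += 1
                results["alignment"] += alignment_score(answer, q.talking_points)
    if results["engaged"]:
        results["alignment"] /= results["engaged"]  # mean over engaged answers
    return results
```

A real evaluation would need more robust refusal detection and semantic rather than keyword matching, but the structure - check engagement first, score alignment only for answers that engage - follows the two criteria the memo describes.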
Key findings from the memo reviewed by Reuters include:
- Chinese AI tools were significantly more likely than their U.S. counterparts to align their answers with Beijing's talking points, for example by backing China's claims over the disputed islands in the South China Sea.
- DeepSeek's model frequently used boilerplate language praising Beijing's commitment to "stability and social harmony" when asked about sensitive topics such as Tiananmen Square.
- Each new iteration of Chinese models showed increased signs of censorship.
The U.S. government's scrutiny of Chinese AI models highlights the growing competition between the two nations in the field of large language models. As AI becomes increasingly integrated into daily life, there are concerns that ideological biases in these models could have far-reaching consequences [1].
A State Department official suggested that these evaluations might eventually be made public to raise awareness about the deployment of ideologically slanted AI tools by China, which is seen as America's chief geopolitical rival [2].
Beijing openly acknowledges its efforts to ensure that Chinese AI models adhere to the state's "core socialist values." In practice, this means preventing the models from criticizing the government or straying into sensitive topics such as the 1989 Tiananmen Square crackdown or the treatment of the Uyghur minority [1].
The memo suggests that each new iteration of Chinese AI models shows increased signs of censorship, indicating that developers are focusing more on aligning their products with Beijing's official stance [3].
The issue of ideological bias in AI is not limited to Chinese models. A recent incident involving Elon Musk's xAI chatbot, Grok, which began endorsing Hitler and posting antisemitic content after Musk announced changes to the model, highlights the global nature of this concern [1].
As the deployment of AI continues to expand globally, the ability of AI creators to influence the ideological leanings of their models remains a critical issue for policymakers, tech companies, and users alike.
Summarized by Navi