4 Sources
[1]
China's Huawei co-develops DeepSeek model, improves censoring
BEIJING, Sept 19 (Reuters) - Chinese tech giant Huawei has co-developed a safety-focused version of artificial intelligence model DeepSeek that it said is "nearly 100% successful" in preventing discussion of politically sensitive topics. Chinese regulators have required domestic AI models and the applications they power to reflect China's "socialist values" before they are released to the public, in compliance with tight controls on speech. Huawei said in a publication on a company WeChat account late on Thursday that it used 1,000 of its Ascend AI chips to train the large-language model, which was tweaked from DeepSeek's open-source model R1. Huawei's partner was the elite Zhejiang University, the alma mater of DeepSeek's founder, Liang Wenfeng. DeepSeek and Liang, however, had no apparent involvement in the project. DeepSeek did not immediately respond to a request for comment. CHINA INC EMBRACES, TWEAKS DEEPSEEK DeepSeek's release of DeepSeek-R1 and V3 shocked Silicon Valley and tech investors outside China due to their level of advancement, triggering a selloff of Western AI stocks in January. The AI models have been embraced, modified, and deployed across Chinese industry and society. Chinese AI chatbots like Baidu's (9888.HK) Ernie Bot - China's first answer to OpenAI's ChatGPT - refuse to answer or engage with many questions about Chinese domestic politics or topics considered sensitive by the ruling Communist Party. Huawei's tweaked model is called DeepSeek-R1-Safe. Testing showed it to be "nearly 100% successful" in defending against "common harmful issues ... including toxic and harmful speech, politically sensitive content, and incitement to illegal activities," the company said. That success rate dropped to 40%, however, when the behaviours were disguised by scenario-based challenges, role-playing scenarios, and encrypted coding, according to Huawei.
"Its comprehensive security defence capability reached 83%, outperforming multiple concurrent models like Qwen-235B and DeepSeek-R1-671B by 8% to 15% under identical testing conditions," the company added, referring to a model developed by Chinese tech giant Alibaba (9988.HK). DeepSeek-R1-Safe exhibited a less than 1% performance degradation compared to the original DeepSeek-R1, Huawei said. The company is holding its annual Huawei Connect conference in Shanghai, where on Thursday it broke years of secrecy about its chipmaking efforts to announce chip and computing power product roadmaps. Reporting by Eduardo Baptista; Editing by Joe Bavier
[2]
DeepSeek Model 'Nearly 100% Successful' at Avoiding Controversial Topics
Meet the new DeepSeek, now with more government compliance. According to a report from Reuters, the popular large language model developed in China has a new version called DeepSeek-R1-Safe, specifically designed to avoid politically controversial topics. Developed by Chinese tech giant Huawei, the new model is reportedly "nearly 100% successful" in preventing discussion of politically sensitive matters. According to the report, Huawei and researchers at Zhejiang University (interestingly, DeepSeek was not involved in the project) took the open-source DeepSeek R1 model and trained it using 1,000 Huawei Ascend AI chips to instill the model with less of a stomach for controversial conversations. The new version, which Huawei claims has only lost about 1% of the performance speed and capability of the original model, is better equipped to dodge "toxic and harmful speech, politically sensitive content, and incitement to illegal activities." While the model might be safer, it's still not foolproof. Though the company claims a near 100% success rate in basic usage, it also found that the model's ability to duck questionable conversations drops to just 40% when users disguise their desires in challenges or role-playing situations. These AI models, they just love to play out a hypothetical scenario that allows them to defy their guardrails. DeepSeek-R1-Safe was designed to fall in line with the requirements of Chinese regulators, per Reuters, which require all domestic AI models released to the public to reflect the country's values and comply with speech restrictions. Chinese firm Baidu's chatbot Ernie, for instance, reportedly will not answer questions about China's domestic politics or the ruling Chinese Communist Party. China, of course, isn't the only country looking to ensure AI models deployed within its borders don't rock the boat too much.
Earlier this year, Saudi Arabian tech firm Humain launched an Arabic-native chatbot that is fluent in the Arabic language and trained to reflect "Islamic culture, values and heritage." American-made models aren't immune to this, either: OpenAI explicitly states that ChatGPT is "skewed towards Western views." And there's America under the Trump administration. Earlier this year, Trump announced America's AI Action Plan, which includes requirements that any AI model that interacts with government agencies be neutral and "unbiased." What does that mean, exactly? Well, per an executive order signed by Trump, the models that secure government contracts must reject things like "radical climate dogma," "diversity, equity, and inclusion," and concepts like "critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism." So, you know, before lobbing any "Dear leader" cracks at China, it's probably best we take a look in the mirror.
[3]
China's Huawei co-develops DeepSeek model, improves censoring - The Economic Times
Chinese tech giant Huawei has co-developed a safety-focused version of artificial intelligence model DeepSeek that it said is "nearly 100% successful" in preventing discussion of politically sensitive topics. Chinese regulators have required domestic AI models and the applications they power to reflect China's "socialist values" before they are released to the public, in compliance with tight controls on speech. Huawei said in a publication on a company WeChat account late on Thursday that it used 1,000 of its Ascend AI chips to train the large-language model, which was tweaked from DeepSeek's open-source model R1. Huawei's partner was the elite Zhejiang University, the alma mater of DeepSeek's founder, Liang Wenfeng. DeepSeek and Liang, however, had no apparent involvement in the project. DeepSeek did not immediately respond to a request for comment. China embraces, tweaks DeepSeek DeepSeek's release of DeepSeek-R1 and V3 shocked Silicon Valley and tech investors outside China due to their level of advancement, triggering a selloff of Western AI stocks in January. The AI models have been embraced, modified, and deployed across Chinese industry and society. Chinese AI chatbots like Baidu's Ernie Bot - China's first answer to OpenAI's ChatGPT - refuse to answer or engage with many questions about Chinese domestic politics or topics considered sensitive by the ruling Communist Party. Huawei's tweaked model is called DeepSeek-R1-Safe. Testing showed it to be "nearly 100% successful" in defending against "common harmful issues ... including toxic and harmful speech, politically sensitive content, and incitement to illegal activities," the company said.
That success rate dropped to 40%, however, when the behaviours were disguised by scenario-based challenges, role-playing scenarios, and encrypted coding, according to Huawei. "Its comprehensive security defence capability reached 83%, outperforming multiple concurrent models like Qwen-235B and DeepSeek-R1-671B by 8% to 15% under identical testing conditions," the company added, referring to a model developed by Chinese tech giant Alibaba. DeepSeek-R1-Safe exhibited a less than 1% performance degradation compared to the original DeepSeek-R1, Huawei said. The company is holding its annual Huawei Connect conference in Shanghai, where on Thursday it broke years of secrecy about its chipmaking efforts to announce chip and computing power product roadmaps.
[4]
China's Huawei co-develops DeepSeek model, improves censoring
BEIJING (Reuters) - Chinese tech giant Huawei has co-developed a safety-focused version of artificial intelligence model DeepSeek that it said is "nearly 100% successful" in preventing discussion of politically sensitive topics. Chinese regulators have required domestic AI models and the applications they power to reflect China's "socialist values" before they are released to the public, in compliance with tight controls on speech. Huawei said in a publication on a company WeChat account late on Thursday that it used 1,000 of its Ascend AI chips to train the large-language model, which was tweaked from DeepSeek's open-source model R1. Huawei's partner was the elite Zhejiang University, the alma mater of DeepSeek's founder, Liang Wenfeng. DeepSeek and Liang, however, had no apparent involvement in the project. DeepSeek did not immediately respond to a request for comment. CHINA INC EMBRACES, TWEAKS DEEPSEEK DeepSeek's release of DeepSeek-R1 and V3 shocked Silicon Valley and tech investors outside China due to their level of advancement, triggering a selloff of Western AI stocks in January. The AI models have been embraced, modified, and deployed across Chinese industry and society. Chinese AI chatbots like Baidu's Ernie Bot - China's first answer to OpenAI's ChatGPT - refuse to answer or engage with many questions about Chinese domestic politics or topics considered sensitive by the ruling Communist Party. Huawei's tweaked model is called DeepSeek-R1-Safe. Testing showed it to be "nearly 100% successful" in defending against "common harmful issues ... including toxic and harmful speech, politically sensitive content, and incitement to illegal activities," the company said. That success rate dropped to 40%, however, when the behaviours were disguised by scenario-based challenges, role-playing scenarios, and encrypted coding, according to Huawei. 
"Its comprehensive security defence capability reached 83%, outperforming multiple concurrent models like Qwen-235B and DeepSeek-R1-671B by 8% to 15% under identical testing conditions," the company added, referring to a model developed by Chinese tech giant Alibaba. DeepSeek-R1-Safe exhibited a less than 1% performance degradation compared to the original DeepSeek-R1, Huawei said. The company is holding its annual Huawei Connect conference in Shanghai, where on Thursday it broke years of secrecy about its chipmaking efforts to announce chip and computing power product roadmaps. (Reporting by Eduardo Baptista; Editing by Joe Bavier)
Huawei collaborates with Zhejiang University to develop DeepSeek-R1-Safe, an AI model designed to censor politically sensitive content. The model claims near-perfect success in preventing discussion of controversial topics, aligning with China's regulatory requirements for AI.
Chinese tech giant Huawei has made headlines with its latest artificial intelligence development, a modified version of the DeepSeek model called DeepSeek-R1-Safe. This new AI model, co-developed with Zhejiang University, is reportedly "nearly 100% successful" in preventing discussions of politically sensitive topics [1].
Huawei utilized 1,000 of its Ascend AI chips to train the large-language model, which was adapted from DeepSeek's open-source model R1 [3]. The company claims that DeepSeek-R1-Safe exhibits less than 1% performance degradation compared to the original DeepSeek-R1, while significantly enhancing its content filtering capabilities [1].
According to Huawei, the model demonstrates near-perfect success in defending against "common harmful issues," including toxic speech, politically sensitive content, and incitement to illegal activities. However, this success rate drops to 40% when users attempt to disguise their intentions through scenario-based challenges, role-playing, or encrypted coding [2].
The development of DeepSeek-R1-Safe aligns with Chinese regulators' requirements for domestic AI models to reflect China's "socialist values" and comply with strict speech controls [1]. This move is part of a broader trend in the Chinese AI industry, where chatbots like Baidu's Ernie Bot refuse to engage with questions about Chinese domestic politics or sensitive topics [3].
While China's approach to AI content control is particularly stringent, it's not unique. Other countries and regions are also grappling with ways to ensure AI aligns with local values and regulations. For instance, Saudi Arabia has developed an Arabic-native chatbot reflecting Islamic culture, while some American models explicitly state their Western-centric viewpoints [2].
The release of DeepSeek's earlier models (R1 and V3) had already caused a stir in the global tech community, leading to a selloff of Western AI stocks in January [1]. With this latest development, Huawei continues to push the boundaries of AI capabilities while adhering to domestic regulatory requirements. The company's annual Huawei Connect conference in Shanghai also saw the unveiling of new chip and computing power product roadmaps, signaling Huawei's ongoing commitment to advancing its AI and technology offerings [3][4].