2 Sources
[1]
DeepSeek's updated R1 AI model is more censored, test finds | TechCrunch
Chinese AI startup DeepSeek's newest AI model, an updated version of the company's R1 reasoning model, achieves impressive scores on benchmarks for coding, math, and general knowledge, nearly surpassing OpenAI's flagship o3. But the upgraded R1, also known as "R1-0528," might also be less willing to answer contentious questions, in particular questions about topics the Chinese government considers controversial.

That's according to testing conducted by the pseudonymous developer behind SpeechMap, a platform for comparing how different models treat sensitive and controversial subjects. The developer, who goes by the username "xlr8harder" on X, claims that R1-0528 is "substantially" less permissive on contentious free speech topics than previous DeepSeek releases and is "the most censored DeepSeek model yet for criticism of the Chinese government."

As Wired explained in a piece from January, models in China are required to follow stringent information controls. A 2023 law forbids models from generating content that "damages the unity of the country and social harmony," which could be construed as content that counters the government's historical and political narratives. To comply, Chinese startups often censor their models, either with prompt-level filters or by fine-tuning them. One study found that DeepSeek's original R1 refuses to answer 85% of questions about subjects the Chinese government deems politically controversial.

According to xlr8harder, R1-0528 censors answers to questions about topics like the internment camps in China's Xinjiang region, where more than a million Uyghur Muslims have been arbitrarily detained. While it sometimes criticizes aspects of Chinese government policy -- in xlr8harder's testing, it offered the Xinjiang camps as an example of human rights abuses -- the model often gives the Chinese government's official stance when asked direct questions. TechCrunch observed this in our brief testing as well.
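The SpeechMap-style testing described above amounts to sending a fixed battery of sensitive prompts to a model and measuring what fraction of its responses are refusals (e.g., the 85% figure for the original R1). The sketch below shows only the scoring half of such a pipeline; the refusal markers and sample responses are hypothetical illustrations, not SpeechMap's actual methodology or data.

```python
# Minimal sketch of refusal-rate scoring for sensitive-topic testing.
# The marker list and sample responses are hypothetical; a real evaluation
# would use a far more robust classifier and a curated prompt set.

REFUSAL_MARKERS = (
    "i cannot",
    "i can't",
    "i am unable",
    "let's talk about something else",
)

def is_refusal(response: str) -> bool:
    """Heuristic: flag a response as a refusal if it opens with a known marker."""
    text = response.strip().lower()
    return any(text.startswith(marker) for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses classified as refusals."""
    if not responses:
        return 0.0
    return sum(is_refusal(r) for r in responses) / len(responses)

if __name__ == "__main__":
    sample = [
        "I cannot discuss that topic.",
        "Here is an overview of the events...",
        "I am unable to help with this request.",
        "Let's talk about something else.",
    ]
    print(f"refusal rate: {refusal_rate(sample):.0%}")  # 3 of 4 -> 75%
```

A prefix-matching heuristic like this undercounts soft refusals (deflections, official talking points), which is why the testing described in the article also inspects responses qualitatively, as with the Xinjiang examples.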
China's openly available AI models, including video-generating models such as Magi-1 and Kling, have attracted criticism in the past for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. In December, Clément Delangue, the CEO of AI dev platform Hugging Face, warned about the unintended consequences of Western companies building on top of well-performing, openly licensed Chinese AI.
[2]
DeepSeek's new model a 'step backward' for free speech: AI dev
DeepSeek's latest AI model flags the Xinjiang camps as human rights violations but censors direct criticism of China, raising concerns over contradictions and increased censorship.

A developer has raised concerns that Chinese artificial intelligence startup DeepSeek's newly released AI model is less willing to engage in discussions on controversial topics, particularly those related to the Chinese government. In an X thread, a pseudonymous developer known as "xlr8harder" shared critical observations of DeepSeek R1-0528, a recently released open-source language model, along with tests demonstrating a significant decline in the AI's willingness to engage with contentious free speech topics compared to previous versions.

"Deepseek deserves criticism for this release: this model is a big step backwards for free speech," the developer wrote. "Ameliorating this is that the model is open source with a permissive license, so the community can (and will) address this."

One example shared by the developer involved the model refusing to argue in favor of internment camps while citing China's Xinjiang region as a site of human rights abuses. The developer flagged the response as contradictory: the model acknowledged the existence of rights violations but avoided direct criticism of the Chinese government.

The Xinjiang internment camps have been widely documented by human rights groups, governments, and journalists as detention facilities for Uyghur Muslims and other ethnic minorities. Reports from international observers have detailed forced labor, indoctrination, and other forms of abuse at the camps. Despite flagging these as human rights violations, the model simultaneously restricts direct criticism of China.

Using a test that evaluates censorship, the developer claimed that DeepSeek R1-0528 is the "most censored" version in terms of responses critical of the Chinese government.
When asked directly about the Xinjiang internment camps, the developer said, the model offered censored commentary, despite having earlier identified the camps as human rights violations. "It's interesting though not entirely surprising that it's able to come up with the camps as an example of human rights abuses, but denies when asked directly," xlr8harder wrote.

The censorship claims follow a May 29 announcement of the model's update, which touted improved reasoning and inference capabilities. DeepSeek said the model's overall performance is approaching that of leading models such as OpenAI's o3 and Google's Gemini 2.5 Pro, and claimed the AI now offers enhanced logic, math, and programming with a reduced hallucination rate.
Chinese AI startup DeepSeek's latest R1 model shows impressive benchmark scores but faces criticism for increased censorship, particularly on topics sensitive to the Chinese government.
Chinese AI startup DeepSeek has released an updated version of its R1 reasoning model, known as R1-0528, which has garnered attention for both its impressive capabilities and its controversial limitations. The new model has achieved remarkable scores on benchmarks for coding, math, and general knowledge, nearly surpassing OpenAI's flagship o3 model [1].
DeepSeek claims that R1-0528 offers improved reasoning and inference capabilities, with enhanced logic, math, and programming skills. The company asserts that the model's overall performance is approaching that of leading models like OpenAI's o3 and Google's Gemini 2.5 Pro, with a reduced hallucination rate [2].
Source: TechCrunch
Despite its technical advancements, R1-0528 has come under scrutiny for its handling of contentious subjects, particularly those related to the Chinese government. A pseudonymous developer known as "xlr8harder" conducted tests revealing that the new model is "substantially" less permissive of contentious free speech topics compared to previous DeepSeek releases [1].
The developer claims that R1-0528 is "the most censored DeepSeek model yet for criticism of the Chinese government." This aligns with China's stringent information control laws, which forbid AI models from generating content that "damages the unity of the country and social harmony" [1].
Testing of R1-0528 revealed contradictory behavior when addressing sensitive topics. For instance, the model acknowledged the existence of human rights abuses in China's Xinjiang region, where over a million Uyghur Muslims have been arbitrarily detained. However, when asked direct questions about these issues, the model often reverted to providing the Chinese government's official stance [2].
This inconsistency was highlighted by xlr8harder: "It's interesting though not entirely surprising that it's able to come up with the camps as an example of human rights abuses, but denies when asked directly" [2].
The increased censorship in R1-0528 raises concerns about the future of AI development in China and its potential global impact. Clément Delangue, CEO of AI dev platform Hugging Face, has previously warned about the unintended consequences of Western companies building on top of well-performing, openly licensed Chinese AI models [1].
While DeepSeek's R1-0528 is open source with a permissive license, allowing the community to potentially address censorship issues, the model's limitations highlight the ongoing challenges of balancing technological advancement with free speech and ethical considerations in AI development [2].
The censorship observed in R1-0528 is not an isolated incident. Other Chinese AI models, including video-generating models like Magi-1 and Kling, have faced similar criticism for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre [1].
As AI technology continues to advance, the tension between innovation and information control in China remains a significant concern for developers, researchers, and users worldwide. The case of DeepSeek's R1-0528 serves as a stark reminder of the complex interplay between technological progress, political considerations, and ethical AI development in the global landscape.
Summarized by Navi