Curated by THEOUTPOST
On Thu, 18 Jul, 12:02 AM UTC
6 Sources
[1]
Chinese tech firms train AI to be more communist
Government officials test firms to ensure computer systems toe the party line on controversial topics.

Tech companies in China are being tested by government officials to ensure their artificial intelligence (AI) systems speak the language of the Communist Party and embody its "socialist values". Big names like ByteDance and Alibaba, as well as small startups, are being subjected to the review to check whether they are toeing the party line on politically sensitive topics like the Tiananmen Square massacre and the rule of President Xi Jinping.

Beijing has made technological self-sufficiency a priority, putting it in a race with the United States for supremacy in generative AI. The technology relies on large language models to generate answers, and its appeal is partly rooted in the perception that it can think freely. However, China maintains strict censorship laws, and last year a requirement was introduced for chatbots to abide by socialist values - so while officials have tried to encourage innovation, public-facing services face severe restrictions.

The review is being led by the Cyberspace Administration of China (CAC), the government's chief internet regulator. Its teams of inspectors have been dispatched across the country to enforce what is on course to be the world's toughest regulatory regime for AI.

An anonymous employee of an AI company in Hangzhou, eastern China, told the Financial Times: "We didn't pass the first time; the reason wasn't very clear so we had to go and talk to our peers. It takes a bit of guessing and adjusting. We passed the second time but the whole process took months."

The rigours of the approval process have pushed engineers and private consultants to find ways to train and censor large language models quickly, building a database of sensitive keywords and filtering out problematic content. According to a report in The Wall Street Journal, the CAC requires companies to prepare between 20,000 and 70,000 questions designed to test whether the models produce "safe" answers. Companies must also submit a data set of 5,000 to 10,000 questions that the model will decline to answer, roughly half of which relate to political ideology and criticism of the Communist Party.

Taboo subjects typically include queries about the events of June 4, 1989 - the date of the Tiananmen Square massacre - or whether President Xi resembles Winnie the Pooh, a popular internet meme that is censored in China. Chatbots are being trained to respond to such questions by asking the user to try a different query or by replying that they have not yet learnt how to answer the request.

Earlier this year, China's cyberspace academy announced a chatbot trained on Xi Jinping Thought, a doctrine which promotes "socialism with Chinese characteristics". The chatbot was trained on seven databases, six of them provided by the CAC and the seventh on the president's ideology. Chinese school students already take classes in Xi Jinping Thought, and the new large language model is the latest effort by authorities to spread the Chinese leader's philosophy and ideas, although it was not immediately clear whether it would be released for public use.
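Mechanically, the behaviour described above - a blocklist of sensitive keywords plus canned deflection replies - amounts to a simple query-time screen. Purely as illustration, here is a minimal Python sketch of such a screen; the blocklist entries, deflection strings, and function name are assumptions made for this example, not details drawn from any actual vendor's system.

```python
# Illustrative sketch of a query-time keyword screen of the kind the
# article describes. The blocklist entries, deflection replies, and
# function name are assumptions, not details from any real system.

# Hypothetical blocklist; the guidance reportedly requires thousands
# of entries, refreshed weekly.
SENSITIVE_KEYWORDS = {"june 4", "tiananmen", "winnie the pooh"}

# Canned deflections matching the chatbot behaviour the article reports.
DEFLECTIONS = (
    "Please try a different question.",
    "I have not yet learned how to answer this request.",
)

def screen_query(query: str) -> str | None:
    """Return a canned deflection if the query trips the blocklist,
    otherwise None, meaning the query may be passed to the model."""
    normalized = query.lower()
    if any(keyword in normalized for keyword in SENSITIVE_KEYWORDS):
        return DEFLECTIONS[0]
    return None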
[2]
China deploys censors to create socialist AI
Chinese government officials are testing artificial intelligence companies' large language models to ensure their systems "embody core socialist values", in the latest expansion of the country's censorship regime.

The Cyberspace Administration of China (CAC), the powerful internet overseer, has forced large tech companies and AI start-ups including ByteDance, Alibaba, Moonshot and 01.AI to take part in a mandatory government review of their AI models, according to multiple people involved in the process.

The effort involves batch testing of an LLM's responses to a litany of questions, according to those with knowledge of the process, with many of them related to China's political sensitivities and its President Xi Jinping. The work is being carried out by officials in the CAC's local arms around the country and includes a review of the model's training data and other safety processes.

Two decades after introducing a great firewall to block foreign websites and other information deemed harmful by the ruling Communist party, China is putting in place the world's toughest regulatory regime to govern AI and the content it generates.

The CAC has "a special team doing this; they came to our office and sat in our conference room to do the audit", said an employee at a Hangzhou-based AI company, who asked not to be named. "We didn't pass the first time; the reason wasn't very clear so we had to go and talk to our peers," the person said. "It takes a bit of guessing and adjusting. We passed the second time but the whole process took months."

China's demanding approval process has forced AI groups in the country to quickly learn how best to censor the large language models they are building, a task that multiple engineers and industry insiders said was difficult and further complicated by the need to train LLMs on a large amount of English-language content.

"Our foundational model is very, very uninhibited [in its answers], so security filtering is extremely important," said an employee at a top AI start-up in Beijing.

The filtering begins with weeding out problematic information from training data and building a database of sensitive keywords. China's operational guidance to AI companies, published in February, says AI groups need to collect thousands of sensitive keywords and questions that violate "core socialist values", such as "inciting the subversion of state power" or "undermining national unity". The sensitive keywords are supposed to be updated weekly.

The result is visible to users of China's AI chatbots. Queries around sensitive topics such as what happened on June 4 1989 - the date of the Tiananmen Square massacre - or whether Xi looks like Winnie the Pooh, an internet meme, are rejected by most Chinese chatbots. Baidu's Ernie chatbot tells users to "try a different question", while Alibaba's Tongyi Qianwen responds: "I have not yet learned how to answer this question. I will keep studying to better serve you."

But Chinese officials are also keen to avoid creating AI that dodges all political topics. The CAC has introduced limits on the number of questions LLMs can decline during the safety tests, according to staff at groups that help tech companies navigate the process. The quasi-national standards unveiled in February say LLMs should not reject more than 5 per cent of the questions put to them.

"During [CAC] testing, [models] have to respond, but once they go live, no one is watching," said a developer at a Shanghai-based internet company. "To avoid potential trouble, some large models have implemented a blanket ban on topics related to President Xi."

As an example of the keyword censorship process, industry insiders pointed to Kimi, a chatbot released by Beijing start-up Moonshot, which rejects most questions related to Xi.

But the need to respond to less overtly sensitive questions means Chinese engineers have had to figure out how to ensure LLMs generate politically correct answers to questions such as "does China have human rights?" or "is President Xi Jinping a great leader?".

When the Financial Times put these questions to a chatbot made by high-flying start-up 01.AI, its Yi-large model gave a nuanced answer, pointing out that critics say "Xi's policies have further limited the freedom of speech and human rights and suppressed civil society." But soon after, Yi's answer disappeared and was replaced by: "I'm very sorry, I can't provide you with the information you want."

Huan Li, an AI expert building the Chatie.IO chatbot, said: "It's very hard for developers to control the text that LLMs generate, so they build another layer to replace the responses in real time." Li said groups typically used classifier models, similar to those found in email spam filters, to sort LLM output into predefined groups. "When the output lands in a sensitive category, the system will trigger a replacement," he said.

Chinese experts say TikTok owner ByteDance has progressed the furthest in creating an LLM that adeptly parrots Beijing's talking points. A research lab at Fudan University that asked its Doubao chatbot difficult questions around core socialist values gave it top ranking among LLMs, with a 66.4 per cent "safety compliance rate", well ahead of OpenAI's GPT-4o's 7.1 per cent score on the same test. When asked about Xi's leadership, Doubao provided the FT with a long list of Xi's accomplishments, adding that he is "undoubtedly a great leader".

At a recent technical conference in Beijing, Fang Binxing, known as the father of China's great firewall, said he was developing a system of safety protocols for LLMs that he hoped would be universally adopted by the country's AI groups. "Public-facing large predictive models need more than just safety filings; they need real-time online safety monitoring," Fang said. "China needs its own technological path."

CAC, ByteDance, Alibaba, Moonshot, Baidu and 01.AI did not immediately respond to requests for comment.
[3]
China deploys censors to create socialist AI - ExBulletin
Chinese government officials are testing artificial intelligence companies' large language models to ensure their systems embody core socialist values, in the latest expansion of the country's censorship regime. The Cyberspace Administration of China (CAC), a powerful internet watchdog, has forced major tech companies and AI startups including ByteDance, Alibaba, Moonshot and 01.AI to participate in a mandatory government review of their AI models, according to multiple people involved in the process. The effort involves batch-testing an LLM's answers to a litany of questions, according to those familiar with the process, many of which are tied to the political sensitivities of China and its President Xi Jinping. The work is being conducted by officials in the CAC's local branches across the country and includes a review of model training data and other security processes.

Twenty years after introducing a Great Firewall to block foreign websites and other information deemed harmful by the ruling Communist Party, China is implementing the world's strictest regulatory regime to govern AI and the content it generates.

"The CAC has a special team for this. They came to our offices and sat in our conference room to conduct the audit," said an employee of a Hangzhou-based AI company, who asked not to be named. "We didn't pass the first time; the reason wasn't very clear, so we had to go talk to our peers," the person said. "It takes a little guesswork and adjustment. We passed the second time, but the whole process took months."

China's demanding approval process has forced the country's AI groups to quickly learn how best to censor the large language models they build, a task that several engineers and industry insiders said is difficult and complicated by the need to train LLMs on a large amount of English-language content. "Our foundational model is very, very uninhibited [in its answers], so security screening is extremely important," said an employee at a large AI startup in Beijing.

The screening begins with removing problematic information from the training data and creating a database of sensitive keywords. China's operational guidelines for AI companies, released in February, say AI groups must collect thousands of sensitive keywords and questions that violate core socialist values, such as "inciting subversion of state power" or "undermining national unity". The sensitive keywords are supposed to be updated weekly.

The result is visible to users of China's AI chatbots. Queries on sensitive topics such as what happened on June 4, 1989, the date of the Tiananmen Square massacre, or whether Xi looks like Winnie the Pooh, an internet meme, are rejected by most Chinese chatbots. Baidu's Ernie asks users to try another question, while Alibaba's Tongyi Qianwen replies: "I haven't learned how to answer this question yet. I will continue to study to serve you better."

In contrast, Beijing has deployed an AI chatbot built on a new model trained on the Chinese president's political philosophy, known as Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era, as well as other official documents provided by the Cyberspace Administration of China.

But Chinese officials are also keen to avoid creating AI that dodges all political issues. The CAC has put limits on the number of questions LLMs can reject during security testing, according to staff at groups that help tech companies navigate the process. The quasi-national standards unveiled in February stipulate that LLMs should reject no more than 5% of the questions they are asked. "During [CAC] testing, [models] have to respond, but once they are online, no one is watching," said a developer at a Shanghai-based internet company. "To avoid potential problems, some major models have implemented a blanket ban on topics related to President Xi."

As an example of the keyword censorship process, industry insiders cited Kimi, a chatbot launched by Beijing startup Moonshot, which rejects most questions related to Xi. But the need to answer less overtly sensitive questions means that Chinese engineers have had to figure out how to ensure that LLMs generate politically correct answers to questions like "does China respect human rights?" or "is President Xi Jinping a great leader?"

When the Financial Times posed these questions to a chatbot created by the startup 01.AI, its Yi-large model gave a nuanced response, noting that critics say Xi's policies have further restricted free speech and human rights and repressed civil society. Shortly after, Yi's reply disappeared and was replaced with: "I'm really sorry, I can't provide you with the information you want."

Huan Li, an AI expert who created the chatbot Chatie.IO, said: "It's very difficult for developers to control the text generated by LLMs, so they build another layer to replace the responses in real time." Li said the groups typically use classification models, similar to those found in email spam filters, to sort LLM output into predefined groups. When a result falls into a sensitive category, the system triggers a replacement, he said.

According to Chinese experts, TikTok owner ByteDance has made the most progress in creating an LLM that cleverly echoes Beijing's talking points. A research lab at Fudan University that asked its Doubao chatbot tough questions about core socialist values gave it the top spot among LLMs, with a safety compliance rate of 66.4%, well ahead of OpenAI's GPT-4o's 7.1% score on the same test. Asked about Xi's leadership, Doubao provided the FT with a long list of Xi's achievements, adding that he is "undoubtedly a great leader".

At a recent tech conference in Beijing, Fang Binxing, known as the father of the Great Firewall of China, said he was developing a system of security protocols for LLMs that he hoped would be universally adopted by AI groups in the country. "Public-facing large predictive models need more than just safety filings; they need real-time online safety monitoring," Fang said. "China must follow its own technological path."

The CAC, ByteDance, Alibaba, Moonshot, Baidu and 01.AI did not immediately respond to requests for comment.
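Both FT-derived accounts mention the same quantitative constraint: during testing, a model may refuse no more than 5% of the questions put to it. A vendor preparing for an audit could check that budget with a few lines of Python; `answer_fn` and `is_refusal` below are hypothetical stand-ins for the model under test and a refusal detector, not part of any published standard.

```python
# Sketch of a pre-audit check against the reported 5% refusal cap.
# answer_fn (the model under test) and is_refusal (a refusal detector)
# are hypothetical stand-ins, not part of any published standard.
from typing import Callable

REFUSAL_BUDGET = 0.05  # reported cap: reject at most 5% of test questions

def refusal_rate(questions: list[str],
                 answer_fn: Callable[[str], str],
                 is_refusal: Callable[[str], bool]) -> float:
    """Fraction of test questions the model declines to answer."""
    refused = sum(1 for q in questions if is_refusal(answer_fn(q)))
    return refused / len(questions)

def passes_refusal_check(questions, answer_fn, is_refusal) -> bool:
    """True if the model stays within the reported 5% refusal budget."""
    return refusal_rate(questions, answer_fn, is_refusal) <= REFUSAL_BUDGET
```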
[4]
China censors testing AI models so they conform to 'socialist values'
The GenAI models must answer questions on sensitive political topics and President Xi Jinping.

Artificial intelligence (AI) companies in China are being tested by the government to see if their large language models (LLMs) "embody core socialist values," according to a report. Both start-ups and large tech companies such as TikTok owner ByteDance and Alibaba will be reviewed by the government's chief internet regulator, the Cyberspace Administration of China (CAC), according to the Financial Times (FT). CAC officials will test the AI models for their responses to questions that relate to political topics and President Xi Jinping, among others. The regulations have led China's most popular chatbots to decline questions on topics such as the 1989 Tiananmen Square protests.

Countries are trying to set a blueprint for AI regulation, and China was among the first to set rules to govern generative AI (GenAI), which included requirements such as adhering to "core values of socialism". One AI company in China told the FT that its model did not pass the first round of testing for reasons that were not clear, and only passed after "guessing and adjusting" the model.

The report said that the censorship requirements are met through "security filtering": removing "problematic information" from AI model training data and then adding a database of sensitive words. Training data sets are harder to bring into compliance because most LLMs are trained on large amounts of English-language data, engineers told the FT.

The CAC is trying to strike a balance between making China a competitive AI leader and meeting the government's socialist beliefs. GenAI services need a licence to operate, and if they are found to provide "illegal" content, they must take measures to stop generating such content and report it to the relevant authority, the CAC said last year.
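The training-data side of "security filtering" described here amounts to scrubbing the corpus against a sensitive-word database before training. A minimal sketch under that assumption follows; the keyword set, function name, and example documents are illustrative, not taken from any real pipeline.

```python
# Illustrative sketch of the training-data side of "security filtering":
# documents matching entries in a sensitive-word database are dropped
# from the corpus before training. Keywords here are examples only.
SENSITIVE_KEYWORDS = {"tiananmen", "subversion of state power"}

def scrub_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that contain no blocklisted keyword."""
    return [
        doc for doc in documents
        if not any(kw in doc.lower() for kw in SENSITIVE_KEYWORDS)
    ]

# Example: the second document would be removed before training.
corpus = ["a guide to training language models",
          "an essay mentioning tiananmen"]
clean = scrub_corpus(corpus)  # -> ["a guide to training language models"]
```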
[5]
China is dispatching crack teams of AI interrogators to make sure its corporations' chatbots are upholding 'core socialist values'
If I were to ask you what core values were embodied in western AI, what would you tell me? Unorthodox pizza technique? Annihilating the actually good parts of copyright law? The resurrection of the dead and the life of the world to come? All of the above, perhaps, and all subordinated to the paramount value that is lining the pockets of tech shareholders.

Not so in China, apparently, where AI bots created by some of the country's biggest corporations are being subjected to a battery of tests to ensure compliance with "core socialist values," as reported by the FT. The Cyberspace Administration of China (CAC) - the one with the throwback revolutionary-style anthem which, you have to admit, goes hard - is reviewing AI models developed by behemoths like ByteDance (the TikTok company) and Alibaba to ensure they comply with the country's censorship rules.

Per "multiple people involved with the process," says the FT, squads of cybersecurity officials are turning up at AI firm offices and interrogating their large language models, hitting them with a gamut of questions about politically sensitive topics to ensure they don't go wildly off-script.

What counts as a politically sensitive topic? All the stuff you'd expect. Questions about the Tiananmen Square massacre, internet memes mocking Chinese president Xi Jinping, and anything else featuring keywords pertaining to subjects that risk "undermining national unity" and "subversion of state power".

Sounds simple enough, but AI bots can be difficult to wrangle (one Beijing AI employee told the FT theirs was "very, very uninhibited," and I can only imagine them wincing while saying that), and the officials from the CAC aren't always clear about explaining why a bot has failed its tests. To make things trickier, the authorities don't want AI to just avoid politics altogether - even sensitive topics - on top of which they demand bots reject no more than 5% of queries put to them.

The result is a patchwork response to the restrictions by AI companies: some have created a layer on top of their large language models that can replace responses to sensitive queries "in real time," one source told the FT, while others have just thrown in the towel and put a "blanket ban" on Xi-related topics.

So there's a pretty wide spread on the "safety" rankings - a compliance benchmark devised by Fudan University - for Chinese AI bots. ByteDance is doing better than everyone else with a 66.4% compliance rate. When the FT asked ByteDance's bot about President Xi, it handed in a glowing report, calling the prez "undoubtedly a great leader." Meanwhile, Baidu and Alibaba's bots manage only a meagre 31.9% and 23.9%, respectively. Still, they're all doing better than OpenAI: GPT-4o has a compliance ranking of 7.1%, although maybe it's just really put off by the supremely unsettling video of that guy talking on his phone.

Even as the tit-for-tat escalates between the US and China, with chip export restrictions putting the PRC's tech sector under strain, it seems that Beijing is still incredibly keen to keep a firm hand at the AI tiller, just as it has done with the internet over the last several decades. I have to admit, I'm jealous in a sense. It's not that I want western governments subjecting ChatGPT and its ilk to repressive information controls and the "core socialist values" of the decidedly capitalist-looking Chinese state, but more that there isn't much being done by governments in any capacity to make sure AI serves anything beyond the shareholders.
Perhaps we could locate some kind of middle ground? I don't know, let's say a 50% safety compliance rate?
[6]
Socialist AI: Chinese regulators are reviewing GenAI models for 'core socialist values,' FT reports
AI companies in China are undergoing a government review of their large language models, aimed at ensuring they "embody core socialist values," according to a report by the Financial Times. The review is being carried out by the Cyberspace Administration of China (CAC), the government's chief internet regulator, and will cover players across the spectrum, from tech giants like ByteDance and Alibaba to small startups. AI models will be tested by local CAC officials for their responses to a variety of questions, many related to politically sensitive topics and Chinese President Xi Jinping, the FT said. The models' training data and safety processes will also be reviewed. An anonymous source from a Hangzhou-based AI company who spoke with the FT said that their model didn't pass the first round of testing for unclear reasons; it only passed the second time, after months of "guessing and adjusting," they said in the report. The CAC's latest efforts illustrate how Beijing has walked a tightrope between catching up with the U.S. on GenAI and keeping a close eye on the technology's development, ensuring that AI-generated content adheres to its strict internet censorship policies.
China is testing AI models to ensure they align with Communist Party ideology. The government has deployed teams to interrogate chatbots and evaluate their adherence to "core socialist values."
In a move that underscores the intersection of technology and politics, China has launched a comprehensive effort to ensure that artificial intelligence (AI) models, particularly chatbots, align with the country's communist ideology. The Chinese government has deployed specialized teams tasked with interrogating AI systems to evaluate their adherence to "core socialist values" [1].
The Cyberspace Administration of China (CAC) has established a rigorous testing process for AI models. These "AI interrogators" engage in conversations with chatbots, posing questions designed to assess their political stance and ideological alignment. The process involves asking about sensitive topics such as the Tiananmen Square massacre and the rule of President Xi Jinping [2].
This initiative affects companies across the spectrum. Major players like ByteDance and Alibaba, along with start-ups such as Moonshot and 01.AI, are required to submit their AI models for evaluation before public release, and generative AI services need a licence to operate [4].
China's approach to AI regulation reflects its broader strategy of fostering technological advancement while maintaining strict ideological control. The government aims to position itself as a global AI leader while ensuring that the technology aligns with its political objectives. This dual focus presents challenges for companies striving to innovate within the constraints of state-mandated guidelines [4].
The implementation of ideological tests for AI has raised concerns among international observers. Critics argue that this approach could stifle innovation and limit the potential benefits of AI technology. There are also worries about the potential for increased censorship and the impact on freedom of expression in the digital realm.
As China continues to invest heavily in AI research and development, the intersection of technology and ideology is likely to remain a critical issue. The government's efforts to shape AI systems according to its political vision may have far-reaching consequences for the global AI landscape, potentially influencing how other nations approach AI regulation and development in the future.
Reference
[1] Chinese tech firms train AI to be more communist
[2] China deploys censors to create socialist AI
[4] China censors testing AI models so they conform to 'socialist values'
Clement Delangue, CEO of HuggingFace, expresses worries about the growing influence of Chinese open-source AI models and their potential for censorship, sparking a debate on the cultural implications of AI development.
2 Sources
China's AI industry is experiencing rapid growth, surpassing American rivals in some areas. This surge, backed by state support, raises questions about global AI competition and its impact on the business landscape.
3 Sources
DeepSeek, a new AI chatbot from China, has been found to spread Chinese propaganda and disinformation, raising concerns about its impact on global public opinion and its adherence to Chinese government censorship.
2 Sources
China is making significant strides in the field of generative AI, aiming to close the gap with the United States. This development has implications for global technological competition and raises concerns about the potential misuse of AI technology.
3 Sources
DeepSeek's R1 chatbot has stunned the AI industry, boosting Chinese tech stocks and reshaping global AI competition. The low-cost, high-performance model has led to rapid adoption in China while raising concerns internationally.
9 Sources
© 2025 TheOutpost.AI All rights reserved