3 Sources
[1]
China's open AI models are in a dead heat with the West - here's what happens next
Much of the world may adopt the freely available Chinese technology.

The US artificial intelligence startup OpenAI began with a mission of transparency in AI, a mission it abandoned in 2022 as the company began to withhold details of its technology. In the breach, Chinese companies and institutions have taken the lead.

"Leadership in AI now depends not only on proprietary systems but on the reach, adoption, and normative influence of open-weight models worldwide," wrote lead author Caroline Meinhardt, a policy research manager at Stanford University's Human-Centered AI institute (HAI), in a report released last week, "Beyond DeepSeek: China's Diverse Open-Weight AI Ecosystem and its Policy Implications."

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

"Today, Chinese-made open-weight models are unavoidable in the global competitive AI landscape," said Meinhardt and collaborators. The report shows that Chinese large language models (LLMs), such as Alibaba's Qwen family, are in a statistical dead heat with the Claude family from Anthropic, another US startup, and within spitting distance of the best models from OpenAI and Google.

Looking more broadly, the growing prowess of Qwen, DeepSeek, and other Chinese models is fueling a "global diffusion" movement, wrote the HAI scholars: countries around the world, especially developing nations, are likely to take up Chinese models as an inexpensive alternative to building their own AI from scratch.

The acceleration comes as the prior leader in open-source AI, Meta Platforms, has slipped in the AI rankings and now appears to be moving toward the closed-source approach of OpenAI, Google, and Anthropic. As a result, "The widespread global adoption of Chinese open-weight models may reshape global technology access and reliance patterns, and impact AI governance, safety, and competition," according to HAI.

The shot heard around the world early this year, DeepSeek AI's R1 large language model and its strikingly low development cost, has since been followed by a growing technology push from Alibaba and startups including Moonshot AI, creators of Kimi K2, and China's Z.ai, creators of GLM, noted Meinhardt and team.

China's AI labs have labored under a US export ban that restricts the country's access to the most cutting-edge US technology, such as Nvidia's best GPU chips. That constraint has imposed a discipline that has increased efficiency among Chinese labs, which is now translating into solid technological progress.

"Chinese open-weight models now perform at near-state-of-the-art levels across major benchmarks and leaderboards, spanning general reasoning, coding, and tool use," the authors wrote, citing data from the popular LMArena site. And the top 22 Chinese open models are all better than OpenAI's own "open-weight" model, GPT-oss, they wrote.
Although benchmarks and ratings have a number of issues, such as potential "gaming" of the scores, the authors note that other indices, such as the Epoch Capabilities Index and the Artificial Analysis Intelligence Index, "show Chinese models catching up with their US and other international counterparts."

There's another measure by which Qwen and the rest are gaining: their uploads to the Hugging Face model hosting platform. "In September 2025, Chinese fine-tuned or derivative models made up 63% of all new fine-tuned or derivative models released on Hugging Face," the authors wrote. "Combined with anecdotal stories about adoption, these data points suggest a wide variety of contexts and geographies where Chinese models have been adopted."

Also in September, "Alibaba's Qwen model family surpassed [Meta's] Llama to become the most downloaded LLM family on Hugging Face." By those measures, "Chinese open models now appear to be pulling ahead of their US counterparts when it comes to their downstream reach," they wrote.

Not only increasing technical proficiency but also greater "openness" is fueling China's rise. What constitutes an "open" AI model can vary. Traditionally, Meta and others offered only the "weights" of their trained AI models, such as Meta's Llama family; they did not disclose or post the terabytes of training data they used. Such models are deemed "open-weight," but not truly open-source. Data availability matters because it enables developers to employ AI models effectively and increases the trustworthiness of their output.

While data disclosure is still relatively rare, noted HAI, Chinese firms, after initial reluctance, are offering increasingly permissive licenses for their open-weight models. "Qwen3 and DeepSeek R1 are both more capable and were released with more permissive licenses (Apache 2.0 and MIT License), allowing broad use, modification, and redistribution," they wrote. They noted that the CEO of Chinese search engine Baidu, which produces the Ernie family of models, was once "among the strongest voices in China to laud the advantages of proprietary models," but "made a U-turn in June 2025" by releasing the weights.

As a result of their technical proficiency and greater openness, Chinese models are increasingly becoming a way for developers around the world to access free code and create efficient, tunable models for various purposes. "Distillation" refers to the process of taking an existing AI model and using it to train a smaller, more efficient model. A developer effectively leverages the large training budget spent by Alibaba or another prominent developer by transferring the larger model's capabilities into the smaller one. That distillation is now leading to "diffusion" of Chinese AI, the authors wrote.

"The wide availability of high-performing Chinese AI models opens new pathways for organizations and individuals in less computationally resourced parts of the world to access advanced AI," wrote Meinhardt and team, "thereby shaping global AI diffusion and cross-border technological reliance patterns."
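To make the distillation idea concrete, here is a minimal sketch of the standard technique in PyTorch: a large, frozen "teacher" produces softened output distributions, and a smaller "student" is trained to imitate them. This is a generic illustration, not code from the HAI report; the model objects, batch format, temperature, and optimizer are all assumed for the example.

```python
# Minimal knowledge-distillation sketch (illustrative assumptions:
# Hugging Face-style models whose outputs expose `.logits`, and a
# `batch` dict of input tensors shared by teacher and student).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

def distill_step(student, teacher, batch, optimizer):
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits   # frozen large model
    student_logits = student(**batch).logits       # small model being trained
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice this soft-label loss is typically combined with the ordinary next-token loss on training data, but the economics are as the report describes: the expensive model is queried, and the cheap model absorbs its behavior.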
The authors predict the diffusion trend will sustain itself because, for many adopters, the economic benefits outweigh the continued benchmark achievements of OpenAI and the other closed frontier AI models. "With model performance converging at the frontier, AI adopters with limited resources to build advanced models themselves, especially in low- and middle-income countries, may prioritize affordable and dependable access to enable industrial upgrading and other productivity gains," they wrote.

And it's not just the developing world. "US companies, ranging from established large tech companies to some of the most hyped AI startups, are widely adopting Chinese open-weight models," they observed. "The existence of open-weight Chinese models at the good-enough level may thus decrease global actors' reliance on US companies providing models through APIs."

There are numerous caveats to the increased Chinese preeminence. The open-weight models still may not provide enough transparency to alleviate many concerns about the Chinese government's involvement in their development. While open-weight models can be run on any computer of sufficient power, many users, noted HAI, "will use the apps, APIs, and integrated solutions offered by DeepSeek, Alibaba, and others." As a result, "This typically means user data is under the control of these companies and may physically travel to China, potentially exposing information to legal or extralegal access by the Chinese government or corporate competitors."

And, they emphasized, it appears that Chinese developers, such as DeepSeek, have fewer concerns about guardrails and other "responsible AI" parameters. "An evaluation by CAISI, the US government's AI testing center, found that DeepSeek models, on average, were 12 times more susceptible to jailbreaking attacks than comparable US models," they wrote. "Other independent evaluations conducted by safety researchers also demonstrate that DeepSeek's guardrails can easily be bypassed."

Those concerns mean China's ultimate influence remains uncharted territory. Still, the report aligns with comments from seasoned observers who view the rise of China and the slowing of benchmark gains as signs that the preeminence of US commercial firms is waning. As AI scholar Kai-Fu Lee observed earlier this year, large language models are now commodities, making OpenAI's business model vulnerable to the economics of open-source AI such as DeepSeek. More broadly, the report offers compelling evidence that China's role in global AI will persist, and that the West's role in governing the technology may be smaller in the years to come than it was when OpenAI's ChatGPT dominated the headlines.
[2]
Emerge's 2025 Story of the Year: How the AI Race Fractured the Global Tech Order - Decrypt
The great unraveling began with a single number: $256,000. DeepSeek, a year-old Chinese startup, claimed it spent that relatively small sum training an AI model that matched the capabilities of OpenAI -- which spent over a hundred million dollars to get to the same place. When the app hit Apple's store in January, Nvidia lost $600 billion in a single trading day, the largest one-day wipeout in market history.

The technical feat aside, DeepSeek's efficiency breakthrough quickly ignited a global contest far beyond benchmarks or code. Nvidia's China market share had collapsed from 95% to zero. Beijing banned all foreign AI chips from government data centers. The Pentagon signed $10 billion in AI defense contracts. And the world's two largest economies had split the technology stack into warring camps, from silicon to software to standards. The AI war of 2025 was redrawing the map of global power.

DeepSeek's breakthrough exposed a strategic miscalculation that had defined American AI policy for years: the belief that controlling advanced chips would permanently cripple China's ambitions. The company trained its R1 model using older H800 GPUs -- chips that fell below export control thresholds -- proving that algorithmic efficiency could compensate for hardware disadvantages. "DeepSeek R1 is one of the most amazing and impressive breakthroughs I've ever seen -- and as open source, a profound gift to the world," venture capitalist Marc Andreessen posted on X after testing it.

The AI market entered panic mode. Stocks tanked, politicians started polishing their patriotic speeches, analysts picked apart what could turn out to be a bubble, and enthusiasts mocked American models that cost orders of magnitude more than their Chinese counterparts, which were free or cheap and required a fraction of the money and resources to train.

Washington's response was swift and punishing. The Trump administration expanded export controls throughout the year, banning even downgraded chips designed specifically for the Chinese market. By April, Trump had restricted Nvidia from shipping its H20 chips. "While the Nvidia news is concerning, it's not a shock as we are in the middle of a trade war between the US and China and expect more punches thrown by both sides," Dan Ives, global head of technology research at Wedbush Securities, told CNN.

The tit-for-tat escalated into full decoupling. A new Chinese directive issued in September banned Nvidia, AMD, and Intel chips from any data center receiving government money -- a market worth over $100 billion since 2021. Jensen Huang revealed the company's market share in China had hit "zero, compared to 95% in 2022." "At the moment, we are 100% out of China," Huang said. "I can't imagine any policymaker thinking that's a good idea -- that whatever policy we implemented caused America to lose one of the largest markets in the world to zero." He called U.S. policy "a mistake" that would backfire by accelerating Chinese chip independence.

He was right. Huawei and domestic players like Cambricon now dominate China's AI infrastructure. By year's end, analysts projected Chinese chipmakers would capture 40% of the domestic AI server market -- a stunning reversal from near-total American dominance just three years earlier.
But the semiconductor war was only the surface. Beneath it, America deployed its most potent weapon: control over the economics of the global market, setting up an "AI Action Plan" in July and a policy of tariffs and sanctions aimed at cementing its political and financial dominance.

In response, China began to exert control over the physical elements that make modern technology possible. In October, Beijing announced the strictest rare earth export controls in its history. The new restrictions didn't just limit sales -- they applied the Foreign Direct Product Rule to rare earths for the first time, meaning even products made outside China using Chinese rare earth technology would require export licenses. Companies with any affiliation to foreign militaries would be automatically denied.

The target was unmistakable: America's defense industrial base. China controls 94% of permanent magnet production and 90% of rare earth refining -- the elements essential for F-35 fighter jets, Tomahawk missiles, and the AI chips that power autonomous weapons. The expanded controls covered holmium, erbium, thulium, europium, and ytterbium, each critical to defense systems.

The U.S. wasn't caught flat-footed. In July, the Pentagon had invested $400 million in MP Materials, America's only rare earth miner, becoming its largest shareholder. The deal included a 10-year price floor of $110 per kilogram -- nearly double the market rate -- to protect domestic production from Chinese price dumping. But even with this investment, MP Materials would produce just 1,000 tons of neodymium-boron-iron magnets by year's end -- less than 1% of China's 138,000-ton output. "It's scandalous that we don't have a rare earths strategic reserve," University of Pennsylvania finance professor Jeremy Siegel told CNBC. The supply chain warfare had exposed a vulnerability more fundamental than chip design: America's military depended on adversary-controlled minerals to function.

The AI battle of the titans is continuing. In late November, Trump signed an executive order launching the Genesis Mission -- a Department of Energy-led AI initiative the White House compared in "urgency and ambition" to the Manhattan Project. The Genesis Mission aims to build an integrated AI platform that would harness decades of federal scientific datasets to train "scientific foundation models" and deploy AI agents for autonomous research and discovery. Its goals range from nuclear fusion to advanced manufacturing to semiconductor development, with the platform designed to give American researchers access to supercomputing resources and proprietary datasets no Chinese lab could match. Whether the $400 million already invested in rare earth mining or the Genesis Mission's national laboratory network can offset China's manufacturing dominance remains unclear, but Washington is now treating AI supremacy as a matter of wartime urgency.

It turns out that China's advancements were the product of a Military-Civil Fusion Strategy, as the Marine Corps calls it. Under Xi Jinping's oversight, China has created an integrated ecosystem where nearly every technological advance can also serve military purposes. PLA strategists envision AI transforming not just weapons but warfare itself. Large language models would conduct cognitive operations, manipulating adversary perceptions and decision-making. Swarms of obsolete fighters converted to autonomous drones would overwhelm defenses through sheer scale.
The goal: transition to "intelligentized warfare," where speed of decision-making -- measured in milliseconds -- determines victory.

In the West, Silicon Valley's relationship with the Pentagon underwent its own revolution. Tech giants that once banned military work now competed for defense contracts worth hundreds of billions. The trend started back in December 2024, when Palantir and Anduril announced a consortium to build AI infrastructure for the military. OpenAI, which had prohibited weapons applications, reversed course and signed defense partnerships with the Pentagon. Google, which abandoned Project Maven in 2018 after employee protests, quietly returned with a $200 million Pentagon contract in July. Anthropic also adopted a more anti-China political stance, urging governments to intervene to hinder China's advances and secure Western dominance of the industry.

While hardware wars raged and militaries mobilized, American companies staged a comeback in AI's most visible consumer domain: video generation. OpenAI's Sora 2, released in September, set new standards with synchronized audio, 4K resolution, and multi-shot storytelling. Google's Veo 3 and its 3.1 update followed, leveraging unrestricted access to H100 and H200 chips that Chinese competitors couldn't obtain. Just a few months earlier, China's Kuaishou and other firms had led text-to-video development. Now American firms dominated, proving that in compute-intensive domains, hardware access remained decisive.

The American resurgence extended beyond video to the foundational models themselves. In November, Anthropic released Claude Opus 4.5 -- what the company called "the best model in the world for coding, agents, and computer use." The model became the first to break 80% on SWE-bench Verified, a benchmark measuring real-world software engineering capabilities, outperforming both OpenAI's GPT-5.1 and Google's Gemini 3 Pro. Anthropic claimed the model scored higher on its internal engineering tests than any human job candidate ever had. For an industry that had spent January panicking over DeepSeek's efficiency breakthrough, Claude Opus 4.5 served as a reminder that American labs still held the performance crown -- at least for now.

2025 was also a great year for open-source models. In fact, one could argue that this was the year open-source AI caught up -- and, again, it involved its own dose of the good old China-versus-America drama. Alibaba's Qwen family alone accounted for 40% of new language models uploaded monthly to Hugging Face, spawning over 100,000 derivatives and 600 million downloads. The open approach built soft power that export controls couldn't touch -- developers worldwide could run Chinese models without restriction, creating a parallel ecosystem independent of American infrastructure.

And how can you prevent global adoption of free open-source models? With regulations. China's DeepSeek faced bans across dozens of countries. Italy moved first in January, blocking the app over data privacy violations. Taiwan, Australia, South Korea, and multiple U.S. states and agencies followed. By July, NATO allies including the Czech Republic had branded DeepSeek a "Trojan horse" for Beijing's intelligence services.

Meta's Llama used to be the most popular LLM in the community; its fourth generation was released this year. OpenAI also released GPT-oss, its first open-weight model in years. Beyond that, the open-source LLM scene in America generated little hype.
Ai2 released a family of models trained in America from scratch, and other companies, including Perplexity, fine-tuned DeepSeek to make it more pro-US and anti-China in its responses. Not everything is rivalry, though: when developers leave geopolitical fights aside and work toward common goals, they come up with nice products. The models developed by Nous Research -- a research team spanning America, China, Europe, and the UAE -- are a good example of that.

Beyond the China-US Cold War 2.0, other governments also got more involved in AI, making it a key element of their public agendas. Saudi Arabia and the UAE pledged $2 trillion in AI investments during Trump's May visit -- money dwarfing American hyperscaler spending. Saudi Arabia's $600 billion commitment included partnerships with Nvidia, AMD, Google Cloud, and AWS to build 2,200 megawatts of data center capacity -- more than four times the UAE's 500 megawatts. Both nations walked a tightrope, wanting American chips and expertise while maintaining deep China ties through Huawei-built telecommunications infrastructure. Washington demanded they choose sides, imposing strict controls to ensure AI hardware didn't reach Beijing or Moscow.

Europe, too, mounted its own sovereignty bid. The European Commission unveiled a €200 billion InvestAI initiative in April, targeting AI gigafactories and data infrastructure to reduce dependence on American and Chinese technology. But by year's end, the €200 billion remained largely aspirational. Europe allocated just 18% of its €252 billion in venture capital to AI between 2020 and 2025, compared with America's 34% of $1.33 trillion.

The fracturing carries profound implications. Trade patterns are realigning. Military doctrines are being rewritten around AI-enabled warfare. Developing nations face pressure to choose between Eastern and Western standards -- decisions that will shape their digital governance for decades.

China's bet on open-source democratization versus America's proprietary model represents competing visions of technological power. Beijing seeks influence through freely available tools that create dependencies subtler than export controls. Washington relies on maintaining leads in frontier capabilities and controlling access to the most powerful systems. Neither strategy guarantees victory. China's open models gain adoption but sacrifice the economic returns that fund development. America's closed systems generate revenue but risk irrelevance if developers migrate to unrestricted alternatives. Europe... is being Europe.

What began as a trade dispute over AI chips has metastasized into full-spectrum competition encompassing technology, ideology, resources, and military doctrine. China weaponized its rare earth monopoly. America mobilized its defense industrial base. Both nations fused civilian innovation with military applications in a race toward AI-enabled warfare that has no precedent in history. The silicon iron curtain that descended in 2025 may prove as consequential as the one that divided Europe for half a century. Only this time, the fault line runs through every smartphone, data center, autonomous system, and permanent magnet that powers modern civilization. The great divide has begun.
[3]
As US battles China on AI, some companies choose Chinese
New York (AFP) - Even as the United States is locked in a bitter rivalry with China over the deployment of artificial intelligence, Chinese technology is quietly making inroads into the US market. Despite considerable geopolitical tensions, Chinese open-source AI models are winning over a growing number of programmers and companies in the United States.

These are different from the closed generative AI models that have become household names -- ChatGPT-maker OpenAI or Google's Gemini -- whose inner workings are fiercely protected. In contrast, "open" models offered by many Chinese rivals, from Alibaba to DeepSeek, allow programmers to customize parts of the software to suit their needs.

Globally, use of Chinese-developed open models has surged from just 1.2 percent in late 2024 to nearly 30 percent in August, according to a report published this month by the developers' platform OpenRouter and US venture capital firm Andreessen Horowitz. China's open-source models "are cheap -- in some cases free -- and they work well," Wang Wen, dean of the Chongyang Institute for Financial Studies at Renmin University of China, told AFP.

One American entrepreneur, speaking on condition of anonymity, said their business saves $400,000 annually by using Alibaba's Qwen AI models instead of proprietary models. "If you need cutting-edge capabilities, you go back to OpenAI, Anthropic or Google, but most applications don't need that," said the entrepreneur. US chip titan Nvidia, AI firm Perplexity, and California's Stanford University are also using Qwen models in some of their work.

DeepSeek shock

The January launch of DeepSeek's high-performance, low-cost, and open-source "R1" large language model (LLM) defied the perception that the best AI tech had to come from US juggernauts like OpenAI, Anthropic, or Google. It was also a reckoning for the United States -- locked in a battle for dominance in AI tech with China -- on how far its archrival had come.

AI models from China's MiniMax and Z.ai are also popular overseas, and the country has entered the race to build AI agents -- programs that use chatbots to complete online tasks like buying tickets or adding events to a calendar. Agent-friendly -- and open-source -- models, like the latest version of the Kimi K2 model from the startup Moonshot AI, released in November, are widely considered the next frontier in the generative AI revolution.

The US government is aware of open-source's potential. In July, the Trump administration released an "AI Action Plan" that said America needed "leading open models founded on American values". These could become global standards, it said. But so far US companies are taking the opposite track. Meta, which had led the country's open-source efforts with its Llama models, is now concentrating on closed-source AI instead. However, this summer, OpenAI -- under pressure to revive the spirit of its origin as a nonprofit -- released two "open-weight" models (slightly less malleable than "open-source").

'Build trust'

Among major Western companies, only France's Mistral is sticking with open-source, but it ranks far behind DeepSeek and Qwen in usage rankings. Western open-source offerings are "just not as interesting," said the US entrepreneur who uses Alibaba's Qwen. The Chinese government has encouraged open-source AI technology, despite questions over its profitability.
Mark Barton, chief technology officer at OMNIUX, said he was considering using Qwen, but some of his clients could be uncomfortable with the idea of interacting with Chinese-made AI, even for specific tasks. Given the current US administration's stance on Chinese tech companies, risks remain, he told AFP. "We wouldn't want to go all-in with one specific model provider, especially one that's maybe not aligned with Western ideas," said Barton. "If Alibaba were to get sanctioned or usage was effectively blacklisted, we don't want to get caught in that trap."

But Paul Triolo, a partner at DGA-Albright Stonebridge Group, said there were no "salient issues" surrounding data security. "Companies can choose to use the models and build on them...without any connection to China," he explained. A recent Stanford study posited that "the very nature of open-model releases enables better scrutiny" of the tech. Gao Fei, chief technology officer at Chinese AI wellness platform BOK Health, agreed. "The transparency and sharing nature of open source are themselves the best ways to build trust," he said.
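To illustrate the "download it, run it, customize it" point the AFP piece makes about open models, here is a minimal sketch of pulling an open-weight checkpoint from Hugging Face and prompting it locally with the transformers library. The checkpoint name, prompt, and hardware assumptions (a GPU and the accelerate package for device_map="auto") are illustrative, not details drawn from the reporting.

```python
# Illustrative sketch: fetch an open-weight checkpoint and run it locally.
# Assumes `transformers`, `torch`, and `accelerate` are installed and the
# chosen checkpoint fits on the available hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # example open-weight model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain open-weight licensing in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the weights sit on local hardware, the same pattern extends to fine-tuning or distilling the model for a specific task, which is the kind of customization closed, API-only models do not allow.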
Chinese AI models from Alibaba, DeepSeek, and others have achieved near-parity with leading US systems, according to a Stanford report. The open-model approach is driving widespread global adoption, particularly in developing nations, as US companies shift toward closed systems. This development marks a significant shift in the AI race and the US-China rivalry over technology dominance.
Chinese AI models have reached a statistical dead heat with leading US artificial intelligence systems, marking a dramatic shift in the global AI race. According to a Stanford University Human-Centered AI institute report titled "Beyond DeepSeek: China's Diverse Open-Weight AI Ecosystem and its Policy Implications," Chinese large language models (LLMs) like Alibaba's Qwen family now perform at near-state-of-the-art levels across major benchmarks [1]. The report shows Qwen models are statistically tied with Anthropic's Claude and within close range of OpenAI and Google's best offerings.

The acceleration comes as DeepSeek shocked the industry in January by claiming it spent just $256,000 training an AI model that matched capabilities OpenAI achieved with over a hundred million dollars [2]. This training-efficiency breakthrough exposed a critical miscalculation in US strategy: the belief that controlling advanced AI chips would permanently limit China's ambitions. DeepSeek trained its R1 model using older H800 GPUs that fell below US export-control thresholds, proving algorithmic efficiency could compensate for hardware disadvantages.
The rise of Chinese open-source AI models is fueling what Stanford researchers call a "global diffusion" movement. Use of Chinese-developed open models surged from just 1.2 percent in late 2024 to nearly 30 percent by August 2025, according to OpenRouter and Andreessen Horowitz data [3]. Countries worldwide, especially developing nations, are adopting these models as cost-effective alternatives to building AI infrastructure from scratch.

The cost advantage of Chinese-developed AI is compelling. One American entrepreneur reported saving $400,000 annually by using Alibaba's Qwen models instead of proprietary systems [3]. Major US organizations, including Nvidia, AI firm Perplexity, and Stanford University, now use Qwen models in their work. By September 2025, Chinese fine-tuned models comprised 63% of all new derivative models released on Hugging Face, and Qwen surpassed Meta's Llama to become the most downloaded LLM family on the platform [1].

The AI race has fractured the global tech order through escalating restrictions and retaliation. After DeepSeek's breakthrough, Nvidia lost $600 billion in a single trading day, the largest one-day market wipeout in history [2]. The Trump administration expanded US export controls on AI chips throughout 2025, banning even downgraded chips designed for Chinese markets. By April, restrictions prevented Nvidia from shipping H20 chips to China.

China responded with its own measures. A September directive banned Nvidia, AMD, and Intel chips from any data center receiving government funding, a market worth over $100 billion since 2021 [2]. Nvidia CEO Jensen Huang revealed the company's China market share collapsed from 95% in 2022 to "zero" by 2025. "I can't imagine any policymaker thinking that's a good idea," Huang stated, calling US policy "a mistake" that accelerated Chinese chip independence. Analysts projected Chinese chipmakers would capture 40% of the domestic AI server market by year's end.
The competitive landscape shifted as Meta, the previous leader in open-source AI, slipped in rankings and moved toward the closed-source approach of OpenAI, Google, and Anthropic [1]. This leaves Chinese companies dominating open-weight model development. The top 22 Chinese open models all outperform OpenAI's own "open-weight" model, GPT-oss, according to the Stanford HAI report.

"Leadership in AI now depends not only on proprietary systems but on the reach, adoption, and normative influence of open-weight models worldwide," wrote Caroline Meinhardt, the report's lead author [1]. The widespread global adoption of Chinese models may reshape technology access patterns and impact AI governance, safety, and competition. While the Trump administration's July AI Action Plan stated America needed "leading open models founded on American values," US companies are moving in the opposite direction, with only France's Mistral maintaining a significant Western open-source presence.

Despite geopolitical tensions, data security concerns appear manageable. "Companies can choose to use the models and build on them without any connection to China," explained Paul Triolo of DGA-Albright Stonebridge Group [3]. The Stanford report noted that "the very nature of open-model releases enables better scrutiny" of the technology. As Chinese models continue gaining technical proficiency and market share, the question shifts from whether they can compete to how their dominance in open-source AI will shape global AI standards and governance frameworks.