3 Sources
[1]
One of the world's largest mobile networks will train its trillion-parameter strong LLM on Huawei's AI chips as Nvidia, AMD are sidelined
China Telecom turns to domestic chipmakers to circumvent US export restrictions

Chinese state-owned carrier China Telecom has announced the development of two LLMs trained entirely on domestically produced chips. In a statement from the Institute of AI at China Telecom, published on WeChat and reported by the South China Morning Post, its open-source TeleChat2-115B, which has over 100 billion parameters, and a second unnamed model, which reportedly has 1 trillion parameters, were trained using tens of thousands of locally manufactured chips. The statement claims that this development "indicates that China has truly realized total self-sufficiency in domestic LLM training," a challenging goal for the country since the US imposed strict export regulations that block access to high-end GPUs like the Nvidia H100 and A100.

While China Telecom hasn't specified who supplied the chips used to train its LLMs, it's likely that Huawei provided the majority, if not all, of them. The company has been positioning itself as a domestic alternative to Nvidia, and the South China Morning Post notes that China Telecom "previously disclosed that it is developing LLM technology using Ascend chips developed by the Shenzhen-based telecom equipment giant." Huawei has recently begun sending samples of its new Ascend 910C processor to Chinese server and telecom companies for testing, and it has been targeting major Nvidia customers in China in the hopes of getting them to switch at least some of their business.

Although there is a thriving black market in China for Nvidia's high-end GPUs, many companies, including ByteDance and Alibaba, prefer to stay compliant and use lower-spec, permitted GPUs like Nvidia's H20 to avoid legal and reputational risks and to maintain access to Nvidia's support. These companies are increasingly turning to Huawei for their AI needs; it was recently reported that TikTok owner ByteDance had placed an order for 100,000 Ascend processors. The South China Morning Post also reports that, in addition to Huawei, China Telecom is exploring hardware from Cambricon, a local AI chip start-up, to further diversify its chip supply.
[2]
State-owned China Telecom has trained domestic AI LLMs using homegrown chips -- one model reportedly uses 1 trillion parameters
China Telecom, one of the largest wireless carriers in mainland China, says that it has developed two large language models (LLMs) relying solely on domestically manufactured AI chips.

The state-owned company did not disclose which chips it used for training the LLMs, but the South China Morning Post reports that China Telecom has previously announced it would use Huawei Ascend AI chips for LLM training, so it makes sense for the telecom giant to use Huawei's processors for this work. If the information is accurate, this is a crucial milestone in China's attempt to become independent of other countries for its semiconductor needs, especially as the U.S. keeps tightening restrictions on the supply of the latest, highest-end chips to China in the U.S.-China chip war.

Huawei, which has largely been banned from the U.S. and allied countries, is one of the leaders in China's local chip industry and has been working hard to develop AI chips under its Ascend line. It currently offers the Huawei Ascend 910B, although recent reports say the company is now sending samples of its successor, the Ascend 910C, to customers for testing. Beijing is also increasingly urging companies to stay away from Nvidia's AI chips and buy locally instead, amid rumors that the U.S. plans to sanction the Nvidia H20, the most potent accelerator Nvidia has been able to offer China while remaining compliant with Washington's bans. Huawei is therefore positioned to fill the vacuum Nvidia would leave if it cannot create GPUs that satisfy both China's demands and America's regulations.

If China Telecom's LLMs were indeed trained entirely on Huawei chips, it would be a massive success for Huawei and the Chinese government. After all, Huawei has said it would continue making progress in AI chips despite all the bans and sanctions the U.S. has applied to the East Asian country. It would also show that Beijing's investments in semiconductor technology are bearing fruit, lending weight to Xi Jinping's assertion that China does not need ASML to progress.
[3]
China trains 100-billion-parameter AI model on local tech
Research institute seems to have found Huawei to do it - perhaps with Arm cores

China Telecom's AI Research Institute claims it trained a 100-billion-parameter model using only domestically produced computing power - a feat that suggests Middle Kingdom entities aren't colossally perturbed by sanctions that stifle exports of Western tech to the country.

The model is called TeleChat2-115B and, according to a GitHub update posted on September 20, was "trained entirely with domestic computing power and open sourced." "The open source TeleChat2-115B model is trained using 10 trillion tokens of high-quality Chinese and English corpus," the project's GitHub page states.

The page also contains a hint about how China Telecom may have trained the model, in a mention of compatibility with the "Ascend Atlas 800T A2 training server" - a Huawei product listed as supporting the Kunpeng 920 7265 or Kunpeng 920 5250 processors, respectively running 64 cores at 3.0GHz and 48 cores at 2.6GHz. Huawei builds those processors on the Armv8.2 architecture and bills them as produced with a 7nm process.

At 100 billion parameters, TeleChat2 trails the likes of recent Llama models, which apparently top 400 billion parameters, or OpenAI's o1, which has been guesstimated to have been trained with 200 billion parameters. While parameter count alone doesn't determine a model's power or utility, the low-ish parameter count suggests training TeleChat2 would likely have required less computing power than was needed for other projects. Which may be why we can't find a mention of a GPU - although the Ascend training server has a very modest one to drive a display at 1920 × 1080 at 60Hz with 16 million colors.

It therefore looks like the infrastructure used to train this model was not at parity with the kind of rigs available outside China, suggesting that tech export sanctions aren't preventing the Middle Kingdom from pursuing its AI ambitions - or that it can deliver in other ways, such as China Telecom's enormous scale. The carrier has revenue of over $70 billion, drawn from its provision of over half a billion wired and wireless subscriptions. It's also one of the biggest users and promoters of OpenStack. Even without access to the latest and greatest AI hardware, China Telecom can muster plenty of power.
China Telecom announces the development of two large language models trained entirely on domestically produced chips, potentially using Huawei's Ascend AI processors. This move demonstrates China's progress in achieving technological self-sufficiency amid US export restrictions.
China Telecom, one of the world's largest mobile networks, has announced a significant advancement in artificial intelligence by developing two large language models (LLMs) trained entirely on domestically produced chips 1. This move marks a crucial milestone in China's pursuit of technological self-sufficiency, especially in the face of US export restrictions on high-end GPUs.
The state-owned carrier has developed two notable models:

- TeleChat2-115B, an open-source model with over 100 billion parameters
- A second, unnamed model that reportedly has 1 trillion parameters
These models were trained using tens of thousands of locally manufactured chips, demonstrating China's capability to develop advanced AI technologies without relying on Western hardware 2.
While China Telecom hasn't explicitly named its chip supplier, it's highly likely that Huawei provided the majority, if not all, of the chips used in training these LLMs 1. Huawei has been positioning itself as a domestic alternative to Nvidia, with its Ascend line of AI processors gaining traction in the Chinese market.
The GitHub page for TeleChat2-115B mentions compatibility with the "Ascend Atlas 800T A2 training server," a Huawei product that supports Kunpeng 920 processors based on the Armv8.2 architecture 3. While the parameter count of TeleChat2-115B (100 billion) is lower than some Western models, it still represents a significant achievement given the hardware limitations.
This development has several implications for the ongoing technological competition between the US and China:

- It suggests that US export restrictions on high-end GPUs such as the H100 and A100 have not stopped Chinese firms from training very large models 1
- It strengthens Huawei's position as a domestic alternative to Nvidia, with major customers such as ByteDance reportedly ordering Ascend processors in large volumes 1
- It points to a broader diversification of China's AI chip supply, with China Telecom also exploring hardware from the local start-up Cambricon 1
As China continues to invest heavily in its semiconductor industry and AI research, we may see further advancements in domestically produced AI chips and models. This could potentially reshape the global AI landscape and intensify the technological rivalry between the US and China.