Z.ai trains GLM-Image entirely on Huawei hardware, challenging Nvidia's AI chip dominance


Chinese AI firm Z.ai has released GLM-Image, an open-source image generation AI model trained completely on Huawei's Ascend chips without any Nvidia GPUs. The 16-billion parameter model demonstrates accurate text rendering and complex prompt understanding, proving that domestic Chinese hardware can now sustain large-scale AI training independently.

Z.ai Achieves Breakthrough Training GLM-Image on Huawei Hardware

Chinese AI company Z.ai has announced GLM-Image, a new image generation AI model trained entirely on domestic Chinese hardware, marking a significant milestone in the generative AI hardware market [1]. The company, which also operates under the name Zhipu AI, claims this is the first advanced model built completely without relying on Nvidia or AMD chips. GLM-Image was developed using the Ascend Atlas 800T A2, a Huawei server that pairs Kunpeng 920 processors (with either 64 or 48 Arm cores) with Huawei's Ascend 910 AI processors [1]. This achievement directly challenges Nvidia's dominance in an industry where access to H100 chip clusters has been considered essential for building state-of-the-art models.

Source: Digit

Breaking the Silicon Valley Hardware Monopoly

For years, a fundamental assumption has governed generative AI development: cutting-edge models require thousands of Nvidia GPUs, particularly H100s [2]. GLM-Image shatters this paradigm by demonstrating that Huawei's Ascend chips can sustain the stability required for long-running large-model training. The most recent Ascend 910C, released in 2025, delivers approximately 800 TFLOPS of FP16 compute per card, roughly 80% of the H100's performance [1]. While Z.ai hasn't disclosed how many servers or accelerators were used, or how long training took, the successful training of this 16-billion-parameter model signals a maturation of alternative AI infrastructure [2]. This development carries particular weight given the strict export controls announced by Washington, which now require assessment of every application to export GPUs to Chinese buyers [1].
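As a rough sanity check on that per-card comparison, the short Python snippet below reproduces the arithmetic. The Ascend 910C figure is the one cited above; the roughly 990 TFLOPS dense FP16 figure assumed for an H100 SXM and the 1,000-GPU cluster size are illustrative assumptions, not numbers reported by Z.ai or Huawei.

# Back-of-the-envelope comparison of per-card FP16 throughput.
# Ascend figure: as reported above; H100 figure and cluster size: assumptions.
ascend_910c_fp16_tflops = 800      # per card, per the article
h100_fp16_tflops = 990             # assumed dense FP16 for an H100 SXM (no sparsity)

ratio = ascend_910c_fp16_tflops / h100_fp16_tflops
print(f"Ascend 910C is ~{ratio:.0%} of an H100 per card at FP16")   # ~81%

# Cards needed to match the raw FP16 throughput of a hypothetical 1,000-H100 cluster
# (ignores interconnect, memory bandwidth and software-stack efficiency).
h100_cluster_size = 1_000
ascend_cards_needed = h100_cluster_size * h100_fp16_tflops / ascend_910c_fp16_tflops
print(f"~{ascend_cards_needed:.0f} Ascend 910C cards for equivalent raw FP16 throughput")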

Hybrid Architecture Delivers Accurate Text Rendering

GLM-Image distinguishes itself from standard diffusion-model competitors like FLUX.1 and Stable Diffusion through its hybrid architecture [2]. The model employs an "autoregressive + diffusion decoder" design that decouples understanding from generation [1]. This two-stage approach allows the model to comprehend complex prompts before rendering images, resulting in superior adherence to instructions. A standout feature is Glyph-byT5, a specialized character-level encoder that tackles the notorious problem of "AI gibberish" in generated text [2]. Instead of treating text as mere shapes, this encoder enables remarkably accurate text rendering, even for complex scripts like Chinese characters, and GLM-Image outperforms models like FLUX.1 and SD3.5 in text-rendering benchmarks on the CVTG-2k dataset [2].
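To make the decoupling concrete, the following is a minimal, hypothetical PyTorch sketch of an "autoregressive planner + diffusion decoder" pipeline: stage one reads the prompt and produces a compact sequence of conditioning tokens (a single causal-masked pass standing in for true token-by-token generation), and stage two iteratively denoises image latents conditioned on them. The class names, shapes, and toy denoising loop are assumptions for illustration, not Z.ai's published implementation.

# Illustrative sketch only: not GLM-Image's actual architecture or code.
import torch
import torch.nn as nn

class AutoregressivePlanner(nn.Module):
    # Stage 1 (hypothetical): contextualise the prompt and emit "plan" tokens.
    def __init__(self, vocab_size=32000, d_model=512, n_latents=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_latent = nn.Linear(d_model, d_model)
        self.n_latents = n_latents

    def forward(self, prompt_ids):
        # Causal mask so each token only sees the prompt prefix, mimicking an
        # autoregressive pass over the prompt.
        mask = nn.Transformer.generate_square_subsequent_mask(prompt_ids.shape[1])
        h = self.encoder(self.embed(prompt_ids), mask=mask)
        return self.to_latent(h[:, : self.n_latents, :])   # condensed semantic "plan"

class DiffusionDecoder(nn.Module):
    # Stage 2 (hypothetical): denoise image latents conditioned on the plan.
    def __init__(self, d_model=512, image_tokens=256):
        super().__init__()
        self.denoise = nn.Sequential(
            nn.Linear(d_model * 2, d_model), nn.GELU(), nn.Linear(d_model, d_model)
        )
        self.image_tokens = image_tokens

    def forward(self, plan, steps=4):
        cond = plan.mean(dim=1, keepdim=True).expand(-1, self.image_tokens, -1)
        x = torch.randn(plan.shape[0], self.image_tokens, plan.shape[-1])  # pure noise
        for _ in range(steps):                    # toy fixed-step denoising loop
            x = x - 0.1 * self.denoise(torch.cat([x, cond], dim=-1))
        return x                                  # stand-in for decoded image latents

prompt_ids = torch.randint(0, 32000, (1, 64))     # fake tokenised prompt
plan = AutoregressivePlanner()(prompt_ids)        # "understanding" stage
image_latents = DiffusionDecoder()(plan)          # "generation" stage
print(image_latents.shape)                        # torch.Size([1, 256, 512])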

Implications for AI Self-Sufficiency and Global Competition

The release of GLM-Image as an open-source model has far-reaching implications beyond technical achievements. Z.ai completed the entire process from data preprocessing to large-scale training on the Atlas server, proving "the feasibility of training cutting-edge models on a domestically produced full-stack computing platform" [1]. This matters particularly as industry experts predict future models will increasingly be smaller, domain-specific affairs. If China can now produce such models without hardware from Nvidia or AMD, it represents a direct threat to those chip designers' future revenue streams [1]. However, questions remain about whether Huawei's Ascend chips trained the model at speeds and costs competitive enough to genuinely disrupt the global AI hardware landscape. Think tank ASPI has noted concerns that China uses AI to export its culture and values, recommending nations "prevent China's AI models, governance norms and industrial policies from shaping global technology ecosystems" [1]. As the generative AI hardware market evolves, GLM-Image serves as proof that the Nvidia monopoly may be a market preference rather than a technical necessity, suggesting the future of AI infrastructure will be far more competitive than Silicon Valley anticipated [2].

Source: The Register
