16 Sources
[1]
Qualcomm is turning parts from cellphone chips into AI chips to rival Nvidia
Qualcomm is launching a pair of new AI chips in an attempt to challenge Nvidia's dominance in the market. On Monday, Qualcomm announced plans to release its new AI200 chip next year, followed by the AI250 in 2027 -- both of which are built on the company's mobile neural processing technology. The new chips are built for deploying AI models, rather than training them. The launch marks a notable change for Qualcomm, which has primarily made processors for mobile phones, laptops, tablets, and telecommunications equipment. As reported by CNBC, Qualcomm's AI-focused processors are based on the company's Hexagon neural processing units that power AI features in its chips for mobile devices and laptops. CNBC reports that Qualcomm's AI processors can work inside a rack with up to 72 chips functioning as a single computer, similar to Nvidia and AMD's GPUs. Qualcomm's AI200 chip features 768GB of RAM, along with performance optimized for AI inference. Meanwhile, the AI250 will come with "a generational leap in efficiency" that the company claims will allow for "much lower power consumption." Humain, the AI company backed by Saudi Arabia's Public Investment Fund, has already announced plans to use the AI200 and AI250 to power computing systems, building on a partnership to develop AI data centers across Saudi Arabia.
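Those two figures, 72 chips per rack and 768GB per chip, imply an enormous memory pool. Here is a back-of-the-envelope sketch in Python using the article's numbers; the bytes-per-parameter values are standard conversions, and treating all rack memory as weight storage (ignoring KV caches and activations) is a deliberate simplification, not a Qualcomm-confirmed configuration:

```python
# Back-of-the-envelope: aggregate memory of a fully populated AI200 rack.
# 72 chips/rack and 768 GB/card are figures from the article; nothing here
# is a Qualcomm-confirmed rack configuration.

CARDS_PER_RACK = 72
GB_PER_CARD = 768
total_gb = CARDS_PER_RACK * GB_PER_CARD  # 55,296 GB, roughly 55 TB per rack

# Standard storage cost per model parameter at common inference precisions.
BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "INT4": 0.5}

for fmt, nbytes in BYTES_PER_PARAM.items():
    # Largest model whose weights alone would fit in the rack's memory.
    max_params_t = total_gb * 1e9 / nbytes / 1e12
    print(f"{fmt}: ~{max_params_t:.0f}T parameters fit in {total_gb:,} GB")
```

Even allowing generous headroom for KV caches, a pool that size comfortably holds today's largest models, which is presumably the point of pairing an inference chip with so much LPDDR.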
[2]
Qualcomm unveils AI200 and AI250 AI inference accelerators -- Hexagon takes on AMD and Nvidia in the booming data center realm
Qualcomm on Monday formally announced two upcoming AI inference accelerators -- the AI200 and AI250 -- that will hit the market in 2026 and 2027. The new accelerators are said to compete against rack-scale solutions from AMD and Nvidia with improved efficiency and lower operational costs when running large-scale generative AI workloads. The announcement also reaffirms Qualcomm's plan to release updated products on a yearly cadence. Both the Qualcomm AI200 and AI250 accelerators are based on Qualcomm Hexagon neural processing units (NPUs) customized for data center AI workloads. The company has been gradually improving its Hexagon NPUs in recent years, so the latest versions of these processors already feature scalar, vector, and tensor accelerators (in a 12+8+1 configuration); support data formats including INT2, INT4, INT8, INT16, FP8, and FP16; and offer micro-tile inferencing to reduce memory traffic, 64-bit memory addressing, virtualization, and generative AI model encryption for extra security. Scaling Hexagon for data center workloads is a natural choice for Qualcomm, though it remains to be seen what performance targets the company will set for its AI200 and AI250 units. Qualcomm's AI200 rack-scale solution will be the company's first data-center-grade inference system, powered by AI200 accelerators with 768 GB of LPDDR memory onboard (a lot of memory for an inference accelerator), using PCIe interconnects for scale-up and Ethernet for scale-out. The system will use direct liquid cooling and have a power envelope of 160 kW per rack, unprecedented power consumption for an inference solution. In addition, the system will support confidential computing for enterprise deployments. The solution will be available in 2026. The AI250, launching a year later, keeps this structure but adds a near-memory compute architecture to boost effective memory bandwidth by over 10 times. In addition, the system will support disaggregated inference, which enables compute and memory resources to be dynamically shared across cards. Qualcomm positions it as a more efficient, high-bandwidth solution optimized for large transformer models, while preserving the same thermal, cooling, security, and scalability characteristics as the AI200. "With Qualcomm AI200 and AI250, we're redefining what's possible for rack-scale AI inference," said Durga Malladi, SVP & GM, Technology Planning, Edge Solutions & Data Center, Qualcomm Technologies. "These innovative new AI infrastructure solutions empower customers to deploy generative AI at unprecedented TCO, while maintaining the flexibility and security modern data centers demand." In addition to building hardware platforms, Qualcomm is also building a hyperscaler-grade, end-to-end software platform optimized for large-scale inference. The platform is set to support major ML and generative AI toolsets -- including PyTorch, ONNX, vLLM, LangChain, and CrewAI -- while enabling seamless model deployment. The software stack will support disaggregated serving, confidential computing, and one-click onboarding of pre-trained models to simplify deployment. "Our rich software stack and open ecosystem support make it easier than ever for developers and enterprises to integrate, manage, and scale already trained AI models on our optimized AI inference solutions," said Malladi. "With seamless compatibility for leading AI frameworks and one-click model deployment, Qualcomm AI200 and AI250 are designed for frictionless adoption and rapid innovation."
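For a sense of what "one-click onboarding of pre-trained models" tends to mean in practice, here is a minimal sketch using vLLM, one of the toolsets Qualcomm names. Qualcomm has not published its stack, so this shows vLLM's standard offline-inference API with a stand-in Hugging Face model; how a Qualcomm backend would be selected is unknown:

```python
# Minimal vLLM offline-inference sketch: pull a pre-trained model from the
# Hugging Face hub and generate text. The model name is a stand-in; any
# Qualcomm-specific device/backend plumbing is not public and is omitted.

from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # downloads weights on first run

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Summarize what an AI inference accelerator does."], params)

for out in outputs:
    print(out.outputs[0].text)
```

The pitch, as described, is that this same workflow would target AI200/AI250 racks without code changes, with disaggregated serving handled below the API.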
One crucial aspect of its AI200 and AI250 rack-scale inference solutions that Qualcomm did not disclose is which host processors these machines will run. The company formally began development of its own data-center-grade CPUs earlier this year. While some CPU microarchitecture groundwork was probably done by the Nuvia team before that, it is still going to take about a year to define and develop the logical design, then at least six months to implement the design and tape it out, then months to bring the chip up and sample it. In short, it is reasonable to expect Qualcomm's in-house CPUs to emerge in late 2027 at the earliest, but more likely in 2028. That means the AI200, at least, is poised to use an off-the-shelf Arm or x86 CPU, so the question is: which one?
[3]
Qualcomm shares jump as it launches new AI chip to rival Nvidia
Qualcomm's shares surged as much as 20 per cent on Monday as the US chip company launched its first data-centre processors for artificial intelligence, seeking to grab a share of a multibillion-dollar market dominated by Nvidia. Saudi Arabia's Humain, an AI company backed by the kingdom's Public Investment Fund, will be Qualcomm's first customer, as they build on a partnership announced in May. Qualcomm's stock, which was trading around 12 per cent higher by midday in New York, made its biggest one-day move for several years in response to the news, adding tens of billions of dollars to push its market capitalisation above $200bn. Investors see the chip giant, which is best known for its smartphone processors, moving to diversify its products and benefit from the AI infrastructure boom, including a recent surge in investment in "sovereign AI". So far, Nvidia has been the primary beneficiary of the AI boom, with analysts estimating that it holds more than three quarters of the market for the specialised processors needed to train and run large language models, the systems behind OpenAI's ChatGPT and Google's Gemini. Qualcomm's launch is the latest example of how rivals are trying to narrow that lead, after OpenAI struck AI chip deals with AMD and Broadcom in recent weeks. Humain plans to deploy 200 megawatts of Qualcomm's new AI accelerators starting in 2026, as the Gulf state seeks to position itself as a hub for artificial intelligence. "By establishing advanced AI data centres powered by Qualcomm's industry-leading inference solutions, we are helping the kingdom create a technology ecosystem that will accelerate its AI ambitions of becoming a hub of intelligent computing," said Qualcomm chief executive Cristiano Amon. The announcement follows a state visit to the Middle East by US President Donald Trump in May, when he was accompanied by US tech leaders including Amon. Qualcomm and Humain at the time struck a memorandum of understanding to deliver advanced AI data centres. Qualcomm's AI200 and AI250 chips will launch in 2026 and 2027 respectively, promising to help AI applications run faster. The products will be available in rack-scale, liquid-cooled formats, mirroring how Nvidia and its competitors have moved towards offering multiple chips connected up inside a server rack. Qualcomm says it is now committed to annual launches of AI chips, following a pattern established by Nvidia. The AI250 will also offer a new memory architecture that Qualcomm says will bring a "generational leap in efficiency and performance". Memory has emerged as one of the key constraints on the speed and capability of AI chips. Massive investment in data centres by Big Tech groups as well as nation states has catapulted Nvidia past a record $4tn valuation. But rivals such as AMD, as well as Nvidia's own customers including Amazon and OpenAI, are developing their own AI processors. Shares in UK-based Arm, whose chip blueprints are used by Qualcomm to design its chips, also rose around 4.5 per cent on Monday. Nvidia shares were up around 2.5 per cent.
[4]
Qualcomm announces AI chips to compete with AMD and Nvidia
The AI chips are a shift from Qualcomm, which has thus far focused on semiconductors for wireless connectivity and mobile devices, not massive data centers. Qualcomm said that both the AI200, which will go on sale in 2026, and the AI250, planned for 2027, can come in a system that fills up a full, liquid-cooled server rack. Qualcomm is matching Nvidia and AMD, which offer their graphics processing units, or GPUs, in full-rack systems that allow as many as 72 chips to act as one computer. AI labs need that computing power to run the most advanced models. Qualcomm's data center chips are based on the AI parts in Qualcomm's smartphone chips called Hexagon neural processing units, or NPUs. "We first wanted to prove ourselves in other domains, and once we built our strength over there, it was pretty easy for us to go up a notch into the data center level," Durga Malladi, Qualcomm's general manager for data center and edge, said on a call with reporters last week. The entry of Qualcomm into the data center world marks new competition in the fastest-growing market in technology: equipment for new AI-focused server farms. Nearly $6.7 trillion in capital expenditures will be spent on data centers through 2030, with the majority going to systems based around AI chips, according to a McKinsey estimate.
[5]
Qualcomm aims for a spot in the AI data center with new chips
Why it matters: Unprecedented demand has created a fresh opportunity for Nvidia's rivals -- if they can deliver sufficiently powerful chips.
Driving the news: Qualcomm announced two chips: the AI200, due out next year, and a follow-up chip -- the AI250 -- due the following year, with what Qualcomm promises is a ground-breaking new memory architecture.
* "It improves the memory bandwidth by a factor of more than 10," Qualcomm senior vice president Durga Malladi said in a briefing with reporters.
* While Qualcomm typically delivers only chips to customers, here it has also designed a rack-level system that plugs directly into a data center.
Of note: Saudi AI company Humain will be the first customer for the new chips, with plans to bring on 200 megawatts of compute starting in 2026.
Between the lines: Qualcomm is talking up the energy efficiency of its chips, saying they will be cheaper over the long term.
* Earlier this year, the company touted a study showing that AI compute on devices running its mobile chips dramatically decreased power consumption.
Flashback: It's not Qualcomm's first effort to crack the data center market.
[6]
Qualcomm's AI200 turns up the heat on Nvidia -- and puts inference economics in the spotlight - SiliconANGLE
Qualcomm Inc. shares spiked as much as 20% early today after the company unveiled new data center artificial intelligence accelerators, the AI200 and AI250, aimed squarely at Nvidia Corp.'s inference stronghold with its graphics processing units. According to today's reporting, the AI200 is slated to ship in 2026, with the AI250 following in 2027, and both will come as standalone components or add-in cards that slot into existing servers. The move expands Qualcomm's strategy from AI PCs and edge devices into cloud and enterprise inference at scale -- a battleground where cost, power and software maturity decide winners. Here is my Breaking Analysis from the Cube Community, in context from our reporting, interviews and research. Qualcomm is playing the right game at the right time. Inference is the AI profit center and it's increasingly heterogeneous. If the AI200/AI250 deliver competitive latency, model density, and perf-per-watt -- with a developer-friendly stack -- Qualcomm can carve out meaningful share in a market that wants credible alternatives to GPU-only designs. The company's history in low-power, Arm-based compute, its momentum in AI PCs, and prior AI100 deployments provide a foundation. The hurdle is software gravity and ecosystem depth, where Nvidia still sets the pace. Our bottom line: 2026-2027 will see an accelerated shakeout in inference silicon. Qualcomm's announcement signals it plans to be in that final round -- and enterprises should welcome the added optionality.
[7]
Qualcomm (QCOM) stock soars as company challenges Nvidia with new AI data center chips
Shares of Qualcomm (NASDAQ: QCOM) surged on Monday after the company announced a bold new strategy to enter the AI data center market, unveiling a series of chips designed to rival Nvidia and AMD for certain AI-related tasks. The news sent the Qualcomm stock price soaring, signaling a new chapter for the semiconductor giant as it pivots toward the high-stakes world of enterprise AI. This strategic move is a clear response to the unprecedented demand for AI computing power, creating a fresh opportunity for new players to challenge the market leaders. For investors watching QCOM stock, this represents a significant push beyond the company's traditional mobile chip business. On Monday, Qualcomm announced two new accelerator chips and a full rack-level system designed to plug directly into a data center. Both solutions are designed for generative AI inference workloads and come with comprehensive software support, including for popular AI frameworks and one-click deployment of Hugging Face models. The company is positioning these products as cost-effective alternatives for data centers running AI, with a focus on energy efficiency and a lower total cost of ownership. Saudi AI company Humain will be the first customer for the new chips, with plans to bring 200 megawatts of compute online starting in 2026. This announcement marks Qualcomm's second major attempt to break into the data center market. Its first effort with the Centriq family of processors in 2017 failed to make a significant dent in Intel's market share, and the company exited the market a year later. This time, however, the market is different. The explosive growth of AI has created a seemingly insatiable demand for powerful, efficient chips, a market currently dominated by Nvidia. The market's reaction was overwhelmingly positive. The Qualcomm stock price jumped 12% on Monday following the announcement, reflecting investor confidence in the company's new direction. This surge pushed QCOM shares to a new 52-week high of $182.23, giving the company a market capitalization of over $185 billion. With this new initiative, Qualcomm is making a direct play for a piece of the lucrative AI data center pie. While commercial availability of the new chips is still a year away (the AI200 in 2026 and the AI250 in 2027), the company has committed to an annual release cycle for its data center AI roadmap. The move represents a significant strategic expansion for Qualcomm and a new competitive threat to Nvidia's dominance. For investors in QCOM stock, it signals a clear commitment to capitalizing on the biggest trend in technology today: Artificial intelligence.
[8]
Qualcomm Stock Is Soaring Today After Chipmaker Makes a Big AI Move
The move could mean more competition for leading AI chipmakers such as Nvidia and AMD. Qualcomm is joining the competition to supply chips for AI data centers. Qualcomm (QCOM), which has largely focused on chips for mobile phones, laptops and other consumer devices to date, on Monday unveiled two AI accelerator chips for data centers: the AI200, which is set for release in 2026, and the AI250, which is planned for 2027. Shares of Qualcomm were up more than 13% in recent trading, pacing gainers in the S&P 500 and Nasdaq Composite. With Monday's gains, they've added about one-quarter of their value in 2025. The move marks a big step for Qualcomm, which said that it will be committed to a data center roadmap with an annual cadence for major releases or updates going forward. It could also mean more competition for leading AI chipmakers such as Nvidia (NVDA) and Advanced Micro Devices (AMD), with several analysts suggesting in recent weeks that they see Nvidia's dominant market share diminishing over time as competition and demand grow. Shares of Nvidia were up 2.5% in recent trading amid a broader market rally, while AMD shares were up slightly after hitting an all-time high at the open.
[9]
Qualcomm accelerates data center push with new AI chips launching next year
Qualcomm is expanding into the AI infrastructure market with two new data center chips, the AI200 and AI250, set for commercial availability in 2026 and 2027. This move aims to diversify its business beyond smartphones and capitalize on the booming demand for AI hardware. The company also announced accelerator cards and racks based on these new chips. Qualcomm on Monday unveiled two artificial intelligence chips for data centers, with commercial availability from next year, as it pushes to diversify beyond smartphones and expand into the fast-growing AI infrastructure market. Shares of Qualcomm surged nearly 15% on the news. The new chips, called AI200 and AI250, are designed for improved memory capacity and for running AI applications, or inference, and will be available in 2026 and 2027, respectively. Global investment in AI chips has soared as cloud providers, chipmakers and enterprises rush to build infrastructure capable of supporting complex large language models, chatbots and other generative AI tools. Nvidia chips, however, underpin much of the current AI boom. To strengthen its AI portfolio, Qualcomm agreed in June to buy Alphawave, which designs semiconductor technology for data centers, for about $2.4 billion. In May, Qualcomm also said it would make custom data center central processing units that use technology from Nvidia to connect to the firm's artificial intelligence chips. Qualcomm said the new chips support common AI frameworks and tools, with advanced software support, and added that they will lower the total cost of ownership for enterprises. The San Diego-based company also unveiled accelerator cards and racks based on the new chips. Earlier this month, peer Intel announced a new artificial intelligence chip for the data center, called Crescent Island, that it plans to launch next year.
[10]
Qualcomm Unveils New AI Chips To Compete In Data Center Race - Qualcomm (NASDAQ:QCOM)
Qualcomm Technologies, Inc. (NASDAQ:QCOM) announced on Monday the launch of its next-generation artificial intelligence inference-optimized solutions for data centers, namely the Qualcomm AI200 and AI250 chip-based accelerator cards and racks. Building on the company's leadership in Neural Processing Unit (NPU) technology, these solutions offer rack-scale performance and superior memory capacity for fast generative AI inference, delivering high performance per dollar per watt, Qualcomm said. Qualcomm AI200 introduces a purpose-built rack-level AI inference solution designed to deliver low total cost of ownership (TCO) and optimized performance for large language and multimodal model (LLM, LMM) inference, as well as other AI workloads. It supports 768 GB of LPDDR per card, offering higher memory capacity at lower cost, while enabling exceptional scale and flexibility for AI inference. The Qualcomm AI250 solution will debut with an innovative memory architecture based on near-memory computing, providing a generational leap in efficiency and performance for AI inference workloads by delivering more than 10 times higher effective memory bandwidth and significantly lower power consumption. This enables disaggregated AI inferencing for efficient utilization of hardware while meeting customer performance and cost requirements. Both rack solutions feature direct liquid cooling for thermal efficiency, PCIe for scale up, Ethernet for scale out, confidential computing for secure AI workloads, and a rack-level power consumption of 160 kW. Qualcomm AI200 and AI250 will be commercially available by 2026 and 2027, respectively. Competition: Qualcomm's AI accelerator rivals include Nvidia Corp's (NASDAQ:NVDA) H100 and H200 chips, Advanced Micro Devices, Inc's (NASDAQ:AMD) Instinct MI300X accelerators, and Intel Corp's (NASDAQ:INTC) Gaudi accelerators. Alphabet Inc.'s (NASDAQ:GOOGL) Google has developed its own Tensor Processing Units (TPUs), which are optimized for popular machine learning frameworks, including TensorFlow and PyTorch. Amazon.com Inc.'s (NASDAQ:AMZN) Amazon Web Services (AWS) created Inferentia chips to help customers scale machine learning applications more effectively. Price Action: Qualcomm stock was trading higher by 0.97% at $170.58 premarket at last check Monday.
[11]
Qualcomm looks to take on Nvidia, AMD as it enters AI accelerator market (QCOM:NASDAQ)
Qualcomm (NASDAQ:QCOM) announced its entry into the artificial intelligence accelerator market on Monday, as it looks to take on industry heavyweights Nvidia (NASDAQ:NVDA) and AMD (NASDAQ:AMD). The Cristiano Amon-led company unveiled its Qualcomm AI200 and AI250 chip-based accelerator cards and racks. Qualcomm's entry with the AI200 and AI250 aims to compete with industry leaders, leveraging high memory capacity, lower costs, and innovative architecture; however, it is starting from a much smaller data center revenue base than Nvidia and AMD. Qualcomm targets the sizeable AI accelerator market, which AMD's CEO recently estimated could exceed $500 billion. Qualcomm is diversifying beyond handsets into automotive, IoT, and now AI accelerators; recent quarters showed revenue growth in automotive and IoT, indicating progress in broadening its revenue sources.
[12]
Qualcomm stock skyrockets 19%, hits 52-week high after it announces chips to take on Nvidia and AMD
Qualcomm stock just soared 19% -- its biggest jump in over a year. The reason? Qualcomm announced new AI accelerator chips, the AI200 and AI250, built to rival Nvidia and AMD in powering AI data centers. With a $182 billion market cap and entry into a $6.7 trillion AI infrastructure boom, Qualcomm is no longer just a smartphone chipmaker -- it's going all-in on energy-efficient AI hardware that could redefine the next era of AI computing. Analysts broadly see Qualcomm's stock as a moderate buy with significant upside potential.
[13]
Qualcomm Enters AI Chip Market as Rival to Nvidia and AMD | PYMNTS.com
The San Diego-based company said the AI200 will be available in 2026 and the AI250 in early 2027. The processors are built for the "inference" phase of AI, where trained models are put to work on real-world tasks rather than being developed. Qualcomm said the chips can be installed individually or in full data-center racks and will support popular AI software frameworks to simplify deployment for businesses. Inference already represents a growing share of total computing demand and is expected to overtake training by 2026 as companies embed AI into customer support, financial forecasting and logistics workflows. Qualcomm said its new chips are optimized for performance per watt, a measure of how efficiently they process AI tasks. Internal testing cited by CNBC showed that an AI200 rack can deliver equivalent output using up to 35% less power than comparable GPU-based systems, savings that could lower annual energy costs by millions of dollars for large data-center operators. Competitors are also expanding their offerings. AMD's MI325X accelerator, launched in September, is built for high-memory AI workloads, while Intel's Gaudi 3 emphasizes open-source integration. Qualcomm's approach differs by offering rack-scale inference systems, allowing enterprises to install complete configurations rather than assembling components. The company also announced a partnership with Saudi-based startup Humain, which plans to deploy about 200 megawatts of Qualcomm-powered AI systems starting in 2026, according to Investors.com. Qualcomm said the collaboration demonstrates the chips' readiness for enterprise-scale workloads across sectors including finance, manufacturing and healthcare. Qualcomm's move into AI infrastructure reflects its strategy to diversify beyond smartphones -- a market that has matured in recent years. The company completed a $2.4 billion acquisition of U.K.-based Alphawave IP Group in June to expand its connectivity and systems integration capabilities for large computing installations, Reuters reported. The launch positions Qualcomm in direct competition with Nvidia and Advanced Micro Devices (AMD), which dominate AI data-center hardware. As The Wall Street Journal noted, Qualcomm's entry signals that chipmakers are racing to capture enterprise demand as more companies build their own AI infrastructure rather than relying entirely on cloud providers. Qualcomm President Cristiano Amon told CNBC that the company aims to make AI "cost-efficient at scale," drawing on its experience building power-efficient mobile chips to improve energy performance in large computing environments. "The next stage of AI will be about running it everywhere efficiently," Amon said. Running AI systems at scale is costly. Every time a generative model answers a question, analyzes data or processes a transaction, it consumes computing power and electricity. Qualcomm said its new chips are engineered to deliver high performance with lower power use, potentially helping businesses manage AI expenses more predictably. While Nvidia continues to dominate AI training, Qualcomm's strategy targets inference, the layer where models perform the work, and Nvidia's near monopoly there is already eroding as firms like AMD, Intel and now Qualcomm introduce alternatives built around energy efficiency and modular deployment. For enterprises, the arrival of new chip suppliers could translate into more options for sourcing infrastructure and lower barriers to scaling AI tools.
The data-center market is also expanding rapidly. Qualcomm's focus on power efficiency and cost predictability aims to attract enterprise buyers who measure success by operational stability and long-term total cost of ownership, rather than peak computing speed. If these new entrants succeed, enterprises could benefit from greater supply resilience and more competitive pricing in the years ahead. A more diverse chip supply chain may ease the GPU shortages that have constrained enterprise AI expansion, while competition among hardware vendors could lower infrastructure costs across the industry. As PYMNTS has reported, the global spending on AI infrastructure could exceed $2.8 trillion through 2029.
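The "millions of dollars" figure is easy to sanity-check. Here is a rough sketch taking the 160 kW rack envelope and the up-to-35% power reduction from the reporting above; the electricity rate and fleet size are illustrative assumptions, not reported numbers:

```python
# Sanity check on the claimed energy savings. The 160 kW rack power and the
# up-to-35% reduction come from the articles; PRICE_PER_KWH and FLEET_RACKS
# are assumptions for illustration only.

AI200_RACK_KW = 160
SAVINGS_FRACTION = 0.35   # "up to 35% less power" for equivalent output
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.08      # assumed industrial electricity rate, USD/kWh
FLEET_RACKS = 200         # hypothetical large-operator deployment

# If an AI200 rack draws 35% less than a comparable GPU rack at equal output,
# the GPU baseline draws 160 / (1 - 0.35), about 246 kW.
gpu_rack_kw = AI200_RACK_KW / (1 - SAVINGS_FRACTION)
saved_kw = gpu_rack_kw - AI200_RACK_KW

savings_per_rack = saved_kw * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"~{saved_kw:.0f} kW saved per rack -> ${savings_per_rack:,.0f}/year")
print(f"Across {FLEET_RACKS} racks: ${savings_per_rack * FLEET_RACKS:,.0f}/year")
```

At roughly $60,000 per rack per year under these assumptions, a fleet of a few hundred racks clears the "millions of dollars" bar, though the real comparison hinges on delivered throughput, not nameplate power.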
[14]
Qualcomm stock jumps after unveiling new AI chips to challenge Nvidia By Investing.com
Investing.com -- Qualcomm (NASDAQ:QCOM) stock rose 12% Monday after the company announced its entry into the data center AI market with two new inference-optimized chips designed to compete with Nvidia's (NASDAQ:NVDA) offerings. The semiconductor giant unveiled the Qualcomm AI200 and AI250 chip-based accelerator cards and racks, which the company claims will deliver rack-scale performance and superior memory capacity for generative AI inference workloads at industry-leading total cost of ownership. The AI200 solution features 768 GB of LPDDR memory per card, while the AI250 introduces an innovative near-memory computing architecture that provides over 10x higher effective memory bandwidth with lower power consumption. Both solutions include direct liquid cooling and support for PCIe and Ethernet connectivity. Qualcomm's new offerings come with a comprehensive software stack supporting popular AI frameworks and include features like one-click deployment of Hugging Face models. The company is positioning these products as cost-effective alternatives for data centers running AI inference workloads. The move represents Qualcomm's strategic expansion beyond its traditional mobile chip business into the rapidly growing data center AI market currently dominated by Nvidia. However, commercial availability is still some time away, with the AI200 expected in 2026 and the AI250 in 2027. Qualcomm also committed to an annual cadence for its data center AI roadmap going forward, focusing on inference performance, energy efficiency, and competitive total cost of ownership.
[15]
Qualcomm: positioning itself in the artificial intelligence chip market to compete with Nvidia and AMD
On Monday, Qualcomm unveiled its first chips for artificial intelligence data centers, the AI200 and AI250, which are set to launch in 2026 and 2027, respectively. This strategic shift marks the mobile semiconductor specialist's entry into a segment dominated by Nvidia and AMD. These new accelerators, designed to operate in parallel in complete liquid-cooled systems, are aimed at infrastructures hosting the most powerful AI models. They are based on Hexagon neural processing units (NPUs) already used in Qualcomm smartphones, but adapted here to an industrial scale. Following the announcement, Qualcomm's share price rose by nearly 9% during trading. The group intends to focus on the inference phase, i.e., the execution of AI models, promising better energy cost-efficiency than its competitors. Each rack would consume around 160 kilowatts, a level comparable to Nvidia systems, while offering operating savings. Qualcomm is also adopting a modular approach: its customers will be able to purchase either a complete system or components to integrate into their own architectures. This flexibility could appeal to cloud giants such as Amazon, Alphabet, and Microsoft, which are looking to diversify their suppliers. According to McKinsey, global investment in data centers is expected to reach $6.7 trillion by 2030, with the majority of that dedicated to AI. Qualcomm hopes to capitalize on this momentum to gain ground against Nvidia, which still holds over 90% of the GPU market. In May, the company signed an agreement with Saudi Arabia's Humain to deploy its inference chips in regional infrastructures totaling up to 200 megawatts of power. Finally, Qualcomm points out that its AI cards, capable of handling up to 768 GB of memory, offer substantial gains in energy consumption and memory performance compared to competing solutions.
[16]
Qualcomm accelerates data center push with new AI chips launching next year
(Reuters) - Qualcomm on Monday unveiled two artificial intelligence chips for data centers, with commercial availability from next year, as it pushes to diversify beyond smartphones and expand into the fast-growing AI infrastructure market. Shares of Qualcomm surged nearly 15% on the news. The new chips, called AI200 and AI250, are designed for improved memory capacity and for running AI applications, or inference, and will be available in 2026 and 2027, respectively. Global investment in AI chips has soared as cloud providers, chipmakers and enterprises rush to build infrastructure capable of supporting complex large language models, chatbots and other generative AI tools. Nvidia chips, however, underpin much of the current AI boom. To strengthen its AI portfolio, Qualcomm agreed in June to buy Alphawave, which designs semiconductor technology for data centers, for about $2.4 billion. In May, Qualcomm also said it would make custom data center central processing units that use technology from Nvidia to connect to the firm's artificial intelligence chips. Qualcomm said the new chips support common AI frameworks and tools, with advanced software support, and added that they will lower the total cost of ownership for enterprises. The San Diego-based company also unveiled accelerator cards and racks based on the new chips. Earlier this month, peer Intel announced a new artificial intelligence chip for the data center, called Crescent Island, that it plans to launch next year. (Reporting by Harshita Mary Varghese in Bengaluru; Editing by Vijay Kishore)
Qualcomm announces AI200 and AI250 chips for data centers, marking its entry into the booming AI infrastructure market. The move represents a significant shift for the company, known primarily for mobile processors, as it aims to compete with industry leaders like Nvidia and AMD.
Qualcomm, a company primarily known for its mobile processors, is making a significant leap into the booming AI infrastructure market with the announcement of two new AI chips: the AI200 and AI250 [1][2]. This move represents a direct challenge to Nvidia's dominance in the AI chip market; Nvidia currently holds over three-quarters of the market for specialized AI processors [3].
Source: Economic Times
The AI200 and AI250 chips are based on Qualcomm's Hexagon neural processing units (NPUs), which have been customized for data center AI workloads [2]. These chips are designed for deploying AI models rather than training them, focusing on inference capabilities [1].
Source: Benzinga
Key features of the AI200, set to launch in 2026, include [2]:
* 768 GB of LPDDR memory per accelerator card
* Direct liquid cooling, with rack-level power consumption of 160 kW
* PCIe for scale-up and Ethernet for scale-out connectivity
* Confidential computing for secure enterprise workloads
The AI250, scheduled for 2027, promises a "generational leap in efficiency" with [2][3]:
* A near-memory computing architecture delivering more than 10 times higher effective memory bandwidth
* Significantly lower power consumption
* Disaggregated inference that lets compute and memory resources be shared dynamically across cards
The announcement of these chips has had a significant impact on Qualcomm's market position. The company's shares surged by up to 20% following the news, pushing its market capitalization above $200 billion [3].
Source: PYMNTS
Qualcomm has secured its first customer for the new chips: Humain, an AI company backed by Saudi Arabia's Public Investment Fund. Humain plans to deploy 200 megawatts of Qualcomm's AI accelerators starting in 2026, as part of Saudi Arabia's ambition to become a hub for artificial intelligence [3][5].
The entry of Qualcomm into the data center AI chip market comes at a time of unprecedented demand for AI infrastructure. McKinsey estimates that nearly $6.7 trillion in capital expenditures will be spent on data centers through 2030, with the majority going to systems based around AI chips [4].
Qualcomm is positioning its new chips as more energy-efficient alternatives to current offerings, potentially providing cost savings over the long term [5]. The company has committed to annual launches of AI chips, following a pattern established by Nvidia [3].
As the AI chip market continues to evolve, Qualcomm's entry marks a new phase of competition in this rapidly growing sector, potentially reshaping the landscape of AI infrastructure in the coming years.