25 Sources
[1]
AMD turns to AI startups to inform chip, software design
SAN JOSE, June 13 (Reuters) - Advanced Micro Devices has forged close ties with a batch of artificial intelligence startups as part of the company's effort to bolster its software and improve its chip designs. As AI companies seek alternatives to Nvidia's chips, AMD has expanded its plans to build a viable competing line of hardware, acquiring companies such as server maker ZT Systems in its quest to achieve that goal. But building a successful line of chips also requires a powerful set of software to efficiently run the programs built by AI developers. AMD has acquired several small software companies in recent weeks in a bid to boost its talent, and it has been working to beef up its software stack, broadly known as ROCm. "This will be a very thoughtful, deliberate, multi-generational journey for us," said Vamsi Boppana, senior vice president of AI at AMD.

AMD has committed to improving ROCm and its other software, a boon to customers such as AI enterprise startup Cohere because it results in speedy changes and the addition of new features. Cohere is focused on building AI models tailored for large businesses, versus the foundational AI models that companies like OpenAI target. AMD has made important strides in improving its software, Cohere CEO Aidan Gomez said in an interview with Reuters. Porting Cohere's software to run on AMD chips previously took weeks and now happens in only "days," Gomez said. Gomez declined to disclose exactly how much of Cohere's software relies on AMD chips but called it a "meaningful segment of our compute base" around the world.

OPENAI INFLUENCE

OpenAI has had significant influence on the design of the forthcoming MI450 series of AI chips, said Forrest Norrod, an executive vice president at AMD. AMD's MI400 series of chips will be the basis for a new server called "Helios" that the company plans to release next year.
Nvidia too has engineered whole servers, in part because AI computations require hundreds or thousands of chips strung together. OpenAI's Sam Altman appeared on stage at AMD's Thursday event in San Jose and discussed the partnership between the two companies in broad terms. Norrod said that OpenAI's requests had a big influence on how AMD designed the MI450 series memory architecture and how the hardware can scale up to the thousands of chips necessary to build and run AI applications. The ChatGPT creator also influenced what kinds of mathematical operations the chips are optimized for. "(OpenAI) has given us a lot of feedback that, I think, heavily informed our design," Norrod said. Reporting by Max A. Cherney in San Jose; Editing by Shri Navaratnam. Max A. Cherney is a correspondent for Reuters based in San Francisco, where he reports on the semiconductor industry and artificial intelligence. He joined Reuters in 2023 and has previously worked for Barron's magazine and its sister publication, MarketWatch. Cherney graduated from Trent University with a degree in history.
[2]
AMD CEO unveils new AI chips
SAN JOSE, June 12 (Reuters) - Advanced Micro Devices (AMD.O) CEO Lisa Su showed off a new crop of artificial intelligence chips that will compete with the flagship processors designed by Nvidia (NVDA.O). AMD shares were roughly flat in early afternoon trading. Su took the stage to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. During her speech, executives from xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. AMD's Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips. AMD has struggled to siphon off a portion of the quickly growing market for artificial intelligence chips from the dominant Nvidia. But the company has made a concerted effort to improve its software and produce a line of chips that rival Nvidia's performance. Thursday's event, called "Advancing AI," focused on AMD's data center chips and other hardware. AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch new complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company had acquired 25 companies in the past year related to its AI plans. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO. AMD's ROCm software has struggled to gain traction against Nvidia's CUDA, which is seen by some industry insiders as a key part of protecting that company's dominance.
When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips. Reporting by Max A. Cherney in San Jose, Stephen Nellis in San Francisco and Arsheeya Bajwa in Bengaluru; Editing by Leslie Adler and Marguerita Choy.
[3]
AMD reveals next-generation AI chips with OpenAI CEO Sam Altman
AMD's rack-scale technology enables its latest chips to compete with Nvidia's Blackwell chips, which already come in configurations with 72 graphics processing units stitched together. Nvidia is AMD's only serious rival in big data center GPUs for developing and deploying AI applications. OpenAI -- a notable Nvidia customer -- has been giving AMD feedback on its MI400 roadmap, the chip company said. With the MI400 chips and this year's MI355X chips, AMD is planning to compete against Nvidia on price, with a company executive telling reporters on Wednesday that the chips will cost less to operate thanks to lower power consumption, and that AMD is undercutting Nvidia with "aggressive" prices. So far, Nvidia has dominated the market for data center GPUs, partly because it was the first company to develop the kind of software AI developers needed to take advantage of chips originally designed to display graphics for 3D games. Over the past decade, before the AI boom, AMD focused on competing against Intel in server CPUs. Su said that AMD's MI355X can outperform Nvidia's Blackwell chips, despite Nvidia's "proprietary" CUDA software. "It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress," Su said. AMD shares are flat so far in 2025, signaling that Wall Street doesn't yet see the company as a major threat to Nvidia's dominance. Andrew Dieckmann, AMD's general manager for data center GPUs, said Wednesday that AMD's AI chips would cost less to operate and less to acquire. "Across the board, there is a meaningful cost of acquisition delta that we then layer on our performance competitive advantage on top of, so significant double-digit percentage savings," Dieckmann said.
Over the next few years, big cloud companies and countries alike are poised to spend hundreds of billions of dollars to build new data center clusters around GPUs in order to accelerate the development of cutting-edge AI models. That includes $300 billion this year alone in planned capital expenditures from megacap technology companies. AMD expects the total market for AI chips to exceed $500 billion by 2028, although it hasn't said how much of that market it can claim -- Nvidia currently has over 90% of the market, according to analyst estimates. Both companies have committed to releasing new AI chips on an annual basis, instead of every two years, emphasizing how fierce competition has become and how important bleeding-edge AI chip technology is for companies like Microsoft, Oracle and Amazon. AMD has bought or invested in 25 AI companies in the past year, Su said, including the purchase earlier this year of ZT Systems, a server maker that developed the technology AMD needed to build its rack-sized systems. "These AI systems are getting super complicated, and full-stack solutions are really critical," Su said.
[4]
AMD's new AI roadmap spans GPUs, networking, software, and rack architectures
Editor's take: In the ever-evolving world of GenAI, important advances are happening across chips, software, models, networking, and the systems that combine all these elements. That's what makes it so hard to keep up with the latest AI developments. The difficulty factor becomes even greater if you're a vendor building these kinds of products and working not only to keep up, but to drive those advances forward. Toss in a competitor that's virtually cornered the market - and in the process, grown into one of the world's most valuable companies - and, well, things can appear pretty challenging. That's the situation AMD found itself in as it entered its latest Advancing AI event. But rather than letting these potential roadblocks deter it, AMD made clear that it is inspired to expand its vision, its range of offerings, and the pace at which it delivers new products. From unveiling its Instinct MI400 GPU accelerators and next-generation "Vulcano" networking chips to version 7 of its ROCm software and the debut of a new Helios rack architecture, AMD highlighted all the key aspects of AI infrastructure and GenAI-powered solutions. In fact, one of the first takeaways from the event was how far the company's reach now extends across all the critical parts of the AI ecosystem. As expected, there was a great deal of focus on the official launch of the Instinct MI350 and the higher-wattage, faster-performing MI355X GPUs, which AMD had announced last year. Both are built on a 3nm process, feature up to 288 GB of HBM3E memory, and can be used in both liquid-cooled and air-cooled designs. According to AMD's testing, these chips not only match Nvidia's Blackwell B200 performance levels, but even surpass them on certain benchmarks. In particular, AMD emphasized improvements in inferencing speed (over 3x faster than the previous generation), as well as cost per token (up to 40% more tokens per dollar vs. the B200, according to AMD).
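That tokens-per-dollar claim translates directly into a cost-per-token saving. A quick sketch in Python, taking AMD's quoted 40% figure at face value (the function name is ours, for illustration only):

```python
# Convert a tokens-per-dollar advantage into a cost-per-token saving.
# Assumption: AMD's "40% more tokens per dollar vs. the B200" claim,
# taken at face value; the calculation itself is just arithmetic.

def cost_per_token_savings(tokens_per_dollar_ratio: float) -> float:
    """Fractional reduction in cost per token, given a tokens-per-dollar
    ratio (e.g. 1.4 for '40% more tokens per dollar')."""
    return 1.0 - 1.0 / tokens_per_dollar_ratio

saving = cost_per_token_savings(1.4)
print(f"{saving:.1%}")  # 40% more tokens/$ -> ~28.6% lower cost per token
```

In other words, "40% more tokens per dollar" is the same claim as roughly 29% lower cost per token, not a 40% cost cut.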
AMD also provided more details on its next-generation MI400, scheduled for release next year, and even teased the MI500 for 2027. The MI400 will offer up to 432 GB of HBM4 memory, memory bandwidth of 19.6 TB/sec, and 300 GB/sec of scale-out bandwidth - all of which will be important for both running larger models and assembling the kinds of large rack systems expected to be needed for next-generation LLMs. Some of the more surprising announcements from the event focused on networking. First was a discussion of AMD's next-generation Pensando networking chip and a network interface card they're calling the AMD Pensando Pollara 400 AI NIC, which the company claims is the industry's first shipping AI NIC. AMD is part of the Ultra Ethernet Consortium and, not surprisingly, the Pollara 400 uses the Ultra Ethernet standard. It reportedly offers 20% improvements in speed and 20x more capacity to scale than competitive cards using InfiniBand technology. As with its GPUs, AMD also announced its next-generation networking chip, codenamed "Vulcano," designed for large AI clusters. It will offer 800 Gb/sec network speeds and up to 8x the scale-out performance for large groups of GPUs when released in 2026. AMD also touted the new open-source Ultra Accelerator Link (UAL) standard for GPU-to-GPU and other chip-to-chip connections. A direct answer to Nvidia's NVLink technology, UAL is based on AMD's Infinity Fabric and matches the performance of Nvidia's technology while providing more flexibility by enabling connections between any company's GPUs and CPUs. Putting all of these various elements together, arguably the biggest hardware news - both literally and figuratively - from the Advancing AI event was AMD's new rack architecture designs.
Large cloud providers, neocloud operators, and even some sophisticated enterprises have been moving toward rack-based complete solutions for their AI infrastructure, so it was not surprising to see AMD make these announcements - particularly after acquiring expertise from ZT Systems, a company that designs rack computing systems, earlier this year. Still, it was an important step to show a complete competitive offering with even more advanced capabilities against Nvidia's NVL72 and to demonstrate how all the pieces of AMD's silicon solutions can work together. In addition to showing systems based on their current 2025 chip offerings, AMD also unveiled their Helios rack architecture, coming in 2026. It will leverage a complete suite of AMD chips, including next-generation Epyc CPUs (codenamed Venice), Instinct MI400 GPUs, and the Vulcano networking chip. What's important about Helios is that it demonstrates AMD will not only be on equal footing with next-generation Vera Rubin-based rack systems Nvidia has announced for next year, but may even surpass them. In fact, AMD arguably took a page from the recent Nvidia playbook by offering a multi-year preview of its silicon and rack-architecture roadmaps, making it clear that they are not resting on their laurels but moving aggressively forward with critical technology developments. Importantly, they did so while touting what they expect will be equivalent or better performance from these new options. (Of course, all of these are based on estimates of expected performance, which could - and likely will - change for both companies.) Regardless of what the final numbers prove to be, the bigger point is that AMD is clearly confident enough in its current and future product roadmaps to take on the toughest competition. That says a lot. As mentioned earlier, the key software story for AMD was the release of version 7 of its open-source ROCm software stack. 
The company highlighted multiple performance improvements on inferencing workloads, as well as increased day-zero compatibility with many of the most popular LLMs. They also discussed ongoing work with other critical AI software frameworks and development tools. There was a particular focus on enabling enterprises to use ROCm for their own in-house development efforts through ROCm Enterprise AI. On their own, some of these changes are modest, but collectively they show the clear software momentum that AMD has been building. Strategically, this is critical, because competition against Nvidia's CUDA software stack continues to be the biggest challenge AMD faces in convincing organizations to adopt its solutions. It will be interesting to see how AMD integrates some of its recent AI software-related acquisitions - including Lamini, Brium, and Untether AI - into its range of software offerings. One of the more surprising bits of software news from AMD was the integration of ROCm support into Windows and the Windows ML AI software stack. This helps make Windows a more useful platform for AI developers and potentially opens up new opportunities to better leverage AMD GPUs and NPUs for on-device AI acceleration. Speaking of developers, AMD also used the event to announce its AMD Developer Cloud for software designers, which gives them a free resource (at least initially, via free cloud credits) to access MI300-based infrastructure and build applications with ROCm-based software tools. Again, a small but critically important step in demonstrating how the company is working to expand its influence across the AI software development ecosystem. Clearly, the collective actions the company is taking are starting to make an impact. AMD welcomed a broad range of customers leveraging its solutions in a big way, including OpenAI, Microsoft, Oracle Cloud, HUMAIN, Meta, xAI, and many more.
They also talked about their work in creating sovereign AI deployments in countries around the world. And ultimately, as the company emphasized at the start of the keynote, it's all about continuing to build trust among its customers, partners and potential new clients. AMD has the benefit of being an extremely strong alternative to Nvidia - one that many in the market want to see increase its presence for competitive balance. Based on what was announced at Advancing AI, it looks like AMD is moving in the right direction. Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on X @bobodtech
[5]
AI chip war heats up as AMD unveils its Nvidia Blackwell competitor
AMD claims it has exceeded its energy efficiency goals, lays out bolder ones

AMD has unveiled its Instinct MI350 Series GPUs, promising a staggering 4x improvement in AI performance compared with the previous generation of chips - enough to have Nvidia worried about the market dominance of its Blackwell chips. Company CEO Lisa Su also revealed details of the Helios AI Rack, which is to be built on next-generation Instinct MI400 Series GPUs as well as AMD EPYC Venice CPUs and AMD Pensando Vulcano NICs. The news came at AMD's Advancing AI 2025 conference, together with a series of other hardware, software and AI announcements. Besides the 4x improvement in AI performance, AMD also boasts an eyewatering 35x generational improvement in inferencing, as well as price-performance gains, unlocking 40% more tokens per dollar compared with its key like-for-like rival, the Nvidia B200. Despite Nvidia's market dominance, AMD proudly claims that seven in 10 of the largest model builders and AI companies use its Instinct accelerators, including Meta, OpenAI, Microsoft and xAI. The MI300X has been deployed for Llama 3 and 4 inferencing with Meta, and for proprietary and open-source models with Azure, among others. Besides performance, AMD is also homing in on its environmental goals, claiming that its MI350 Series GPUs exceeded the five-year organizational goal of improving the energy efficiency of AI training and high-performance computing nodes by 30x - reaching a figure of 38x. By 2030, the company also wants to increase rack-scale energy efficiency by 20x compared with 2024, and it already predicts a 95% reduction in the electricity needed for typical AI model training. Looking ahead, Instinct MI400 Series GPUs are expected to deliver up to 10x more performance running inference on Mixture of Experts models. Despite the bold claims, AMD's market cap remains considerably lower than Nvidia's, at $192.14 billion at press time.
"AMD is driving AI innovation at an unprecedented pace, highlighted by the launch of our AMD Instinct MI350 series accelerators, advances in our next generation AMD 'Helios' rack-scale solutions, and growing momentum for our ROCm open software stack," said Su. "We are entering the next phase of AI, driven by open standards, shared innovation and AMD's expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI."
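The two 2030 figures quoted above are two views of the same arithmetic: a 20x gain in rack-scale energy efficiency means a fixed training workload draws one-twentieth of the energy. A minimal check in Python:

```python
# A 20x energy-efficiency improvement implies a fixed workload uses
# 1/20 of the baseline energy, i.e. a 95% reduction -- which is why
# AMD's "20x by 2030" and "95% less electricity" claims match.
efficiency_gain = 20.0
energy_fraction = 1.0 / efficiency_gain   # 0.05 of the 2024 baseline
reduction = 1.0 - energy_fraction         # 0.95
print(f"{reduction:.0%} reduction")
```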
[6]
AMD is 'Su' Ready for AI | AIM
AMD CEO Lisa Su said the company expects the market for AI processors to exceed $500 billion by 2028. This year, AMD's Advancing AI event was on another level. The company made it clear it's no longer afraid of NVIDIA. It introduced the new Instinct MI350 Series GPUs, built on the CDNA 4 architecture, promising a fourfold generational improvement in AI compute and a 35x leap in inferencing performance. It also launched ROCm 7.0, its open software stack for GPU computing and previewed the upcoming MI400 Series and Helios AI rack infrastructure. The company said that MI350X and MI355X GPUs feature 288GB of HBM3E memory and offer up to 8TB/s of memory bandwidth. "MI355 delivers 35x higher throughput when running at ultra-low latencies, which is required for some real-time applications like code completion, simultaneous translation, and transcription," said AMD CEO Lisa Su. Su said that models like Llama 4 Maverick and DeepSeek R1 have seen triple the tokens per second on the MI355 compared to the previous generation. This leads to faster responses and higher user throughput. "The MI355 offers up to 40% more tokens per dollar compared to NVIDIA B200," she added. Each MI355X platform can deliver up to 161 PFLOPs of FP4 performance using structured sparsity. The series supports both air-cooled (64 GPUs) and direct liquid-cooled (128 GPUs) configurations, offering up to 2.6 exaFLOPs of FP4/FP6 compute. The Instinct MI400 Series, expected in 2026, will feature up to 432GB of HBM4 memory and 19.6TB/s of bandwidth. It is set to deliver 40 PFLOPs of FP4 and 20 PFLOPs of FP8 performance. Speaking about the company's open-source software ROCm, Vamsi Boppana, senior vice president of AMD's artificial intelligence group, said it now powers some of the largest AI platforms in the world, supporting major models like Llama and DeepSeek from day one, and delivering over 3.5x inference gains in the upcoming ROCm 7 release. 
He added that frequent updates, support for FP4 data types, and new algorithms like FAv3 are helping ROCm deliver better performance and push open-source frameworks like vLLM and SGLang ahead of closed-source options. "With over 1.8 million Hugging Face models running out of the box, industry benchmarks now in play, ROCm is not just catching up -- it's leading the open AI revolution," he added. AMD is working with leading AI companies, including Meta, OpenAI, xAI, Oracle, Microsoft, Cohere, HUMAIN, Red Hat, Astera Labs and Marvell. Su said the company expects the market for AI processors to exceed $500 billion by 2028. The event, which took place in San Jose, California, also saw OpenAI CEO Sam Altman sharing the stage with Su. "We are working closely with AMD on infrastructure for research and production. Our GPT models are running on MI300X in Azure, and we're deeply engaged in design efforts on the MI400 Series," Altman said. On the other hand, Meta said its Llama 3 and Llama 4 inference workloads are running on MI300X and that it expects further improvements from the MI350 and MI400 Series. Oracle Cloud Infrastructure is among the first to adopt the new system, with plans to offer zettascale AI clusters comprising up to 131,072 MI355X GPUs. Microsoft confirmed that proprietary and open-source models are now running in production on Azure using the MI300X. Cohere said its Command models use the MI300X for enterprise inference. HUMAIN announced a partnership with AMD to build a scalable and cost-efficient AI platform using AMD's full compute portfolio. AMD announced its new open standard rack-scale infrastructure to meet the rising demands of agentic AI workloads, launching solutions that integrate Instinct MI350 GPUs, 5th Gen EPYC CPUs, and Pensando Pollara NICs. "We have taken the lead on helping the industry develop open standards, allowing everyone in the ecosystem to innovate and work together to drive AI forward. 
We utterly reject the notion that one company could have a monopoly on AI or AI innovation," said Forrest Norrod, AMD's executive vice president. The company also previewed Helios, its next-generation rack platform built around the upcoming MI400 GPUs and Venice CPUs. Su said Venice is built on TSMC's 2-nanometer process, features up to 256 high-performance Zen 6 cores, and delivers 70% more compute performance than AMD's current-generation leadership CPUs. "Helios functions like a single, massive compute engine. It connects up to 72 GPUs with 260 terabytes per second of scale-up bandwidth, enabling 2.9 exaflops of FP4 performance," she said, adding that compared to the competition, it supports 50% more HBM4 memory, memory bandwidth, and scale-out bandwidth. AMD's Venice CPUs bring up to 256 cores and higher memory bandwidth, while Vulcano AI NICs support 800G networking and UALink. "Choosing the right CPU gets the most out of your GPU," said Norrod. Helios uses UALink to connect 72 GPUs as a unified system, offering open, vendor-neutral scale-up performance. Describing UALink as a key differentiator, Norrod said one of its most important features is that it's "an open ecosystem" -- a protocol that works across systems regardless of the CPU, accelerator, or switch brand. He added that AMD believes that open interoperability accelerates innovation, protects customer choice, and still delivers leadership performance and efficiency. As AI workloads grow in complexity and scale, AMD says a unified stack is necessary, combining high-performance GPUs, CPUs, and intelligent networking to support multi-agent systems across industries. The currently available solution supports up to 128 Instinct MI350 GPUs per rack with up to 36TB of HBM3E memory. The infrastructure is built on Open Compute Project (OCP) standards and Ultra Ethernet Consortium (UEC) compliance, allowing interoperability with existing infrastructure. 
OCI will be among the first to adopt the MI355X-based rack-scale platform. "We will be one of the first to provide the MI355X rack-scale infrastructure using the combined power of EPYC, Instinct, and Pensando," said Mahesh Thiagarajan, EVP at OCI. Besides that, the new Helios rack solution, expected in 2026, brings tighter integration and higher throughput. It includes next-gen MI400 GPUs, offering up to 432GB of HBM4 memory and 40 petaflops of FP4 performance.
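The rack-level figures in this piece can be cross-checked against the per-GPU numbers it quotes. A sketch assuming simple linear scaling (an idealization that ignores real-world overheads; all inputs are AMD's own claims):

```python
# Cross-check quoted rack aggregates against per-GPU specs,
# assuming linear scaling across the rack.

# Helios (2026): 72 MI400 GPUs at 40 PFLOPs FP4 each.
helios_fp4_exaflops = 72 * 40 / 1000      # PFLOPs -> exaFLOPs
print(helios_fp4_exaflops)                # 2.88, i.e. the quoted ~2.9

# Current rack: 128 MI350-series GPUs at 288 GB HBM3E each.
rack_hbm_tb = 128 * 288 / 1024            # GB -> TB (binary convention)
print(rack_hbm_tb)                        # 36.0, matching "up to 36TB"

# MI355X: 2.6 exaFLOPs across a 128-GPU liquid-cooled configuration vs.
# 161 PFLOPs per 8-GPU platform -- both imply ~20 PFLOPs FP4 per GPU.
print(2600 / 128, 161 / 8)
```

Within rounding, the per-GPU and rack-level claims are mutually consistent.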
[7]
AMD's Su-premacy Begins | AIM
[8]
AMD Unveils AI Server as OpenAI Taps Its Newest Chips
Advanced Micro Devices CEO Lisa Su on Thursday unveiled a new artificial intelligence server for 2026 that aims to challenge Nvidia's flagship offerings as OpenAI's CEO said the ChatGPT creator would adopt AMD's latest chips. Su took the stage at a developer conference in San Jose, California, called "Advancing AI" to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. The MI400 series of chips will be the basis of a new server called "Helios" that AMD plans to release next year. The move comes as the competition between Nvidia and other AI chip firms has shifted away from selling individual chips to selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company. The AMD Helios servers will have 72 of AMD's MI400 series chips, making them comparable to Nvidia's current NVL72 servers, AMD executives said. During its keynote presentation, AMD said that many aspects of the Helios servers - such as the networking standards - would be made openly available and shared with competitors such as Intel. The move was a direct swipe at market leader Nvidia, which uses proprietary technology called NVLink to string together its chips but has recently started to license that technology as pressure mounts from rivals. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said. Su was joined onstage by OpenAI's Sam Altman. The ChatGPT creator is working with AMD on the firm's MI450 chips to improve their design for AI work. "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch," Altman said. 
During her speech, executives from Elon Musk-owned xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. Crusoe, a cloud provider that specialises in AI, told Reuters it is planning to buy $400 million (roughly Rs. 3,440 crore) of AMD's new chips. AMD's Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips. AMD shares ended 2.2 percent lower after the company's announcement. Kinngai Chan, an analyst at Summit Insights, said the chips announced on Thursday were not likely to immediately change AMD's competitive position. AMD has struggled to siphon off a portion of the quickly growing market for AI chips from the dominant Nvidia. But the company has made a concerted effort to improve its software and produce a line of chips that rival Nvidia's performance. AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch new complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company has made 25 strategic investments in the past year that were related to the company's AI plans. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO. AMD's software called ROCm has struggled to gain traction against Nvidia's CUDA, which is seen by some industry insiders as a key part of protecting the company's dominance. When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips. © Thomson Reuters 2025
[9]
AMD Unveils Its Latest Chips, With ChatGPT Maker OpenAI Among Its Customers
AMD (AMD) unveiled its next-generation MI400 chips at its "Advancing AI" event Thursday. The chips aren't expected to launch until 2026, but they already have some high-profile customers, including OpenAI. OpenAI CEO Sam Altman joined AMD CEO Lisa Su onstage Thursday to highlight the ChatGPT developer's partnership with AMD on AI infrastructure and announce that it will make use of the MI400 series. "When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy," Altman said. "It's gonna be an amazing thing." AMD said it counts Meta (META), xAI, Oracle (ORCL), Microsoft (MSFT), Astera Labs (ALAB), and Marvell Technology (MRVL) among its partners as well. AMD showcased its AI server rack architecture at the event, which will combine MI400 chips into one larger system known as Helios. The company compared it to rival Nvidia's (NVDA) Vera Rubin, also expected in 2026. The event also brought the launch of AMD's Instinct MI350 Series GPUs, which it claims offer four times the computing power of the previous generation. Shares of AMD slid about 2% Thursday, leaving the stock down just under 2% for 2025 so far.
[10]
AMD's CEO Claims Their New Chips 'Match' Nvidia's at a Lower Price, and Even Sam Altman Is Excited: 'An Amazing Thing'
Though Nvidia is the undisputed leader in AI chips, capturing over 80% of the market, AMD CEO Lisa Su says AMD's latest chips are "outperforming" Nvidia's with "greater efficiency." Su said at an AMD launch event on Thursday in San Jose, California, that AMD's new MI350 chips are up to 35 times faster than previous generations, per Bloomberg. The MI350 chips began shipping out earlier this month. When it comes to running AI programs, Su claims that AMD's MI355 chip offers "greater efficiency" compared to Nvidia's B200 and GB200 chips, which were released in 2024. She said that the MI355X chip "matches the performance of the significantly more expensive and complex [Nvidia] GB200" at a lower price point. Demand has pushed up the price of AI chips, which can each cost tens of thousands of dollars, according to The New York Times. Nvidia's chips can cost up to four times more than competing AMD chips. OpenAI CEO Sam Altman made an appearance on stage with Su at the event to say that his company would use the latest AMD chips. "It's going to be an amazing thing," Altman said at the event, per CNBC. Su said that the MI355 chip processes 40% more tokens, or units of text that AI models use to understand and process knowledge, compared to chips from competitors like Nvidia. "MI355 can deliver up to 40% more tokens per dollar than competing solutions," Su said at the event. Despite Su's assertions that AMD chips are more efficient than Nvidia's, Nvidia leads the AI chip market with 85.2% of the market share compared to AMD's 14.3%, according to estimates shared with Bloomberg by global research firm IDC. Other companies place Nvidia even further ahead. Jon Peddie Research, a leading analysis firm, reported last week that Nvidia had 92% of the market for AI chips in the first quarter of 2025, compared to 8% for AMD. 
Nvidia and AMD are the industry leaders in advanced computer graphics chips, which are foundational to AI development. Nvidia CEO Jensen Huang said in September that his biggest worry was intense demand for AI technology, as tech companies pour billions of dollars into investments in AI infrastructure. Some of Nvidia's biggest clients, including Meta, Microsoft, Amazon, and Google, have each pledged tens of billions of dollars towards AI spending this year. In turn, Nvidia and AMD have also grown their bottom lines.
[11]
AMD chief executive to unveil new AI chips
AMD CEO Lisa Su will unveil the MI400 AI chip series Thursday in San Jose, detailing plans to rival Nvidia with annual releases, improved software, and full AI systems. Recent acquisitions and hires underscore AMD's push for a stronger foothold in the AI chip market. Advanced Micro Devices CEO Lisa Su is expected to take the stage on Thursday at a company event in San Jose, California, to discuss the company's plans for the artificial intelligence chips and systems it designs. AMD has struggled to siphon off a portion of the quickly growing market for artificial intelligence chips from the dominant Nvidia. But the company has made a concerted effort to improve its software and produce a line of chips that rival Nvidia's performance. During Su's speech, which is set to begin at 9:30 am local time (1630 GMT), the CEO is expected to detail the company's forthcoming MI400 series of AI chips, set to launch next year. AMD has said it will match the annual release schedule that Nvidia began with its Blackwell series of chips. Thursday's event, called "Advancing AI," will focus on AMD's data center chips and other hardware. AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch new complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. Last week, AMD hired the team from chip startup Untether AI. On Wednesday AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO. AMD's software called ROCm has struggled to gain traction against Nvidia's CUDA, which is seen by some industry insiders as a key part of protecting the company's dominance. 
When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.
[12]
Game over for Nvidia? Lisa Su breaks silence as AMD chips leave Nvidia in the dust
Lisa Su, CEO of AMD, gave a big speech at the company's "Advancing AI" conference in San Jose, California. She said AMD wants to solve the world's big problems using high-performance computing. Su said billions of people use AMD tech every day on platforms like Microsoft 365, Facebook, Zoom, Netflix, Uber, Salesforce, and SAP, according to TheStreet. As per the Bloomberg report, she now thinks the AI chip market will go past $500 billion in the next 3 years, which is even more than her previous prediction. AMD's new chips, especially the MI350 series, are faster than Nvidia's chips and are big upgrades over older AMD versions. The new MI355 chip is 35x faster than the older ones and started shipping earlier this month, according to reports. AMD is still behind Nvidia in the AI chip game, but these new chips show AMD is ready to catch up. These AI chips are used to build and run AI tools, and now AMD wants a bigger share of that market, as per TheStreet. AMD claims its MI355 outperforms Nvidia's B200 and GB200 chips when running AI software, and is just as good or better at generating AI code. AMD also says its chips are much cheaper than Nvidia's. Nvidia hasn't responded yet to these claims, as per a report by CNN. Just like Nvidia, AMD can't sell its top AI chips to China because of U.S. trade rules, and it is trying to get the government to ease those rules. Even with all the hype, AMD's stock is down 4.1% in 2025 and 28% lower than last year, as per TheStreet. Mark Lipacis from Evercore ISI raised AMD's target price from $126 to $144 and said the company is doing well with its ROCm software and cloud AI workloads. He also said AMD's client list has grown from just Meta, Oracle, and Microsoft to now include OpenAI, xAI, Cohere, RedHat, and Humain. AMD now has better visibility in the data center GPU market, which could mean a higher stock valuation. According to Yahoo Finance, AMD's forward P/E ratio is under 30, while Nvidia's is under 34. 
Suji Desilva from Roth Capital raised AMD's target price to $150 from $125 and liked AMD's progress in AI processors, GPUs, networking, and rack systems, as stated in the report by CNN. He expects faster growth in 2026 thanks to the MI350-based Helios rack. Desilva also said that AI tools like inferencing and agentic AI are big growth drivers. Christopher Danely from Citi kept a neutral rating with a $120 target, saying AMD did well with the MI355X launch, but didn't share a revenue forecast, which is important for the stock, according to the report by TheStreet. Q1. Is AMD now better than Nvidia in AI chips? AMD says its new MI355 chip is faster and cheaper than Nvidia's top chips, but Nvidia hasn't responded yet. Q2. Why is AMD getting attention after Lisa Su's AI event? AMD launched powerful new AI chips and CEO Lisa Su said the AI market could go over $500 billion in 3 years.
[13]
AMD unveils AI server as OpenAI taps its newest chips
AMD has unveiled its upcoming MI400-based "Helios" AI server, set for 2026, to rival Nvidia's dominance. CEO Lisa Su stressed open collaboration, with support from OpenAI, Meta, and xAI. Su was joined onstage by OpenAI's Sam Altman, who said his company is using AMD's MI300X and MI450 chips. Advanced Micro Devices CEO Lisa Su on Thursday unveiled a new artificial intelligence server for 2026 that aims to challenge Nvidia's flagship offerings, as OpenAI's CEO said the ChatGPT creator would adopt AMD's latest chips. AMD shares were down about 2% after the company announced the news at a developer conference in San Jose, California, called "Advancing AI." Su took the stage to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. The MI400 series of chips will be the basis of a new server called "Helios" that AMD plans to release next year. The move comes as the competition between Nvidia and other AI chip firms has shifted away from selling individual chips to selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company. During its keynote presentation, AMD said that many aspects of the Helios servers - such as the networking standards - would be made openly available and shared with competitors such as Intel. The move was a direct swipe at market leader Nvidia, which uses proprietary technology called NVLink to string together its chips but has recently started to license that technology as pressure mounts from rivals. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said. Su was joined onstage by OpenAI's Sam Altman, who said his company is using AMD's MI300X and MI450 chips. "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch," Altman said. 
During her speech, executives from billionaire Elon Musk-owned xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. Crusoe, a cloud provider that specializes in AI, told Reuters it is planning to buy $400 million of AMD's new chips. AMD's Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips. AMD has struggled to siphon off a portion of the quickly growing market for AI chips from the dominant Nvidia. But the company has made a concerted effort to improve its software and produce a line of chips that rival Nvidia's performance. AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch new complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company has made 25 strategic investments in the past year that were related to the company's AI plans. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO. AMD's software called ROCm has struggled to gain traction against Nvidia's CUDA, which is seen by some industry insiders as a key part of protecting the company's dominance. When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.
[14]
AMD sees AI chip market exceeding $500 billion by 2028
AMD CEO Lisa Su projects the AI processor market will surpass $500 billion by 2028, driven by inferencing demand. At its AI conference, AMD unveiled MI350 GPUs, emphasised open architecture, and highlighted growing adoption by top firms, including Reliance Jio and OpenAI. Advanced Micro Devices (AMD) Chief Executive Lisa Su has said that the chipmaker sees the artificial intelligence processor market topping $500 billion by 2028. The Silicon Valley company said the market is likely to grow at over 60 per cent annually to exceed $500 billion, from a $45 billion opportunity in 2023. Addressing the company's flagship Advancing AI 2025 conference on Thursday, Su said the growth will be led by inferencing work, which is a shift from training. "What I can tell you based on everything that we see today is that that number is going to be exceeding $500 billion by 2028," Su said. The company unveiled the MI350 Series GPUs (graphics processing units) at the flagship event along with a host of other products. Su said opting for the newly launched chips may be beneficial for customers, claiming that they deliver up to 40 per cent more tokens per dollar. Nvidia is generally considered the entrenched player in the GPU market, and demand for its chips is high enough that customers face waiting periods. In her address, Su said that seven of the 10 largest AI customers, including the largest Indian telco Reliance Jio, are deploying AMD Instinct Accelerators at present. AMD announced the MI350 last year; the offering, which has shown a four-times increase in compute compared to the previous generation, is now in production and will be commercially available from the third quarter onwards. During her address, Su spoke with a slew of clients and partners, including OpenAI chief executive Sam Altman, and senior executives from other companies like Meta, Microsoft and Oracle. 
Su said the company is focused on open architecture as a fundamental tenet of its approach to serving the AI market, a stance that differs from the approach adopted by Nvidia. The company also launched a developer cloud access programme, under which it will grant the developer community access to try its offerings first-hand. Su said the company believes that innovation happens faster when technology is thrown open to developers, and backed this up with examples from history. It also showcased the 'Helios AI Rack Scale' solution, a fully integrated AI platform, and announced that it will be available from next year onwards.
[15]
AMD's CEO Lisa Su Believes AI Data Center Accelerator Market Will Scale Up to $500 Billion By 2028, Driven By Demand For Inferencing
AMD's CEO has revealed massive optimism about the future of the data center segment, claiming that the demand for AI accelerators will only grow. AMD claims that there isn't enough compute available in the market to process all the evolving use cases of AI, and says the markets should anticipate the firm's AI/DC revenue to keep growing. At the Advancing AI keynote, AMD's CEO Lisa Su revealed that the data center accelerator market is growing at a whopping 60% CAGR, and this figure is expected to remain steady over the upcoming years, which puts the valuation of the AI accelerator segment at $500 billion by 2028, opening up countless opportunities, not just for AMD, but for competitors like NVIDIA as well. AI has a lot more room to grow, and by the looks of it, several new prospects are emerging for Big Tech. The AI accelerator market will grow over time because artificial intelligence isn't just limited to model training now. The technology has adopted multiple use cases that demand computational power, which AI GPUs drive. AMD's CEO says that AI has scaled beyond data centers and is used in cloud applications, edge AI, and client AI. All of these fields require accelerators to create the necessary computing power. As for which firm will capitalize on the accelerator demand, the competition is stepping up, especially after AMD's recent announcements. AMD has announced that it is specifically focusing on three strategies to broaden its AI portfolio, notably creating leadership compute engines, an open ecosystem, and full-stack solutions, to ensure that its customers get everything by adopting Team Red's AI stack. On the compute engine side, AMD launched its latest Instinct MI350 AI lineup, equipped with the brand-new CDNA 4 architecture based on TSMC's 3nm process node. The chips come with a massive HBM3E memory stack and feature up to 1400W of TDP with the flagship model, the MI355X. 
AMD says that it has reached parity with NVIDIA's Blackwell in terms of performance. Similarly, on the software ecosystem side of things, AMD revealed the new ROCm 7 software stack, which includes enhanced frameworks such as vLLM v1, llm-d, and SGLang, along with a range of serving optimizations. Team Red is shaping up to show an aggressive approach in the AI segment, rivaling NVIDIA, which has maintained a stronghold over the market for several years now.
[16]
AMD Challenges Nvidia's AI Dominance With New Helios Server As OpenAI CEO Sam Altman Confirms ChatGPT Will Use Lisa Su-Led Tech Giant's Latest Chips: 'Future Of AI Is Not Going To Be Built By Any One Company' - Intel (NASDAQ:INTC), Advanced Micro Devices (NASDAQ:AMD)
On Thursday, Advanced Micro Devices, Inc. (AMD) unveiled a new server, signaling a direct challenge to Nvidia Corporation (NVDA). What Happened: At a developer conference called "Advancing AI" in San Jose, AMD CEO Lisa Su introduced the Helios AI server, set to launch in 2026. Each Helios unit will contain 72 MI400 chips, directly rivaling Nvidia's NVL72 system. The shift reflects a change in competition among AI chipmakers like Nvidia, moving beyond selling standalone chips to offering complete server systems containing dozens or even hundreds of processors, all integrated with networking components from the same vendor. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said. AMD also said that Helios' networking standards would be openly shared with competitors like Intel Corporation (INTC). OpenAI CEO Sam Altman joined Su onstage and said that ChatGPT would use AMD's MI450 chips, stating, "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch." Executives from Meta Platforms Inc. (META), xAI, and Oracle Corporation (ORCL) took the stage to highlight how they're leveraging AMD processors in their operations. Why It's Important: Last month, Bank of America analyst Vivek Arya maintained a Buy rating on AMD with a $130 price target, citing the company's gains in server and PC CPU market share, growing AI opportunities and multi-year contracts in the Middle East. While Nvidia and custom chips are expected to lead the AI accelerator market, Arya sees AMD capturing a 3-4% share of the $300-$400 billion market. He highlighted AMD's strategic acquisitions, software improvements and recognition from companies like Oracle and xAI. 
Arya also forecasted up to $6.6 billion in additional revenue across key segments by 2027. Price Action: AMD shares have declined 1.77% year-to-date and are down 25.89% over the past 12 months. On Thursday, the stock fell 2.18%, closing at $118.50, according to Benzinga Pro. Benzinga's Edge Stock Rankings indicate AMD continues to show strong upward momentum in the short and medium term, but trends downward over the long term.
[17]
AMD 'Serious AI Contender' With Sights On Over $500 Billion TAM, Billions In Revenue: Analyst - Advanced Micro Devices (NASDAQ:AMD)
Wall Street is holding firm on its ratings for Advanced Micro Devices (AMD) after Thursday's Advancing AI event in San Jose. AMD has garnered positive reviews from Wall Street following the AI event, with analysts from Bank of America Securities (BofA), Rosenblatt, and Benchmark Equity Research all reiterating Buy ratings. The event showcased AMD's aggressive push into the artificial intelligence market, highlighted by the official launch of its MI350 series of accelerators and a detailed roadmap for future generations, positioning the company as a credible competitor to Nvidia. Rosenblatt analyst Kevin Cassidy maintained a Buy rating on AMD with a $200 price forecast. Benchmark analyst Cody Acree reiterated a Buy rating and a $170 price forecast. BofA Securities analyst Vivek Arya also maintained a Buy rating, with a price forecast of $130. A clear consensus among the analysts is that AMD is successfully executing on its AI strategy, with a rapidly maturing product ecosystem and growing customer adoption. The event featured endorsements from major industry players, including Microsoft Corp. (MSFT), Meta (META), OpenAI, and Oracle Corp. (ORCL), bolstering confidence in AMD's ability to capture a significant share of the burgeoning AI market. CEO Lisa Su began the AI keynote by discussing the demand surge for inference compared to nine months ago, Kevin Cassidy noted. The clear and aggressive roadmap, with the MI350 series available now, the MI400 in 2026, and the MI500 in 2027, provides visibility and demonstrates AMD's commitment to rapid innovation, Cody Acree noted. "AMD made several significant announcements, the most notable in our opinion was the official launch of its much-anticipated Instinct MI350 Series, which is said to be in production and shipping to lead partners, with its volume revenue ramp expected to begin in the third quarter," Acree mentioned in the report. 
The company announced rack-scale systems for MI350, MI400, and MI500 accelerators, which combine Instinct GPUs, EPYC CPUs, Pensando NICs, and ROCm software to compete directly with Nvidia Corp.'s (NVDA) NVL72 systems. It further announced its latest ROCm 7 software platform, AMD's answer to Nvidia's CUDA platform. AMD is now estimating its AI accelerator TAM to exceed $500 billion, with a greater than 60% CAGR, versus its prior estimate of $500 billion. The company also announced that with its new Instinct MI350 Series, it has exceeded its five-year goal set in 2021 to improve the energy efficiency of AI training and high-performance computing nodes by 30x by 2025, ultimately delivering a 38x improvement. With this achievement, AMD has now set a new five-year goal to deliver a 20x increase in rack-scale energy efficiency by 2030 compared to 2024. With the acquisition of ZT Systems, AMD can assemble all components for rack infrastructure and is planning multiple generations. Both Benchmark's Cody Acree and Rosenblatt's Kevin Cassidy are projecting second-quarter revenue of $7.4 billion and earnings per share (EPS) of $0.47. BofA Securities emphasized AMD's "continued execution in its AI roadmap" and the "customer/ecosystem proliferation." Arya noted that the new MI355X accelerator is roughly on par with Nvidia's B200, signaling a competitive performance landscape. While an official announcement from Amazon Web Services (AWS) was absent, BofA Securities estimates "ongoing engagement" and is comfortable with its forecast for AMD to secure a low-to-mid single-digit percentage of the AI market share in 2025 and 2026. Cassidy was particularly impressed by AMD's commitment to an "open architecture," which the analyst believes will be a key differentiator. This strategy encompasses open software with ROCm, scalable networks, and support for various large language models (LLMs). 
Cassidy affirmed the view that AMD has firmly established itself as a serious contender in the AI compute space. The analyst highlighted AMD's alignment with a broad range of influential partners, including xAI and its engagement with 40 sovereign AI initiatives, as a driver for "billions of profitable revenue." Cassidy recommends owning AMD shares for its leadership in AI CPUs and its "fast follower" position in AI accelerators. Price Action: AMD stock is trading lower by 0.96% to $117.36 at last check Friday.
[18]
AMD Expected to Break NVIDIA's AI Monopoly With Next-Gen Instinct MI500 Accelerators & EPYC "Verano" CPUs, Set to Compete Against Vera Rubin Lineup
AMD's latest moves in the AI market are too big to ignore, as the firm has now decided to challenge NVIDIA with its highly capable offerings, expanding to rack-scale solutions. AMD's Upcoming Instinct MI500 Accelerators to Challenge NVIDIA's Powerful Rubin GPUs, Competing Head-to-Head For those unaware, at the Advancing AI event, AMD laid out its AI roadmap, especially the upcoming architectures being introduced by the firm. Over the past few years, NVIDIA has completely dominated the AI hardware segment with its aggressive offerings, which have scaled effectively into rack-scale solutions, bringing in enormous compute performance. However, AMD is set to step up competition with its newly announced Instinct MI500 accelerators and EPYC Verano CPUs, which reportedly use TSMC's N2P process. One of the more exciting elements of the Instinct MI500 is that it will be the company's answer to NVIDIA's Rubin architecture, and while the specifics of the series aren't disclosed yet, we do know that the accelerators will utilize TSMC's N2P process, along with the newest packaging methods, like CoWoS-L. To support the latest accelerators, AMD is set to offer its next-gen EPYC "Verano" CPUs, which will also be built on the high-end 2nm process and will likely utilize either an upgraded version of Zen 6 or the next-gen Zen 7 core architecture. So NVIDIA's Vera Rubin now has a serious competitor, which will likely reach performance parity, but this isn't the only trick AMD has up its sleeve. The company has revealed that it will specifically focus on rack-scale AI solutions, part of which was announcing the new "Helios" AI server rack, which will be built on the Instinct MI400 AI accelerators and EPYC "Venice" CPUs, offering performance similar to the Rubin NVL144 AI racks. With AMD expanding its rack-scale offerings, the company has certainly positioned its AI arsenal to be a direct and capable alternative to NVIDIA's. 
AMD approached the AI market aggressively at this year's Advancing AI event, and the company's extensive product roadmap shows that Team Red is ready to break NVIDIA's long-standing monopoly. The one drawback for AMD is that it is currently on an annual product cadence while its competitor runs on roughly a six-to-eight-month one, leaving AMD at a disadvantage. On the other hand, the yearly cadence gives AMD time to ensure its AI solutions can be deployed without the issues that crop up in unrefined architectures.
[19]
AMD Calls OpenAI 'Early Design Partner' For MI450. Sam Altman Is 'Extremely Excited.'
Sam Altman's personal endorsement of AMD's upcoming data center GPU, which CEO Lisa Su says will best Nvidia's fastest AI chips next year, serves as a major boost for the company. Its rival, Nvidia, owes a good deal of the riches it has made over the past few years to OpenAI. AMD CEO Lisa Su called OpenAI a customer and "very early design partner" for the chip designer's Instinct MI450 GPU that she said will usurp Nvidia's fastest AI chips next year. Near the end of her Advancing AI keynote in San Jose, Calif., on Thursday, Su disclosed that the ChatGPT behemoth has given the company "significant feedback on the requirements for next-generation training and inference" with regard to the MI450. She then brought out on stage OpenAI CEO and co-founder Sam Altman, who said he is "extremely excited for the MI450." "The memory architecture is great for inference. I believe it can be an incredible option for training as well," Altman told Su. "When you first started telling me what you're thinking about for the specs, I was like, there's no way. That just sounds totally crazy. It's too big. But it's really been so exciting to see you all get close to delivery on this. I think it's going to be an amazing thing," he added. Altman's personal endorsement of AMD's upcoming data center GPU, the first to power a server rack designed by AMD, served as a major boost for the company. Its rival, Nvidia, owes a good deal of the riches it has made over the past few years to OpenAI, which built ChatGPT using Nvidia GPUs and helped kick off insatiable demand for such products. AMD also received on-stage endorsements for its Instinct GPUs from executives at Microsoft, Meta, Cohere and Oracle Cloud Infrastructure on Thursday. 
As AMD revealed on Thursday, the MI400 series will pack 432 GB of HBM4 memory, which it said will give the GPU 50 percent more memory capacity and bandwidth than Nvidia's Vera Rubin platform while offering roughly the same compute performance. Seventy-two of AMD's MI450 GPUs will go into its "Helios" server rack, which Su said the company "designed from the ground up as a rack-scale solution." "When Helios launches in 2026, we believe it'll set a new benchmark for AI at scale," she said. Altman said OpenAI is facing a substantial need for more computing power due to its shift to reasoning models, which has "put pressure on model efficiency and long, complex rollouts," in part because of the lengthy responses generated by such models. "We need tons of compute, tons of memory, tons of CPUs as well. And our infrastructure ramp over the last year and what we're looking over the next year has just been a crazy, crazy thing to watch," Altman said. Su said AMD has collaborated with OpenAI over the last few years, particularly working together in conjunction with Microsoft Azure, which has been an important cloud partner to both companies. That relationship eventually evolved into OpenAI becoming a design partner for AMD with what is now known as the MI450 GPU series. "One of the things that really sticks in my mind is when we sat down with your engineers, you were like, 'Whatever you do, just give us lots and lots of flexibility because things change so much.' And that framework of working together has been phenomenal," she said.
[20]
AMD AI GPUs Can Potentially Haul In Between $10 Billion And $12 Billion In 2026
This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy. AMD's Advancing AI event this week has attracted a number of positive commentaries from Wall Street analysts, despite the glaring absence of a new, major customer. To wit, AMD has now officially unveiled its MI350 series GPUs, which are based on its 4th-gen Instinct architecture and will become available in the third quarter of 2025. According to AMD, these GPUs are capable of delivering a 4x increase in AI compute power and a 35x increase in inferencing capacity. The company has launched the 7th iteration of ROCm (Radeon Open Compute), an open-source platform that can be used to tweak the performance of AMD's GPUs. The chipmaker claims ROCm 7 delivers an average of 3.5x and 3x performance improvement in inference and training, respectively, over its previous iteration. AMD has also unveiled its Helios AI rack solution, which combines MI400 GPUs with Zen-6-based Venice CPUs and Vulcano NICs. This platform is expected to launch in 2026. Finally, AMD also teased the successor to Helios, which will launch in 2027 and leverage MI500 GPUs, Verano CPUs, and Vulcano NICs. This brings us to the crux of the matter. Cantor Fitzgerald analyst C.J. Muse now sees AMD hauling in $6 billion in AI revenues in H2 2025, following the availability of MI350 GPUs in Q3. The analyst goes on to note: "If AMD is able to scale its system-level solutions on time and without issues, like those seen at NVIDIA, we believe there could be considerable upside to our CY26 estimates for Data Center GPU (we currently model $8B but see upside potential for $10-12B)." Muse does concede that a lack of announcement vis-à-vis new customers likely played a role in the underwhelming price action around AMD shares in the aftermath of the AI-focused event. 
On the other hand, Bernstein analyst Stacy Rasgon was decidedly less bullish on AMD after the event, going so far as to note: "Similar to last year, not bad but no huge surprises ..." Rasgon points out the lack of announcement on new customers, but appreciates AMD's suggestion of a "material inflection in inference, likely growing at a >80% CAGR through the time period."
[21]
AMD turns to AI startups to inform chip, software design
SAN JOSE -- Advanced Micro Devices has forged close ties to a batch of artificial intelligence startups as part of the company's effort to bolster its software and forge superior chip designs. As AI companies seek alternatives to Nvidia's chips, AMD has begun to expand its plans to build a viable competing line of hardware, acquiring companies such as server maker ZT Systems in its quest to achieve that goal. But to build a successful line of chips also requires a powerful set of software to efficiently run the programs built by AI developers. AMD has acquired several small software companies in recent weeks in a bid to boost its talent, and it has been working to beef up its set of software, broadly known as ROCm. "This will be a very thoughtful, deliberate, multi-generational journey for us," said Vamsi Boppana, senior vice president of AI at AMD. AMD has committed to improve its ROCm and other software, which is a boon to customers such as AI enterprise startup Cohere, as it results in speedy changes and the addition of new features. Cohere is focused on building AI models that are tailored for large businesses versus the foundational AI models that companies like OpenAI and others target. AMD has made important strides in improving its software, Cohere CEO Aidan Gomez said in an interview with Reuters. Changing Cohere's software to run on AMD chips was a process that previously took weeks and now happens in only "days," Gomez said. Gomez declined to disclose exactly how much of Cohere's software relies on AMD chips but called it a "meaningful segment of our compute base" around the world. OpenAI has had significant influence on the design of the forthcoming MI450 series of AI chips, said Forrest Norrod, an executive vice president at AMD. AMD's MI400 series of chips will be the basis for a new server called "Helios" that the company plans to release next year. 
Nvidia too has engineered whole servers in part because AI computations require hundreds or thousands of chips strung together. OpenAI's Sam Altman appeared on stage at AMD's Thursday event in San Jose, and discussed the partnership between the two companies in broad terms. Norrod said that OpenAI's requests had a big influence on how AMD designed the MI450 series memory architecture and how the hardware can scale up to thousands of chips necessary to build and run AI applications. The ChatGPT creator also influenced what kinds of mathematical operations the chips are optimized for. "(OpenAI) has given us a lot of feedback that, I think, heavily informed our design," Norrod said.
[22]
AMD unveils AI server as OpenAI taps its newest chips
SAN JOSE -- Advanced Micro Devices CEO Lisa Su on Thursday unveiled a new artificial intelligence server for 2026 that aims to challenge Nvidia's flagship offerings as OpenAI's CEO said the ChatGPT creator would adopt AMD's latest chips. AMD shares were down about 2% after the company announced the news at a developer conference in San Jose, California, called "Advancing AI." Su took the stage to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. The MI400 series of chips will be the basis of a new server called "Helios" that AMD plans to release next year. The move comes as the competition between Nvidia and other AI chip firms has shifted away from selling individual chips to selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company. During its keynote presentation, AMD said that many aspects of the Helios servers - such as the networking standards - would be made openly available and shared with competitors such as Intel. The move was a direct swipe at market leader Nvidia, which uses proprietary technology called NVLink to string together its chips but has recently started to license that technology as pressure mounts from rivals. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said. Su was joined onstage by OpenAI's Sam Altman, who said his company is using AMD's MI300X and MI450 chips. "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch," Altman said. During her speech, executives from billionaire Elon Musk-owned xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. Crusoe, a cloud provider that specializes in AI, told Reuters it is planning to buy $400 million of AMD's new chips. 
AMD's Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips. AMD has struggled to siphon off a portion of the quickly growing market for AI chips from the dominant Nvidia. But the company has made a concerted effort to improve its software and produce a line of chips that rival Nvidia's performance. AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch new complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company had acquired 25 companies in the past year that were related to the company's AI plans. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO. AMD's software called ROCm has struggled to gain traction against Nvidia's CUDA, which is seen by some industry insiders as a key part of protecting the company's dominance. When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.
[23]
AMD unveils new AI accelerators and rack-scale solutions By Investing.com
SANTA CLARA - Advanced Micro Devices, Inc. (NASDAQ:AMD), a $193.56 billion market cap semiconductor giant with 21.71% revenue growth in the last twelve months, introduced its new Instinct MI350 Series accelerators and previewed its next-generation "Helios" AI rack design at its Advancing AI event on Thursday. According to InvestingPro analysis, AMD maintains its position as a prominent player in the Semiconductors industry, with robust financial health scores and significant growth potential. The company announced that the MI350 Series GPUs deliver a 4x generation-on-generation AI compute increase and a 35x generational leap in inferencing capabilities. The MI355X variant reportedly generates up to 40% more tokens-per-dollar compared to competing solutions. AMD also demonstrated its open-standards rack-scale AI infrastructure, which is already being deployed by Oracle Cloud Infrastructure. The company said this infrastructure, featuring MI350 Series accelerators, 5th Gen EPYC processors, and Pensando Pollara NICs, will be broadly available in the second half of 2025. Looking ahead, AMD previewed its next-generation AI rack called Helios, which will incorporate MI400 Series GPUs expected to deliver up to 10x more performance for inference on Mixture of Experts models compared to the previous generation. The company announced that seven of the 10 largest model builders and AI companies are running production workloads on Instinct accelerators, including Meta, OpenAI, Microsoft, and xAI. With a strong gross profit margin of 53.58% and healthy current ratio of 2.8, AMD appears well-positioned to support its expanding AI initiatives. For deeper insights into AMD's financial metrics and 13 additional ProTips, consider exploring InvestingPro's comprehensive analysis tools. AMD also released ROCm 7, the latest version of its open-source AI software stack, and announced the broad availability of the AMD Developer Cloud for the global developer community. 
In terms of energy efficiency, AMD reported that the Instinct MI350 Series exceeded the company's five-year goal to improve AI training and high-performance computing node efficiency by 30x, ultimately delivering a 38x improvement. The company set a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year. This information is based on a press release statement from AMD. The company's stock, which has shown significant volatility with a beta of 1.99, currently trades at a premium to its InvestingPro Fair Value, reflecting market optimism about its AI initiatives. Investors seeking detailed valuation analysis and expert insights can access AMD's full Pro Research Report, available exclusively on InvestingPro, along with reports for 1,400+ other top US stocks. In other recent news, Advanced Micro Devices (AMD) has unveiled a new lineup of AI chips, the MI350 and MI400 series, aimed at challenging Nvidia's dominance in the AI chip market. This development aligns with AMD's strategy to capture a larger share of the expanding AI sector. The company has also completed its acquisition of ZT Systems, positioning itself to offer comprehensive AI systems similar to Nvidia's offerings. KeyBanc has maintained its Sector Weight rating on AMD stock, adjusting earnings estimates due to anticipated charges, including a significant write-off of the MI308. Cantor Fitzgerald has raised its price target for AMD to $140, citing optimism around the company's AI prospects and upcoming product releases. Similarly, Citi has increased its price target to $120, maintaining a Neutral rating, and anticipates AMD to highlight new customer partnerships at its upcoming AI event. Wells Fargo has reiterated an Overweight rating with a $120 price target, noting AMD's strategic partnership with Sanmina and the focus on rack-scale AI solutions. 
These recent developments reflect AMD's efforts to strengthen its position in the AI market and its ongoing strategic initiatives.
[24]
AMD gains on Nvidia? Lisa Su reveals new chips in heated AI inference race By Investing.com
Investing.com -- Advanced Micro Devices Inc (NASDAQ:AMD) made an aggressive bid for dominance in AI inference at its Advancing AI event Thursday, unveiling new chips that directly challenge NVIDIA Corporation's (NASDAQ:NVDA) supremacy in the data center GPU market. AMD claims its latest Instinct MI355X accelerators surpass Nvidia's most advanced Blackwell GPUs in inference performance while offering a significant cost advantage, a critical selling point as hyperscalers look to scale generative AI services affordably. The MI355X, which has just begun volume shipments, delivers a 35-fold generational leap in inference performance and, according to AMD, up to 40% more tokens-per-dollar compared to Nvidia's flagship chips. That performance boost, coupled with lower power consumption, is designed to help AMD undercut Nvidia's offerings in total cost of ownership at a time when major AI customers are re-evaluating procurement strategies. "What has really changed is the demand for inference has grown significantly," AMD CEO Lisa Su said at the event in San Jose. "It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress." AMD's argument hinges not just on silicon performance, but on architecture and economics. By pairing its GPUs with its own CPUs and networking chips inside open "rack-scale" systems, branded Helios, AMD is building full-stack solutions to rival Nvidia's proprietary end-to-end ecosystem. These systems, launching next year with the MI400 series, were designed to enable hyperscale inference clusters while reducing energy and infrastructure costs. Su highlighted how companies like OpenAI, Meta Platforms Inc (NASDAQ:META), and Microsoft Corporation (NASDAQ:MSFT) are now running inference workloads on AMD chips, with OpenAI CEO Sam Altman confirming a close partnership on infrastructure innovation. "It's gonna be an amazing thing," Altman said during the event. 
"When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy." Oracle Corporation (NYSE:ORCL) Cloud Infrastructure intends to offer massive clusters of AMD chips, with plans to deploy up to 131,072 MI355X GPUs, positioning AMD as a scalable alternative to Nvidia's tightly integrated, and often more expensive, solutions. AMD officials emphasized the cost benefits, asserting that customers could achieve double-digit percent savings on power and capital expenditures when compared with Nvidia's GPUs. Despite the positive news, AMD shares were down roughly 2% ahead of market close. Wall Street remains cautious, but AMD's moves suggest it is committed to challenging Nvidia's leadership not only with performance parity, but also with a differentiated value and systems strategy. While Nvidia still commands more than 90% of the data center AI chip market, AMD's targeted push into inference, where workloads demand high efficiency and lower costs, marks a strategic front in the battle for AI dominance. With generative AI models driving a surge in inference demand across enterprises, AMD is betting that performance per dollar will matter more than ever.
AMD reveals its new Instinct MI350 and MI400 series AI chips, along with a comprehensive AI roadmap spanning GPUs, networking, software, and rack architectures, in a bid to compete with Nvidia in the rapidly growing AI chip market.
Advanced Micro Devices (AMD) has made a significant leap in the artificial intelligence (AI) chip market, unveiling its next-generation AI chips and a comprehensive roadmap that spans GPUs, networking, software, and rack architectures. This move is seen as a direct challenge to Nvidia's dominance in the rapidly growing AI chip sector 12.
Source: Reuters
At the heart of AMD's announcement are the Instinct MI350 series GPUs, including the MI355X, built on a 3nm process with up to 288 GB of HBM3E memory 3. AMD claims these chips not only match but potentially surpass Nvidia's Blackwell B200 performance in certain benchmarks. The company boasts a 4x improvement in AI performance and a staggering 35x generational improvement in inferencing compared to the previous generation 5.
AMD also provided a glimpse into the future with the MI400 series, scheduled for release in 2026, which will offer up to 432 GB of HBM4 memory and enhanced bandwidth capabilities crucial for running larger AI models 4.
Source: TechSpot
AMD's strategy extends beyond just chips. The company also introduced its open-standards rack-scale AI infrastructure, combining MI350 series accelerators with 5th Gen EPYC processors and Pensando Pollara NICs, as well as the Helios rack, which pairs MI400 GPUs with Zen 6-based Venice CPUs and Vulcano NICs.
AMD has also focused on software development, releasing version 7 of its open-source ROCm software stack 4. The company has forged partnerships with major AI players, including OpenAI, Meta, Microsoft, and Oracle, with CEO Lisa Su stating that seven of the ten largest model builders and AI companies use AMD's Instinct accelerators 15.
Source: Analytics India Magazine
Despite these advancements, AMD's market capitalization remains significantly lower than Nvidia's, standing at $192.14 billion as of the announcement 5. However, the company's aggressive roadmap and partnerships with key AI players signal a strong intent to capture a larger share of the AI chip market, which AMD expects to exceed $500 billion by 2028 3.
AMD has also emphasized its commitment to energy efficiency, claiming to have surpassed its five-year goal of a 30x improvement in AI training and high-performance computing node efficiency, ultimately delivering a 38x gain 5.
As the AI chip war heats up, AMD's comprehensive approach to AI infrastructure and its partnerships with major tech companies position it as a serious contender in the market. The coming years will likely see intense competition between AMD and Nvidia, potentially driving further innovation in AI chip technology.