10 Sources
[1]
AMD turns to AI startups to inform chip, software design
SAN JOSE, June 13 (Reuters) - Advanced Micro Devices has forged close ties to a batch of artificial intelligence startups as part of the company's effort to bolster its software and forge superior chip designs. As AI companies seek alternatives to Nvidia's chips, AMD has begun to expand its plans to build a viable competing line of hardware, acquiring companies such as server maker ZT Systems in its quest to achieve that goal. But to build a successful line of chips also requires a powerful set of software to efficiently run the programs built by AI developers. AMD has acquired several small software companies in recent weeks in a bid to boost its talent, and it has been working to beef up its set of software, broadly known as ROCm. "This will be a very thoughtful, deliberate, multi-generational journey for us," said Vamsi Boppana, senior vice president of AI at AMD. AMD has committed to improve its ROCm and other software, which is a boon to customers such as AI enterprise startup Cohere, as it results in speedy changes and the addition of new features. Cohere is focused on building AI models that are tailored for large businesses versus the foundational AI models that companies like OpenAI and others target. AMD has made important strides in improving its software, Cohere CEO Aidan Gomez said in an interview with Reuters. Changing Cohere's software to run on AMD chips was a process that previously took weeks and now happens in only "days," Gomez said. Gomez declined to disclose exactly how much of Cohere's software relies on AMD chips but called it a "meaningful segment of our compute base" around the world. OPENAI INFLUENCE OpenAI has had significant influence on the design of the forthcoming MI450 series of AI chips, said Forrest Norrod, an executive vice president at AMD. AMD's MI400 series of chips will be the basis for a new server called "Helios" that the company plans to release next year. 
Nvidia, too, has engineered whole servers, in part because AI computations require hundreds or thousands of chips strung together. OpenAI's Sam Altman appeared on stage at AMD's Thursday event in San Jose and discussed the partnership between the two companies in broad terms.

Norrod said that OpenAI's requests had a big influence on how AMD designed the MI450 series memory architecture and how the hardware can scale up to the thousands of chips necessary to build and run AI applications. The ChatGPT creator also influenced what kinds of mathematical operations the chips are optimized for. "(OpenAI) has given us a lot of feedback that, I think, heavily informed our design," Norrod said.

Reporting by Max A. Cherney in San Jose; Editing by Shri Navaratnam

Max A. Cherney is a correspondent for Reuters based in San Francisco, where he reports on the semiconductor industry and artificial intelligence. He joined Reuters in 2023 and previously worked for Barron's magazine and its sister publication, MarketWatch. Cherney graduated from Trent University with a degree in history.
[2]
AMD CEO unveils new AI chips
SAN JOSE, June 12 (Reuters) - Advanced Micro Devices (AMD.O) CEO Lisa Su showed off a new crop of artificial intelligence chips that will compete with the flagship processors designed by Nvidia (NVDA.O). AMD shares were roughly flat in early afternoon trading.

Su took the stage to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. During her speech, executives from xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips.

AMD has struggled to siphon off a portion of the quickly growing market for artificial intelligence chips from the dominant Nvidia. But the company has made a concerted effort to improve its software and produce a line of chips that rivals Nvidia's in performance.

Thursday's event, called "Advancing AI," will focus on AMD's data center chips and other hardware. AMD completed the acquisition of server builder ZT Systems in March; as a result, AMD is expected to launch complete AI systems, similar to several of the server-rack-sized products Nvidia produces.

Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company had acquired 25 companies in the past year related to its AI plans. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO.

AMD's software, called ROCm, has struggled to gain traction against Nvidia's CUDA, which some industry insiders see as a key part of protecting Nvidia's dominance.
When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.

Reporting by Max A. Cherney in San Jose, Stephen Nellis in San Francisco and Arsheeya Bajwa in Bengaluru; Editing by Leslie Adler and Marguerita Choy
[3]
AMD reveals next-generation AI chips with OpenAI CEO Sam Altman
AMD's rack-scale technology also enables its latest chips to compete with Nvidia's Blackwell chips, which already come in configurations with 72 graphics processing units stitched together. Nvidia is AMD's only significant rival in big data center GPUs for developing and deploying AI applications. OpenAI -- a notable Nvidia customer -- has been giving AMD feedback on its MI400 roadmap, the chip company said.

With the MI400 chips and this year's MI355X chips, AMD plans to compete against Nvidia on price: a company executive told reporters on Wednesday that the chips will cost less to operate thanks to lower power consumption, and that AMD is undercutting Nvidia with "aggressive" prices.

So far, Nvidia has dominated the market for data center GPUs, partially because it was the first company to develop the kind of software needed for AI developers to take advantage of chips originally designed to display graphics for 3D games. Over the past decade, before the AI boom, AMD focused on competing against Intel in server CPUs.

Su said that AMD's MI355X can outperform Nvidia's Blackwell chips, despite Nvidia's "proprietary" CUDA software. "It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress," Su said.

AMD shares are flat so far in 2025, signaling that Wall Street doesn't yet see it as a major threat to Nvidia's dominance. Andrew Dieckmann, AMD's general manager for data center GPUs, said Wednesday that AMD's AI chips would cost less to operate and less to acquire. "Across the board, there is a meaningful cost of acquisition delta that we then layer on our performance competitive advantage on top of, so significant double-digit percentage savings," Dieckmann said.
Over the next few years, big cloud companies and countries alike are poised to spend hundreds of billions of dollars to build new data center clusters around GPUs in order to accelerate the development of cutting-edge AI models. That includes $300 billion this year alone in planned capital expenditures from megacap technology companies.

AMD expects the total market for AI chips to exceed $500 billion by 2028, although it hasn't said how much of that market it can claim -- Nvidia currently has over 90% of the market, according to analyst estimates. Both companies have committed to releasing new AI chips on an annual basis, rather than every two years, underscoring how fierce the competition has become and how important bleeding-edge AI chip technology is for companies like Microsoft, Oracle and Amazon.

AMD has bought or invested in 25 AI companies in the past year, Su said, including the purchase earlier this year of ZT Systems, a server maker that developed the technology AMD needed to build its rack-sized systems. "These AI systems are getting super complicated, and full-stack solutions are really critical," Su said.
[4]
AMD's new AI roadmap spans GPUs, networking, software, and rack architectures
Editor's take: In the ever-evolving world of GenAI, important advances are happening across chips, software, models, networking, and the systems that combine all these elements. That's what makes it so hard to keep up with the latest AI developments. The difficulty factor becomes even greater if you're a vendor building these kinds of products and working not only to keep up, but to drive those advances forward. Toss in a competitor that has virtually cornered the market - and in the process grown into one of the world's most valuable companies - and, well, things can appear pretty challenging.

That's the situation AMD found itself in as it entered its latest Advancing AI event. But rather than letting these potential roadblocks deter it, AMD made clear that it is inspired to expand its vision, its range of offerings, and the pace at which it delivers new products. From unveiling its Instinct MI400 GPU accelerators and next-generation "Vulcano" networking chips, to version 7 of its ROCm software and the debut of a new Helios rack architecture, AMD highlighted all the key aspects of AI infrastructure and GenAI-powered solutions. In fact, one of the first takeaways from the event was how far the company's reach now extends across all the critical parts of the AI ecosystem.

As expected, there was a great deal of focus on the official launch of the Instinct MI350 and the higher-wattage, faster-performing MI355X GPUs, which AMD had previously announced last year. Both are built on a 3nm process, feature up to 288 GB of HBM3E memory, and can be used in both liquid-cooled and air-cooled designs. According to AMD's testing, these chips not only match the performance levels of Nvidia's Blackwell B200, but even surpass them on certain benchmarks. In particular, AMD emphasized improvements in inferencing speed (over 3x faster than the previous generation), as well as cost per token (up to 40% more tokens per dollar vs. the B200, according to AMD).
AMD also provided more details on its next-generation MI400, scheduled for release next year, and even teased the MI500 for 2027. The MI400 will offer up to 432 GB of HBM4 memory, memory bandwidth of 19.6 TB/sec, and 300 GB/sec of scale-out bandwidth - all of which will be important both for running larger models and for assembling the kinds of large rack systems expected to be needed for next-generation LLMs.

Some of the more surprising announcements from the event focused on networking. First was a discussion of AMD's next-generation Pensando networking chip and a network interface card called the AMD Pensando Pollara 400 AI NIC, which the company claims is the industry's first shipping AI NIC. AMD is part of the Ultra Ethernet Consortium and, not surprisingly, the Pollara 400 uses the Ultra Ethernet standard. It reportedly offers 20% improvements in speed and 20x more capacity to scale than competitive cards using InfiniBand technology. As with its GPUs, AMD also announced its next-generation networking chip, codenamed "Vulcano," designed for large AI clusters. It will offer 800Gb/sec network speeds and up to 8x the scale-out performance for large groups of GPUs when it is released in 2026.

AMD also touted the new open-source Ultra Accelerator Link (UALink) standard for GPU-to-GPU and other chip-to-chip connections. A direct answer to Nvidia's NVLink technology, UALink is based on AMD's Infinity Fabric and matches the performance of Nvidia's technology while providing more flexibility by enabling connections between any company's GPUs and CPUs.

Putting all of these various elements together, arguably the biggest hardware news - both literally and figuratively - from the Advancing AI event was AMD's new rack architecture designs.
Large cloud providers, neocloud operators, and even some sophisticated enterprises have been moving toward complete rack-based solutions for their AI infrastructure, so it was not surprising to see AMD make these announcements - particularly after acquiring expertise from ZT Systems, a company that designs rack computing systems, earlier this year. Still, it was an important step to show a complete competitive offering, with even more advanced capabilities than Nvidia's NVL72, and to demonstrate how all the pieces of AMD's silicon solutions can work together.

In addition to showing systems based on its current 2025 chip offerings, AMD also unveiled its Helios rack architecture, coming in 2026. It will leverage a complete suite of AMD chips, including next-generation Epyc CPUs (codenamed Venice), Instinct MI400 GPUs, and the Vulcano networking chip. What's important about Helios is that it demonstrates AMD will not only be on equal footing with the next-generation Vera Rubin-based rack systems Nvidia has announced for next year, but may even surpass them.

In fact, AMD arguably took a page from the recent Nvidia playbook by offering a multi-year preview of its silicon and rack-architecture roadmaps, making it clear that it is not resting on its laurels but moving aggressively forward with critical technology developments. Importantly, it did so while touting what it expects will be equivalent or better performance from these new options. (Of course, all of these claims are based on estimates of expected performance, which could - and likely will - change for both companies.) Regardless of what the final numbers prove to be, the bigger point is that AMD is clearly confident enough in its current and future product roadmaps to take on the toughest competition. That says a lot.

As mentioned earlier, the key software story for AMD was the release of version 7 of its open-source ROCm software stack.
The company highlighted multiple performance improvements on inferencing workloads, as well as increased day-zero compatibility with many of the most popular LLMs. It also discussed ongoing work with other critical AI software frameworks and development tools, with a particular focus on enabling enterprises to use ROCm for their own in-house development efforts through ROCm Enterprise AI. On their own, some of these changes are modest, but collectively they show the clear software momentum AMD has been building. Strategically, this is critical, because competition against Nvidia's CUDA software stack continues to be the biggest challenge AMD faces in convincing organizations to adopt its solutions. It will be interesting to see how AMD integrates some of its recent AI software-related acquisitions - including Lamini, Brium, and Untether AI - into its range of software offerings.

One of the more surprising bits of software news from AMD was the integration of ROCm support into Windows and the Windows ML AI software stack. This helps make Windows a more useful platform for AI developers and potentially opens up new opportunities to better leverage AMD GPUs and NPUs for on-device AI acceleration.

Speaking of developers, AMD also used the event to announce its AMD Developer Cloud, which gives software designers a free resource (at least initially, via free cloud credits) to access MI300-based infrastructure and build applications with ROCm-based software tools. Again, a small but critically important step in demonstrating how the company is working to expand its influence across the AI software development ecosystem.

Clearly, the collective actions the company is taking are starting to make an impact. AMD welcomed a broad range of customers leveraging its solutions in a big way, including OpenAI, Microsoft, Oracle Cloud, HUMAIN, Meta, xAI, and many more.
They also talked about their work creating sovereign AI deployments in countries around the world. And ultimately, as the company said at the start of the keynote, it's all about continuing to build trust among its customers, partners and potential new clients. AMD has the benefit of being an extremely strong alternative to Nvidia - one that many in the market want to see increase its presence for the sake of competitive balance. Based on what was announced at Advancing AI, it looks like AMD is moving in the right direction.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on X @bobodtech.
[5]
AI chip war heats up as AMD unveils its Nvidia Blackwell competitor
AMD claims it has exceeded its energy efficiency goals and lays out bolder ones.

AMD has unveiled its Instinct MI350 Series GPUs, promising a staggering 4x improvement in AI performance compared with the previous generation of chips - enough to have Nvidia worried about the market dominance of its Blackwell chips. Company CEO Lisa Su also revealed details of the Helios AI Rack, which is to be built on next-generation Instinct MI400 Series GPUs as well as AMD EPYC "Venice" CPUs and AMD Pensando "Vulcano" NICs. The news came at AMD's Advancing AI 2025 conference, together with a series of other hardware, software and AI announcements.

Besides the 4x improvement in AI performance, AMD also boasts an eyewatering 35x generational improvement in inferencing, as well as price-performance gains unlocking 40% more tokens per dollar compared with its key like-for-like rival, the Nvidia B200. Despite Nvidia's market dominance, AMD proudly claims that seven in 10 of the largest model builders and AI companies use its Instinct accelerators, including Meta, OpenAI, Microsoft and xAI. The MI300X has been deployed for Llama 3 and Llama 4 inferencing at Meta, and for proprietary and open-source models on Azure, among others.

Besides performance, AMD is also homing in on its environmental goals, claiming that its MI350 Series GPUs exceeded the company's five-year goal of improving the energy efficiency of AI training and high-performance computing nodes by 30x - reaching a figure of 38x. By 2030, the company also wants to increase rack-scale energy efficiency by 20x compared with 2024, and it already predicts a 95% reduction in the electricity used for typical AI model training. Looking ahead, Instinct MI400 Series GPUs are expected to deliver up to 10x more performance running inference on Mixture of Experts models.

Despite the bold claims, AMD's market cap remains considerably lower than Nvidia's, at $192.14 billion at press time.
"AMD is driving AI innovation at an unprecedented pace, highlighted by the launch of our AMD Instinct MI350 series accelerators, advances in our next generation AMD 'Helios' rack-scale solutions, and growing momentum for our ROCm open software stack," said Su. "We are entering the next phase of AI, driven by open standards, shared innovation and AMD's expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI."
[6]
AMD's Su-premacy Begins | AIM
This year, AMD's Advancing AI event was on another level. The company made it clear it's no longer afraid of NVIDIA. It introduced the new Instinct MI350 Series GPUs, built on the CDNA 4 architecture, promising a fourfold generational improvement in AI compute and a 35x leap in inferencing performance. It also launched ROCm 7.0, its open software stack for GPU computing, and previewed the upcoming MI400 Series and Helios AI rack infrastructure.

The company said the MI350X and MI355X GPUs feature 288GB of HBM3E memory and offer up to 8TB/s of memory bandwidth. "MI355 delivers 35x higher throughput when running at ultra-low latencies, which is required for some real-time applications like code completion, simultaneous translation, and transcription," said AMD CEO Lisa Su. Su said that models like Llama 4 Maverick and DeepSeek R1 have seen triple the tokens per second on the MI355 compared to the previous generation, which leads to faster responses and higher user throughput. "The MI355 offers up to 40% more tokens per dollar compared to NVIDIA B200," she added.

Each MI355X platform can deliver up to 161 PFLOPs of FP4 performance using structured sparsity. The series supports both air-cooled (64 GPUs) and direct liquid-cooled (128 GPUs) configurations, offering up to 2.6 exaFLOPs of FP4/FP6 compute. The Instinct MI400 Series, expected in 2026, will feature up to 432GB of HBM4 memory and 19.6TB/s of bandwidth. It is set to deliver 40 PFLOPs of FP4 and 20 PFLOPs of FP8 performance.

Speaking about the company's open-source ROCm software, Vamsi Boppana, senior vice president of AMD's artificial intelligence group, said it now powers some of the largest AI platforms in the world, supporting major models like Llama and DeepSeek from day one, and delivering over 3.5x inference gains in the upcoming ROCm 7 release.
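The per-platform and per-rack FP4 figures above hang together arithmetically. A minimal back-of-the-envelope sketch, assuming eight GPUs per MI355X platform (a common OAM platform size; the article does not state the platform's GPU count):

```python
# Sanity-check AMD's quoted FP4 numbers: 161 PFLOPs per MI355X platform
# (with structured sparsity) and "up to 2.6 exaFLOPs" for a 128-GPU
# liquid-cooled configuration. GPUS_PER_PLATFORM is an assumption.

GPUS_PER_PLATFORM = 8          # assumed, not stated in the article
PLATFORM_FP4_PFLOPS = 161      # AMD's per-platform claim

per_gpu_pflops = PLATFORM_FP4_PFLOPS / GPUS_PER_PLATFORM  # ~20.1 PFLOPs/GPU

# Scale to the 128-GPU direct-liquid-cooled configuration
rack_exaflops = 128 * per_gpu_pflops / 1000               # PFLOPs -> exaFLOPs

print(f"per GPU: {per_gpu_pflops:.1f} PFLOPs FP4")
print(f"128-GPU config: {rack_exaflops:.2f} exaFLOPs")    # ~2.58, consistent with "up to 2.6"
```

Under that assumption, the numbers line up: roughly 20 PFLOPs per GPU, which scaled to 128 GPUs lands just under the quoted 2.6 exaFLOPs ceiling.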
He added that frequent updates, support for FP4 data types, and new algorithms like FAv3 are helping ROCm deliver better performance and push open-source frameworks like vLLM and SGLang ahead of closed-source options. "With over 1.8 million Hugging Face models running out of the box, industry benchmarks now in play, ROCm is not just catching up -- it's leading the open AI revolution," he added.

AMD is working with leading AI companies, including Meta, OpenAI, xAI, Oracle, Microsoft, Cohere, HUMAIN, Red Hat, Astera Labs and Marvell. Su said the company expects the market for AI processors to exceed $500 billion by 2028.

The event, which took place in San Jose, California, also saw OpenAI CEO Sam Altman share the stage with Su. "We are working closely with AMD on infrastructure for research and production. Our GPT models are running on MI300X in Azure, and we're deeply engaged in design efforts on the MI400 Series," Altman said. Meta, for its part, said its Llama 3 and Llama 4 inference workloads are running on MI300X and that it expects further improvements from the MI350 and MI400 Series. Oracle Cloud Infrastructure is among the first to adopt the new system, with plans to offer zettascale AI clusters comprising up to 131,072 MI355X GPUs. Microsoft confirmed that proprietary and open-source models are now running in production on Azure using the MI300X. Cohere said its Command models use the MI300X for enterprise inference. HUMAIN announced a partnership with AMD to build a scalable and cost-efficient AI platform using AMD's full compute portfolio.

AMD announced its new open-standard rack-scale infrastructure to meet the rising demands of agentic AI workloads, launching solutions that integrate Instinct MI350 GPUs, 5th Gen EPYC CPUs, and Pensando Pollara NICs. "We have taken the lead on helping the industry develop open standards, allowing everyone in the ecosystem to innovate and work together to drive AI forward. 
We utterly reject the notion that one company could have a monopoly on AI or AI innovation," said Forrest Norrod, AMD's executive vice president.

The company also previewed Helios, its next-generation rack platform built around the upcoming MI400 GPUs and Venice CPUs. Su said Venice is built on TSMC's 2-nanometer process, features up to 256 high-performance Zen 6 cores, and delivers 70% more compute performance than AMD's current-generation leadership CPUs. "Helios functions like a single, massive compute engine. It connects up to 72 GPUs with 260 terabytes per second of scale-up bandwidth, enabling 2.9 exaflops of FP4 performance," she said, adding that, compared to the competition, it supports 50% more HBM4 memory, memory bandwidth, and scale-out bandwidth. AMD's Venice CPUs bring up to 256 cores and higher memory bandwidth, while Vulcano AI NICs support 800G networking and UALink. "Choosing the right CPU gets the most out of your GPU," said Norrod.

Helios uses UALink to connect 72 GPUs as a unified system, offering open, vendor-neutral scale-up performance. Describing UALink as a key differentiator, Norrod said one of its most important features is that it's "an open ecosystem" -- a protocol that works across systems regardless of the CPU, accelerator, or switch brand. He added that AMD believes open interoperability accelerates innovation, protects customer choice, and still delivers leadership performance and efficiency.

As AI workloads grow in complexity and scale, AMD says a unified stack is necessary, combining high-performance GPUs, CPUs, and intelligent networking to support multi-agent systems across industries. The currently available solution supports up to 128 Instinct MI350 GPUs per rack with up to 36TB of HBM3E memory. The infrastructure is built on Open Compute Project (OCP) standards and Ultra Ethernet Consortium (UEC) compliance, allowing interoperability with existing infrastructure.
OCI will be among the first to adopt the MI355X-based rack-scale platform. "We will be one of the first to provide the MI355X rack-scale infrastructure using the combined power of EPYC, Instinct, and Pensando," said Mahesh Thiagarajan, EVP at OCI. Besides that, the new Helios rack solution, expected in 2026, brings tighter integration and higher throughput. It includes next-gen MI400 GPUs, offering up to 432GB of HBM4 memory and 40 petaflops of FP4 performance.
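The Helios figures quoted across these pieces can be cross-checked with simple multiplication; all inputs are AMD's stated per-GPU numbers, and the totals are just arithmetic:

```python
# Cross-check the Helios rack claims: 72 MI400-class GPUs per rack,
# each with 40 PFLOPs of FP4 and 432GB of HBM4 (AMD's stated figures).

GPUS_PER_RACK = 72
FP4_PFLOPS_PER_GPU = 40
HBM4_GB_PER_GPU = 432

rack_fp4_exaflops = GPUS_PER_RACK * FP4_PFLOPS_PER_GPU / 1000  # PFLOPs -> exaFLOPs
rack_hbm4_tb = GPUS_PER_RACK * HBM4_GB_PER_GPU / 1000          # GB -> TB

print(f"FP4 per rack: {rack_fp4_exaflops:.2f} exaFLOPs")  # 2.88, matching Su's "2.9 exaflops"
print(f"HBM4 per rack: {rack_hbm4_tb:.1f} TB")            # ~31.1 TB
```

The 72 x 40 PFLOPs product comes to 2.88 exaFLOPs, consistent with the "2.9 exaflops of FP4" Su quoted for Helios, and the same check on the current-generation rack (128 GPUs x 288GB of HBM3E) lands at roughly 36.9TB, consistent with the "up to 36TB" figure above.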
[7]
AMD Unveils Its Latest Chips, With ChatGPT Maker OpenAI Among Its Customers
AMD (AMD) unveiled its next-generation MI400 chips at its "Advancing AI" event Thursday. The chips aren't expected to launch until 2026, but they already have some high-profile customers, including OpenAI. OpenAI CEO Sam Altman joined AMD CEO Lisa Su onstage Thursday to highlight the ChatGPT developer's partnership with AMD on AI infrastructure and to announce that it will make use of the MI400 series. "When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy," Altman said. "It's gonna be an amazing thing."

AMD said it counts Meta (META), xAI, Oracle (ORCL), Microsoft (MSFT), Astera Labs (ALAB), and Marvell Technology (MRVL) among its partners as well. AMD showcased its AI server rack architecture at the event, which will combine MI400 chips into one larger system known as Helios. The company compared it to rival Nvidia's (NVDA) Vera Rubin, also expected in 2026. The event also brought the launch of AMD's Instinct MI350 Series GPUs, which the company claims offer four times more computing power than the previous generation. Shares of AMD slid about 2% Thursday, leaving the stock down just under 2% for 2025 so far.
[8]
AMD Unveils AI Server as OpenAI Taps Its Newest Chips
Advanced Micro Devices CEO Lisa Su on Thursday unveiled a new artificial intelligence server for 2026 that aims to challenge Nvidia's flagship offerings, as OpenAI's CEO said the ChatGPT creator would adopt AMD's latest chips. Su took the stage at a developer conference in San Jose, California, called "Advancing AI" to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. The MI400 series of chips will be the basis of a new server called "Helios" that AMD plans to release next year.

The move comes as the competition between Nvidia and other AI chip firms has shifted away from selling individual chips and toward selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company. The AMD Helios servers will have 72 of AMD's MI400 series chips, making them comparable to Nvidia's current NVL72 servers, AMD executives said.

During its keynote presentation, AMD said that many aspects of the Helios servers - such as the networking standards - would be made openly available and shared with competitors such as Intel. The move was a direct swipe at market leader Nvidia, which uses a proprietary technology called NVLink to string together its chips but has recently started to license that technology as pressure mounts from rivals. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said.

Su was joined onstage by OpenAI's Sam Altman. The ChatGPT creator is working with AMD on the firm's MI450 chips to improve their design for AI work. "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch," Altman said.
During her speech, executives from Elon Musk-owned xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. Crusoe, a cloud provider that specialises in AI, told Reuters it is planning to buy $400 million (roughly Rs. 3,440 crore) of AMD's new chips. Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips.

AMD shares ended 2.2 percent lower after the company's announcement. Kinngai Chan, an analyst at Summit Insights, said the chips announced on Thursday were not likely to immediately change AMD's competitive position. AMD has struggled to siphon off a portion of the quickly growing market for AI chips from the dominant Nvidia, but the company has made a concerted effort to improve its software and produce a line of chips that rivals Nvidia's in performance.

AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company has made 25 strategic investments in the past year related to its AI plans. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO.

AMD's software, called ROCm, has struggled to gain traction against Nvidia's CUDA, which some industry insiders see as a key part of protecting Nvidia's dominance. When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.

© Thomson Reuters 2025
[9]
AMD chief executive to unveil new AI chips
AMD CEO Lisa Su will unveil the MI400 AI chip series Thursday in San Jose, detailing plans to rival Nvidia with annual releases, improved software, and full AI systems. Recent acquisitions and hires underscore AMD's push for a stronger foothold in the AI chip market.

Advanced Micro Devices CEO Lisa Su is expected to take the stage on Thursday at a company event in San Jose, California, to discuss the company's plans for the artificial intelligence chips and systems it designs. AMD has struggled to siphon off a portion of the quickly growing market for artificial intelligence chips from the dominant Nvidia, but the company has made a concerted effort to improve its software and produce a line of chips that rivals Nvidia's in performance.

During Su's speech, which is set to begin at 9:30 am local time (1630 GMT), the CEO is expected to detail the company's forthcoming MI400 series of AI chips, set to launch next year. AMD has said it will match the annual release schedule that Nvidia began with its Blackwell series of chips. Thursday's event, called "Advancing AI," will focus on AMD's data center chips and other hardware.

AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO.

AMD's software, called ROCm, has struggled to gain traction against Nvidia's CUDA, which some industry insiders see as a key part of protecting Nvidia's dominance.
When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.
[10]
AMD unveils AI server as OpenAI taps its newest chips
AMD has unveiled its upcoming MI400-based "Helios" AI server, set for 2026, to rival Nvidia's dominance. CEO Lisa Su stressed open collaboration, with support from OpenAI, Meta, and xAI. Su was joined onstage by OpenAI's Sam Altman, who said his company is using AMD's MI300X and MI450 chips.

Advanced Micro Devices CEO Lisa Su on Thursday unveiled a new artificial intelligence server for 2026 that aims to challenge Nvidia's flagship offerings, as OpenAI's CEO said the ChatGPT creator would adopt AMD's latest chips. AMD shares were down about 2% after the company announced the news at a developer conference in San Jose, California, called "Advancing AI." Su took the stage to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. The MI400 series of chips will be the basis of a new server called "Helios" that AMD plans to release next year. The move comes as the competition between Nvidia and other AI chip firms has shifted away from selling individual chips to selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company. During its keynote presentation, AMD said that many aspects of the Helios servers - such as the networking standards - would be made openly available and shared with competitors such as Intel. The move was a direct swipe at market leader Nvidia, which uses proprietary technology called NVLink to string together its chips but has recently started to license that technology as pressure mounts from rivals. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said. Su was joined onstage by OpenAI's Sam Altman, who said his company is using AMD's MI300X and MI450 chips. "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch," Altman said.
During her speech, executives from billionaire Elon Musk-owned xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. Crusoe, a cloud provider that specializes in AI, told Reuters it is planning to buy $400 million of AMD's new chips. AMD's Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips. AMD has struggled to siphon off a portion of the quickly growing market for AI chips from the dominant Nvidia. But the company has made a concerted effort to improve its software and produce a line of chips that rival Nvidia's performance. AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch new complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company has made 25 strategic investments in the past year that were related to the company's AI plans. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO. AMD's software called ROCm has struggled to gain traction against Nvidia's CUDA, which is seen by some industry insiders as a key part of protecting the company's dominance. When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.
[11]
AMD Calls OpenAI 'Early Design Partner' For MI450. Sam Altman Is 'Extremely Excited.'
Sam Altman's personal endorsement of AMD's upcoming data center GPU, which CEO Lisa Su says will best Nvidia's fastest AI chips next year, serves as a major boost for the company. Its rival, Nvidia, owes a good deal of the riches it has made over the past few years to OpenAI. AMD CEO Lisa Su called OpenAI a customer and "very early design partner" for the chip designer's Instinct MI450 GPU that she said will usurp Nvidia's fastest AI chips next year. Near the end of her Advancing AI keynote in San Jose, Calif., on Thursday, Su disclosed that the ChatGPT behemoth has given the company "significant feedback on the requirements for next-generation training and inference" with regard to the MI450. She then brought out on stage OpenAI CEO and co-founder Sam Altman, who said he is "extremely excited for the MI450." "The memory architecture is great for inference. I believe it can be an incredible option for training as well," Altman told Su. "When you first started telling me what you're thinking about for the specs, I was like, there's no way. That just sounds totally crazy. It's too big. But it's really been so exciting to see you all get close to delivery on this. I think it's going to be an amazing thing," he added. Altman's personal endorsement of AMD's upcoming data center GPU, the first to power a server rack designed by AMD, served as a major boost for the company. Its rival, Nvidia, owes a good deal of the riches it has made over the past few years to OpenAI, which built ChatGPT using Nvidia GPUs and helped kick off insatiable demand for such products. AMD also received on-stage endorsements for its Instinct GPUs from executives at Microsoft, Meta, Cohere and Oracle Cloud Infrastructure on Thursday.
As AMD revealed on Thursday, the MI400 series will pack 432 GB of HBM4 memory, which it said will give the GPU 50 percent more memory capacity and bandwidth than Nvidia's Vera Rubin platform while offering roughly the same compute performance. Seventy-two of AMD's MI450 GPUs will go into its "Helios" server rack, which Su said the company "designed from the ground up as a rack-scale solution." "When Helios launches in 2026, we believe it'll set a new benchmark for AI at scale," she said. Altman said OpenAI is facing a substantial need for more computing power due to its shift to reasoning models, which has "put pressure on model efficiency and long, complex rollouts," in part because of the lengthy responses generated by such models. "We need tons of compute, tons of memory, tons of CPUs as well. And our infrastructure ramp over the last year and what we're looking at over the next year has just been a crazy, crazy thing to watch," Altman said. Su said AMD has collaborated with OpenAI over the last few years, particularly working together in conjunction with Microsoft Azure, which has been an important cloud partner to both companies. That relationship eventually evolved to OpenAI becoming a design partner for AMD with what is now known as the MI450 GPU series. "One of the things that really sticks in my mind is when we sat down with your engineers, you were like, 'Whatever you do, just give us lots and lots of flexibility because things change so much.' And that framework of working together has been phenomenal," she said.
[12]
AMD's CEO Lisa Su Believes AI Data Center Accelerator Market Will Scale Up to $500 Billion By 2028, Driven By Demand For Inferencing
AMD's CEO has expressed strong optimism about the future of the data center segment, arguing that demand for AI accelerators will only grow. AMD says there isn't enough compute available in the market to handle AI's evolving use cases, and that markets should expect the firm's AI/data center revenue to keep growing. At the Advancing AI keynote, AMD CEO Lisa Su said the data center accelerator market is growing at a whopping 60% CAGR, a rate expected to hold over the coming years, which puts the AI accelerator segment at $500 billion by 2028, opening up countless opportunities not just for AMD but for competitors like NVIDIA as well. AI has a lot more room to grow, and several new prospects are emerging for Big Tech. The accelerator market will grow over time because artificial intelligence is no longer limited to model training; the technology now spans multiple use cases that demand the computational power AI GPUs provide. Su says AI has scaled beyond data centers and is used in cloud applications, edge AI, and client AI, all of which require accelerators to supply the necessary computing power. As for which firm will capitalize on the accelerator demand, the competition is stepping up, especially after AMD's recent announcements. AMD is focusing on three strategies to broaden its AI portfolio: leadership compute engines, an open ecosystem, and full-stack solutions, so that customers get everything they need by adopting Team Red's AI stack. On the compute engine side, AMD launched its latest Instinct MI350 AI lineup, built on the brand-new CDNA 4 architecture and TSMC's 3nm process node. The chips come with a massive HBM3E memory stack, with the flagship model, the MI355X, drawing up to 1400W of TDP.
AMD says it has reached parity with NVIDIA's Blackwell in terms of performance. On the software side, AMD revealed the new ROCm 7 stack, with enhanced support for frameworks such as vLLM v1, llm-d, and SGLang, along with a range of serving optimizations. Team Red is taking an aggressive approach to the AI segment, challenging NVIDIA, which has maintained a stronghold on the market for several years now.
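The 60 percent CAGR figure compounds quickly, which is how a market in the tens of billions reaches roughly $500 billion by 2028. A short back-of-envelope sketch (the 2024 base-year market size used here is an illustrative assumption, not a number from the keynote):

```python
def project(base: float, cagr: float, years: int) -> float:
    """Project a market size forward at a constant compound annual growth rate."""
    return base * (1 + cagr) ** years

# Illustrative assumption: a ~$75B accelerator market in 2024, growing 60% a year.
size_2028 = project(75e9, 0.60, 4)
print(f"${size_2028 / 1e9:.0f}B")  # → $492B, in line with the ~$500B figure
```

At that rate the market multiplies by about 6.5x over four years, so the headline number is sensitive mostly to the assumed base-year size.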
[13]
AMD Challenges Nvidia's AI Dominance With New Helios Server As OpenAI CEO Sam Altman Confirms ChatGPT Will Use Lisa Su-Led Tech Giant's Latest Chips: 'Future Of AI Is Not Going To Be Built By Any One Company' - Intel (NASDAQ:INTC), Advanced Micro Devices (NASDAQ:AMD)
On Thursday, Advanced Micro Devices, Inc. AMD unveiled a new server, signaling a direct challenge to Nvidia Corporation NVDA. What Happened: At a developer conference called "Advancing AI" in San Jose, AMD CEO Lisa Su introduced the Helios AI server, set to launch in 2026. Each Helios unit will contain 72 MI400 chips, directly rivaling Nvidia's NVL72 system. The shift reflects a change in competition among AI chipmakers like Nvidia, moving beyond selling standalone chips to offering complete server systems containing dozens or even hundreds of processors, all integrated with networking components from the same vendor. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said. AMD also said that Helios' networking standards would be openly shared with competitors like Intel Corporation INTC. OpenAI CEO Sam Altman joined Su onstage and said that ChatGPT would use AMD's MI450 chips, stating, "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch." Executives from Meta Platforms Inc. META, xAI, and Oracle Corporation ORCL took the stage to highlight how they're leveraging AMD processors in their operations. Why It's Important: Last month, Bank of America analyst Vivek Arya maintained a Buy rating on AMD with a $130 price target, citing the company's gains in server and PC CPU market share, growing AI opportunities and multi-year contracts in the Middle East. While Nvidia and custom chips are expected to lead the AI accelerator market, Arya sees AMD capturing a 3-4% share of the $300-$400 billion market. He highlighted AMD's strategic acquisitions, software improvements and recognition from companies like Oracle and xAI.
Arya also forecasted up to $6.6 billion in additional revenue across key segments by 2027. Price Action: AMD shares have declined 1.77% year-to-date and are down 25.89% over the past 12 months. On Thursday, the stock fell 2.18%, closing at $118.50, according to Benzinga Pro. Benzinga's Edge Stock Rankings indicate AMD continues to show strong upward momentum in the short and medium term, but trends downward over the long term.
[14]
AMD unveils AI server as OpenAI taps its newest chips
SAN JOSE -- Advanced Micro Devices CEO Lisa Su on Thursday unveiled a new artificial intelligence server for 2026 that aims to challenge Nvidia's flagship offerings as OpenAI's CEO said the ChatGPT creator would adopt AMD's latest chips. AMD shares were down about 2% after the company announced the news at a developer conference in San Jose, California, called "Advancing AI." Su took the stage to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. The MI400 series of chips will be the basis of a new server called "Helios" that AMD plans to release next year. The move comes as the competition between Nvidia and other AI chip firms has shifted away from selling individual chips to selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company. During its keynote presentation, AMD said that many aspects of the Helios servers - such as the networking standards - would be made openly available and shared with competitors such as Intel. The move was a direct swipe at market leader Nvidia, which uses proprietary technology called NVLink to string together its chips but has recently started to license that technology as pressure mounts from rivals. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said. Su was joined onstage by OpenAI's Sam Altman, who said his company is using AMD's MI300X and MI450 chips. "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch," Altman said. During her speech, executives from billionaire Elon Musk-owned xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. Crusoe, a cloud provider that specializes in AI, told Reuters it is planning to buy $400 million of AMD's new chips.
AMD's Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips. AMD has struggled to siphon off a portion of the quickly growing market for AI chips from the dominant Nvidia. But the company has made a concerted effort to improve its software and produce a line of chips that rival Nvidia's performance. AMD completed the acquisition of server builder ZT Systems in March. As a result, AMD is expected to launch new complete AI systems, similar to several of the server-rack-sized products Nvidia produces. Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company had made 25 strategic investments in the past year that were related to the company's AI plans. Last week, AMD hired the team from chip startup Untether AI. On Wednesday, AMD said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO. AMD's software called ROCm has struggled to gain traction against Nvidia's CUDA, which is seen by some industry insiders as a key part of protecting the company's dominance. When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.
[15]
AMD turns to AI startups to inform chip, software design
SAN JOSE -- Advanced Micro Devices has forged close ties to a batch of artificial intelligence startups as part of the company's effort to bolster its software and forge superior chip designs. As AI companies seek alternatives to Nvidia's chips, AMD has begun to expand its plans to build a viable competing line of hardware, acquiring companies such as server maker ZT Systems in its quest to achieve that goal. But to build a successful line of chips also requires a powerful set of software to efficiently run the programs built by AI developers. AMD has acquired several small software companies in recent weeks in a bid to boost its talent, and it has been working to beef up its set of software, broadly known as ROCm. "This will be a very thoughtful, deliberate, multi-generational journey for us," said Vamsi Boppana, senior vice president of AI at AMD. AMD has committed to improve its ROCm and other software, which is a boon to customers such as AI enterprise startup Cohere, as it results in speedy changes and the addition of new features. Cohere is focused on building AI models that are tailored for large businesses versus the foundational AI models that companies like OpenAI and others target. AMD has made important strides in improving its software, Cohere CEO Aidan Gomez said in an interview with Reuters. Changing Cohere's software to run on AMD chips was a process that previously took weeks and now happens in only "days," Gomez said. Gomez declined to disclose exactly how much of Cohere's software relies on AMD chips but called it a "meaningful segment of our compute base" around the world. OpenAI has had significant influence on the design of the forthcoming MI450 series of AI chips, said Forrest Norrod, an executive vice president at AMD. AMD's MI400 series of chips will be the basis for a new server called "Helios" that the company plans to release next year. 
Nvidia too has engineered whole servers in part because AI computations require hundreds or thousands of chips strung together. OpenAI's Sam Altman appeared on stage at AMD's Thursday event in San Jose, and discussed the partnership between the two companies in broad terms. Norrod said that OpenAI's requests had a big influence on how AMD designed the MI450 series memory architecture and how the hardware can scale up to thousands of chips necessary to build and run AI applications. The ChatGPT creator also influenced what kinds of mathematical operations the chips are optimized for. "(OpenAI) has given us a lot of feedback that, I think, heavily informed our design," Norrod said.
[16]
AMD gains on Nvidia? Lisa Su reveals new chips in heated AI inference race By Investing.com
Investing.com -- Advanced Micro Devices Inc (NASDAQ:AMD) made an aggressive bid for dominance in AI inference at its Advancing AI event Thursday, unveiling new chips that directly challenge NVIDIA Corporation's (NASDAQ:NVDA) supremacy in the data center GPU market. AMD claims its latest Instinct MI355X accelerators surpass Nvidia's most advanced Blackwell GPUs in inference performance while offering a significant cost advantage, a critical selling point as hyperscalers look to scale generative AI services affordably. The MI355X, which has just begun volume shipments, delivers a 35-fold generational leap in inference performance and, according to AMD, up to 40% more tokens-per-dollar compared to Nvidia's flagship chips. That performance boost, coupled with lower power consumption, is designed to help AMD undercut Nvidia's offerings in total cost of ownership at a time when major AI customers are re-evaluating procurement strategies. "What has really changed is the demand for inference has grown significantly," AMD CEO Lisa Su said at the event in San Jose. "It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress." AMD's argument hinges not just on silicon performance, but on architecture and economics. By pairing its GPUs with its own CPUs and networking chips inside open "rack-scale" systems, branded Helios, AMD is building full-stack solutions to rival Nvidia's proprietary end-to-end ecosystem. These systems, launching next year with the MI400 series, were designed to enable hyperscale inference clusters while reducing energy and infrastructure costs. Su highlighted how companies like OpenAI, Meta Platforms Inc (NASDAQ:META), and Microsoft Corporation (NASDAQ:MSFT) are now running inference workloads on AMD chips, with OpenAI CEO Sam Altman confirming a close partnership on infrastructure innovation. "It's gonna be an amazing thing," Altman said during the event. 
"When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy." Oracle Corporation (NYSE:ORCL) Cloud Infrastructure intends to offer massive clusters of AMD chips, with plans to deploy up to 131,072 MI355X GPUs, positioning AMD as a scalable alternative to Nvidia's tightly integrated, and often more expensive, solutions. AMD officials emphasized the cost benefits, asserting that customers could achieve double-digit percent savings on power and capital expenditures when compared with Nvidia's GPUs. Despite the positive news, AMD shares were down roughly 2% ahead of market close. Wall Street remains cautious, but AMD's moves suggest it is committed to challenging Nvidia's leadership not only with performance parity, but also with a differentiated value and systems strategy. While Nvidia still commands more than 90% of the data center AI chip market, AMD's targeted push into inference, where workloads demand high efficiency and lower costs, marks a strategic front in the battle for AI dominance. With generative AI models driving a surge in inference demand across enterprises, AMD is betting that performance per dollar will matter more than ever.
[17]
AMD unveils new AI accelerators and rack-scale solutions By Investing.com
SANTA CLARA - Advanced Micro Devices, Inc. (NASDAQ:AMD), a $193.56 billion market cap semiconductor giant with 21.71% revenue growth in the last twelve months, introduced its new Instinct MI350 Series accelerators and previewed its next-generation "Helios" AI rack design at its Advancing AI event on Thursday. The company announced that the MI350 Series GPUs deliver a 4x generation-on-generation AI compute increase and a 35x generational leap in inferencing capabilities. The MI355X variant reportedly generates up to 40% more tokens-per-dollar compared to competing solutions. AMD also demonstrated its open-standards rack-scale AI infrastructure, which is already being deployed by Oracle Cloud Infrastructure. The company said this infrastructure, featuring MI350 Series accelerators, 5th Gen EPYC processors, and Pensando Pollara NICs, will be broadly available in the second half of 2025. Looking ahead, AMD previewed its next-generation AI rack called Helios, which will incorporate MI400 Series GPUs expected to deliver up to 10x more performance for inference on Mixture of Experts models compared to the previous generation. The company announced that seven of the 10 largest model builders and AI companies are running production workloads on Instinct accelerators, including Meta, OpenAI, Microsoft, and xAI. AMD also released ROCm 7, the latest version of its open-source AI software stack, and announced the broad availability of the AMD Developer Cloud for the global developer community.
In terms of energy efficiency, AMD reported that the Instinct MI350 Series exceeded the company's five-year goal to improve AI training and high-performance computing node efficiency by 30x, ultimately delivering a 38x improvement. The company set a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year. This information is based on a press release statement from AMD. In other recent news, Advanced Micro Devices (AMD) has unveiled a new lineup of AI chips, the MI350 and MI400 series, aimed at challenging Nvidia's dominance in the AI chip market. This development aligns with AMD's strategy to capture a larger share of the expanding AI sector. The company has also completed its acquisition of ZT Systems, positioning itself to offer comprehensive AI systems similar to Nvidia's offerings. KeyBanc has maintained its Sector Weight rating on AMD stock, adjusting earnings estimates due to anticipated charges, including a significant write-off of the MI308. Cantor Fitzgerald has raised its price target for AMD to $140, citing optimism around the company's AI prospects and upcoming product releases. Similarly, Citi has increased its price target to $120, maintaining a Neutral rating, and anticipates AMD to highlight new customer partnerships at its upcoming AI event. Wells Fargo has reiterated an Overweight rating with a $120 price target, noting AMD's strategic partnership with Sanmina and the focus on rack-scale AI solutions.
These recent developments reflect AMD's efforts to strengthen its position in the AI market and its ongoing strategic initiatives.
[18]
AMD turns to AI startups to inform chip, software design
SAN JOSE (Reuters) -Advanced Micro Devices has forged close ties to a batch of artificial intelligence startups as part of the company's effort to bolster its software and forge superior chip designs. As AI companies seek alternatives to Nvidia's chips, AMD has begun to expand its plans to build a viable competing line of hardware, acquiring companies such as server maker ZT Systems in its quest to achieve that goal. But to build a successful line of chips also requires a powerful set of software to efficiently run the programs built by AI developers. AMD has acquired several small software companies in recent weeks in a bid to boost its talent, and it has been working to beef up its set of software, broadly known as ROCm. "This will be a very thoughtful, deliberate, multi-generational journey for us," said Vamsi Boppana, senior vice president of AI at AMD. AMD has committed to improve its ROCm and other software, which is a boon to customers such as AI enterprise startup Cohere, as it results in speedy changes and the addition of new features. Cohere is focused on building AI models that are tailored for large businesses versus the foundational AI models that companies like OpenAI and others target. AMD has made important strides in improving its software, Cohere CEO Aidan Gomez said in an interview with Reuters. Changing Cohere's software to run on AMD chips was a process that previously took weeks and now happens in only "days," Gomez said. Gomez declined to disclose exactly how much of Cohere's software relies on AMD chips but called it a "meaningful segment of our compute base" around the world. OPENAI INFLUENCE OpenAI has had significant influence on the design of the forthcoming MI450 series of AI chips, said Forrest Norrod, an executive vice president at AMD. AMD's MI400 series of chips will be the basis for a new server called "Helios" that the company plans to release next year. 
Nvidia too has engineered whole servers in part because AI computations require hundreds or thousands of chips strung together. OpenAI's Sam Altman appeared on stage at AMD's Thursday event in San Jose, and discussed the partnership between the two companies in broad terms. Norrod said that OpenAI's requests had a big influence on how AMD designed the MI450 series memory architecture and how the hardware can scale up to thousands of chips necessary to build and run AI applications. The ChatGPT creator also influenced what kinds of mathematical operations the chips are optimized for. "(OpenAI) has given us a lot of feedback that, I think, heavily informed our design," Norrod said. (Reporting by Max A. Cherney in San Jose; Editing by Shri Navaratnam)
AMD reveals its new Instinct MI350 and MI400 series AI chips, along with a comprehensive AI roadmap spanning GPUs, networking, software, and rack architectures, in a bid to compete with Nvidia in the rapidly growing AI chip market.
Advanced Micro Devices (AMD) has made a significant leap in the artificial intelligence (AI) chip market, unveiling its next-generation AI chips and a comprehensive roadmap that directly challenges Nvidia's dominance. At its "Advancing AI" event, AMD CEO Lisa Su presented the company's ambitious plans to compete in the rapidly growing AI infrastructure sector [1][2].
AMD introduced its Instinct MI350 series GPUs, claiming a remarkable 4x improvement in AI performance compared to its previous generation [5]. The company also teased the upcoming MI400 series, scheduled for release next year, which will feature up to 432 GB of HBM4 memory and impressive bandwidth capabilities [4].
According to AMD's internal testing, these new chips not only match but potentially surpass Nvidia's Blackwell B200 performance in certain benchmarks. Notably, AMD emphasized a 35x generational improvement in inferencing and claimed to offer 40% more tokens per dollar compared to Nvidia's B200 [5].
AMD's strategy extends beyond just chip manufacturing. The company has made significant strides in several key areas:
Software Development: AMD launched version 7 of its open-source ROCm software stack, crucial for AI developers to leverage GPU capabilities [4].
Networking Solutions: The company introduced the Pensando Pollara 400 AI NIC, touted as the industry's first AI-powered network card, and announced the "Vulcano" networking chip for large AI clusters [4].
Rack Architecture: AMD unveiled the Helios rack architecture, set to debut in 2026, which will integrate next-generation EPYC CPUs, Instinct MI400 GPUs, and Vulcano networking chips [4][5].
AMD has been forging close ties with AI startups and major tech companies to inform its chip and software design. The company reported that seven out of ten of the largest model builders and AI companies, including Meta, OpenAI, Microsoft, and xAI, are using AMD's Instinct accelerators [3][5].
OpenAI, in particular, has had a significant influence on the design of the forthcoming MI450 series of AI chips. Sam Altman, OpenAI's CEO, appeared at AMD's event to discuss the partnership [1].
AMD is also focusing on energy efficiency, saying it has exceeded its five-year goal of a 30x improvement in AI training and high-performance computing node efficiency, ultimately delivering a 38x gain. The company has set a bold new goal to increase rack-scale energy efficiency by 20x compared to 2024 levels by 2030 [5].
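For a sense of how aggressive the new target is, a 20x gain over the six years from 2024 to 2030 implies a steep annual compounding rate. A back-of-envelope sketch, assuming equal year-over-year gains:

```python
# Annualized improvement implied by a 20x rack-scale efficiency
# gain over the six years from 2024 to 2030.
target_gain, years = 20.0, 6
annual_factor = target_gain ** (1 / years)
print(f"~{(annual_factor - 1) * 100:.0f}% efficiency gain per year")  # → ~65%
```

In other words, rack-scale efficiency would need to improve by roughly two-thirds every year to hit the 2030 goal.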
While AMD's announcements are significant, the company still faces an uphill battle against Nvidia's established market dominance. Nvidia currently holds over 90% of the AI chip market, according to analyst estimates 3.
However, AMD's comprehensive approach, spanning chips, software, and systems, demonstrates its commitment to becoming a serious competitor in the AI infrastructure space. The company expects the total market for AI chips to exceed $500 billion by 2028, presenting a substantial opportunity for growth 3.
As the AI chip war heats up, the industry can expect continued innovation and competition, potentially leading to more efficient and powerful AI systems in the coming years.
Summarized by Navi
[2]
Google DeepMind has launched Weather Lab, an interactive website featuring AI weather models, including an experimental tropical cyclone model. The new AI system aims to improve cyclone predictions and is being evaluated by the US National Hurricane Center.
8 Sources
Technology
20 hrs ago
Meta's new AI app is facing criticism for its "Discover" feature, which publicly displays users' private conversations with the AI chatbot, often containing sensitive personal information.
6 Sources
Technology
20 hrs ago
A major Google Cloud Platform outage affected numerous AI services and popular platforms, highlighting the vulnerabilities of cloud-dependent systems and raising concerns about the resilience of digital infrastructure.
3 Sources
Technology
4 hrs ago
Harvard University and other libraries are releasing vast collections of public domain books and documents to AI researchers, providing a rich source of cultural and historical data for machine learning models.
6 Sources
Technology
20 hrs ago
Broadcom's strategic focus on custom AI chips and networking solutions has led to significant growth, with the company emerging as a key player in the AI semiconductor market. However, challenges such as market competition and potential demand fluctuations remain.
4 Sources
Technology
20 hrs ago