Curated by THEOUTPOST
On Sat, 5 Oct, 12:04 AM UTC
25 Sources
[1]
AMD Sees Next AI Chip in Mass Production Later This Year
In July, AMD raised its AI chip forecast to $4.5 billion
AMD said its vendors will begin to ship its AI chips by Q1 2025
The new chip will speed AI calculations

Advanced Micro Devices said on Thursday it plans to start mass production of a new version of its artificial intelligence (AI) chip called the MI325X in the fourth quarter of the year, as it seeks to bolster its presence in a market dominated by Nvidia. At an event in San Francisco, AMD CEO Lisa Su said the company plans to release its next-generation MI350 series chips in the second half of 2025. These chips include an increased amount of memory and will boast a new underlying architecture that AMD said will improve performance significantly over the prior MI300X and MI250X chips.

The announcements were broadly expected based on AMD disclosures earlier this year. They failed to cheer investors, who sent AMD shares down nearly five percent in afternoon trading. Some analysts attributed the fall to the absence of large new cloud-computing customers for the chips. Shares of rival Nvidia were up 1.5 percent while Intel fell 1.6 percent.

Demand for AI processors from major technology firms such as Microsoft and Meta Platforms has been far outpacing supply from Nvidia and AMD, allowing the semiconductor companies to sell as much as they can produce. That has driven a massive rally in chip stocks over the past two years, with AMD's shares up about 30 percent since a recent low in early August. "There are no new customers announced so far," said Summit Insights research analyst Kinngai Chan, adding that the stock had gained ahead of the event in anticipation of "something new."

Santa Clara, California-based AMD said vendors such as Super Micro Computer would begin to ship its MI325X AI chip to customers in the first quarter of 2025. The AMD design aims to compete with Nvidia's Blackwell architecture. The MI325X chip uses the same architecture as the already-available MI300X, which AMD launched last year.
The new chip includes a new type of memory that AMD said will speed AI calculations. AMD's next-generation AI chips are likely to put further pressure on Intel, which has struggled to deploy a coherent AI chip strategy. Intel has forecast AI chip sales of more than $500 million (roughly Rs. 4,203 crore) in 2024.

AMD's Su also said at the event that the company does not currently have plans to use contract chip manufacturers beyond Taiwan's TSMC for advanced manufacturing processes, which are used to produce speedy AI chips. "We would love to use more capacity outside of Taiwan. We are very aggressive in the use of TSMC's Arizona facility," Su said.

AMD also unveiled several networking chips that help speed the movement of data between chips and systems inside data centers. The company announced the availability of a new version of its server central processing unit (CPU) design. The family of chips formerly codenamed Turin includes a version designed to keep graphics processing units (GPUs) fed with data, which will speed AI processing. The flagship chip boasts nearly 200 processing cores and is priced at $14,813 (roughly Rs. 12.45 lakh). The whole line of processors uses the Zen 5 architecture, which offers speed gains of as much as 37 percent for advanced AI data crunching.

Beyond the data center chips, AMD announced three new PC chips aimed at laptops, based on the Zen 5 architecture. The new chips are tuned to run AI applications and will be capable of running Microsoft's Copilot+ software. In July, AMD raised its AI chip forecast to $4.5 billion (roughly Rs. 37,834 crore) for the year from its previous target of $4 billion (roughly Rs. 33,630 crore). Demand for its MI300X chips has surged because of the frenzy around building and deploying generative AI products. This year, analysts expect AMD to report data center revenue of $12.83 billion (roughly Rs. 1,07,871 crore), according to LSEG estimates.
Wall Street expects Nvidia to report data center revenue of $110.36 billion (roughly Rs. 9,27,877 crore). Data center revenue is a proxy for AI chips needed to build and run AI applications. Analysts' rising earnings expectations have kept AMD and Nvidia's valuations in check despite the share surge. Both companies trade at more than 33 times their 12-month forward earnings estimates, compared with the benchmark S&P 500's 22.3. © Thomson Reuters 2024
[2]
AMD launches AI chip to rival Nvidia's Blackwell
Lisa Su delivers the opening keynote speech at Computex 2024, in Taipei, Taiwan, on June 3, 2024. (I-Hwa Cheng / AFP via Getty Images)

AMD launched a new artificial-intelligence chip on Thursday that is taking direct aim at Nvidia's data center graphics processors, known as GPUs. The Instinct MI325X, as the chip is called, will start production before the end of 2024, AMD said Thursday during an event announcing the new product. If AMD's AI chips are seen by developers and cloud giants as a close substitute for Nvidia's products, it could put pricing pressure on Nvidia, which has enjoyed roughly 75% gross margins while its GPUs have been in high demand over the past year.

Advanced generative AI such as OpenAI's ChatGPT requires massive data centers full of GPUs in order to do the necessary processing, which has created demand for more companies to provide AI chips. In the past few years, Nvidia has dominated the majority of the data center GPU market, but AMD is historically in second place. Now, AMD is aiming to take share from its Silicon Valley rival or at least to capture a big chunk of the market, which it says will be worth $500 billion by 2028. "AI demand has actually continued to take off and actually exceed expectations. It's clear that the rate of investment is continuing to grow everywhere," AMD CEO Lisa Su said at the event.

AMD didn't reveal new major cloud or internet customers for its Instinct GPUs at the event, but the company has previously disclosed that both Meta and Microsoft buy its AI GPUs and that OpenAI uses them for some applications. The company also did not disclose pricing for the Instinct MI325X, which is typically sold as part of a complete server. With the launch of the MI325X, AMD is accelerating its product schedule to an annual cadence of new chips to better compete with Nvidia and take advantage of the boom for AI chips. The new AI chip is the successor to the MI300X, which started shipping late last year.
AMD's 2025 chip will be called MI350, and its 2026 chip will be called MI400, the company said. The MI325X's rollout will pit it against Nvidia's upcoming Blackwell chips, which Nvidia has said will start shipping in significant quantities early next year. A successful launch for AMD's newest data center GPU could draw interest from investors who are looking for additional companies in line to benefit from the AI boom. AMD is only up 20% so far in 2024 while Nvidia's stock is up over 175%. Most industry estimates say Nvidia has over 90% of the market for data center AI chips. AMD stock fell 3% during trading on Thursday.

AMD's biggest obstacle in taking market share is that its rival's chips use their own programming language, CUDA, which has become standard among AI developers. That essentially locks developers into Nvidia's ecosystem. In response, AMD this week said that it has been improving its competing software, called ROCm, so that AI developers can more easily switch more of their AI models over to AMD's chips, which it calls accelerators.

AMD has framed its AI accelerators as more competitive for use cases where AI models are creating content or making predictions rather than when an AI model is processing terabytes of data to improve. That's partially due to the advanced memory AMD is using on its chip, it said, which allows it to serve Meta's Llama AI model faster than some Nvidia chips. "What you see is that MI325 platform delivers up to 40% more inference performance than the H200 on Llama 3.1," said Su, referring to Meta's large-language AI model.

While AI accelerators and GPUs have become the most intensely watched part of the semiconductor industry, AMD's core business has been central processors, or CPUs, that lie at the heart of nearly every server in the world. AMD's data center sales during the June quarter more than doubled in the past year to $2.8 billion, with AI chips accounting for only about $1 billion, the company said in July.
AMD takes about 34% of total dollars spent on data center CPUs, the company said. That's still less than Intel, which remains the dominant player in the market with its Xeon line of chips. AMD is aiming to change that with a new line of CPUs, called EPYC 5th Gen, that it also announced on Thursday. Those chips come in a number of different configurations, ranging from a low-cost, low-power 8-core chip that costs $527 to 192-core, 500-watt processors intended for supercomputers that cost $14,813 per chip. The new CPUs are particularly good for feeding data into AI workloads, AMD said. Nearly all GPUs require a CPU on the same system in order to boot up the computer. "Today's AI is really about CPU capability, and you see that in data analytics and a lot of those types of applications," Su said.
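For a rough sense of how that pricing scales, the two endpoints of the EPYC lineup quoted above work out to broadly similar prices per core. The sketch below is a back-of-the-envelope derivation from the list prices in the text; "price per core" is our own illustrative metric, not a figure AMD publishes.

```python
# Back-of-the-envelope price-per-core for the two EPYC 5th Gen
# endpoints quoted in the article. List prices and core counts come
# from the text; the per-core figures are derived for illustration.

entry_price, entry_cores = 527, 8             # low-cost, low-power entry part
flagship_price, flagship_cores = 14_813, 192  # 500-watt supercomputer part

entry_per_core = entry_price / entry_cores            # roughly $66 per core
flagship_per_core = flagship_price / flagship_cores   # roughly $77 per core

# The flagship carries only a modest per-core premium despite a 24x
# jump in core count.
premium = flagship_per_core / entry_per_core
print(round(entry_per_core, 2), round(flagship_per_core, 2), round(premium, 2))
```

Under these list prices, per-core cost rises less than 20% across the whole range, which is consistent with the article's point that the lineup spans cloud and enterprise tiers rather than segmenting sharply on per-core price.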
[3]
AMD unveils powerful new AI chip to challenge Nvidia
AMD CEO Lisa Su on the MI325X: "This is the beginning, not the end of the AI race." On Thursday, AMD announced its new MI325X AI accelerator chip, which is set to roll out to data center customers in the fourth quarter of this year. At an event hosted in San Francisco, the company claimed the new chip offers "industry-leading" performance compared to Nvidia's current H200 GPUs, which are widely used in data centers to power AI applications such as ChatGPT. With its new chip, AMD hopes to narrow the performance gap with Nvidia in the AI processor market. The Santa Clara-based company also revealed plans for its next-generation MI350 chip, which is positioned as a head-to-head competitor of Nvidia's new Blackwell system, with an expected shipping date in the second half of 2025. In an interview with the Financial Times, AMD CEO Lisa Su expressed her ambition for AMD to become the "end-to-end" AI leader over the next decade. "This is the beginning, not the end of the AI race," she told the publication. According to AMD's website, the announced MI325X accelerator contains 153 billion transistors and is built on the CDNA3 GPU architecture using TSMC's 5 nm and 6 nm FinFET lithography processes. The chip includes 19,456 stream processors and 1,216 matrix cores spread across 304 compute units. With a peak engine clock of 2100 MHz, the MI325X delivers up to 2.61 PFLOPs of peak eight-bit precision (FP8) performance. For half-precision (FP16) operations, it reaches 1.3 PFLOPs.
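The quoted figures are internally consistent and can be cross-checked with simple arithmetic. The per-compute-unit breakdown below is our own derived illustration (not a line from AMD's spec sheet), and since eight-bit operations run at twice the half-precision rate, the FP8 and FP16 peaks should sit close to a 2:1 ratio.

```python
# Sanity-checking the MI325X figures quoted above. All inputs are the
# numbers cited in the article; the per-CU breakdown and the FP8:FP16
# ratio are derived for illustration.

compute_units = 304
stream_processors = 19_456
matrix_cores = 1_216

sp_per_cu = stream_processors // compute_units  # stream processors per compute unit
mc_per_cu = matrix_cores // compute_units       # matrix cores per compute unit

peak_fp8_pflops = 2.61   # quoted peak eight-bit (FP8) throughput
peak_fp16_pflops = 1.3   # quoted peak half-precision (FP16) throughput

# FP8 ops run at twice the FP16 rate, so the two quoted peaks should
# differ by roughly a factor of two.
fp8_to_fp16 = peak_fp8_pflops / peak_fp16_pflops

print(sp_per_cu, mc_per_cu, round(fp8_to_fp16, 2))
```

Both divisions come out whole (64 stream processors and 4 matrix cores per compute unit), and the throughput ratio lands at almost exactly 2, matching the expected FP8-to-FP16 relationship.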
[4]
AMD launches AI chip to rival Nvidia's Blackwell
Lisa Su, chairwoman and CEO of Advanced Micro Devices (AMD), delivers the opening keynote speech at Computex 2024, Taiwan's premier tech expo, in Taipei on June 3, 2024.

AMD launched a new artificial-intelligence chip on Thursday that is taking direct aim at Nvidia's data center graphics processors, known as GPUs. The Instinct MI325X, as the chip is called, will start production before the end of 2024, AMD said Thursday during an event announcing the new product. If AMD's AI chips are seen by developers and cloud giants as a close substitute for Nvidia's products, it could put pricing pressure on Nvidia, which has enjoyed roughly 75% gross margins while its GPUs have been in high demand over the past year.

Advanced generative AI such as OpenAI's ChatGPT requires massive data centers full of GPUs in order to do the necessary processing, which has created demand for more companies to provide AI chips. In the past few years, Nvidia has dominated the majority of the data center GPU market, but AMD is historically in second place. Now, AMD is aiming to take share from its Silicon Valley rival or at least to capture a big chunk of the market, which it says will be worth $500 billion by 2028. "AI demand has actually continued to take off and actually exceed expectations. It's clear that the rate of investment is continuing to grow everywhere," AMD CEO Lisa Su said at the event.

AMD didn't reveal new major cloud or internet customers for its Instinct GPUs at the event, but the company has previously disclosed that both Meta and Microsoft buy its AI GPUs and that OpenAI uses them for some applications. The company also did not disclose pricing for the Instinct MI325X, which is typically sold as part of a complete server. With the launch of the MI325X, AMD is accelerating its product schedule to an annual cadence of new chips to better compete with Nvidia and take advantage of the boom for AI chips.
The new AI chip is the successor to the MI300X, which started shipping late last year. AMD's 2025 chip will be called MI350, and its 2026 chip will be called MI400, the company said. The MI325X's rollout will pit it against Nvidia's upcoming Blackwell chips, which Nvidia has said will start shipping in significant quantities early next year. A successful launch for AMD's newest data center GPU could draw interest from investors who are looking for additional companies in line to benefit from the AI boom. AMD is only up 20% so far in 2024 while Nvidia's stock is up over 175%. Most industry estimates say Nvidia has over 90% of the market for data center AI chips. AMD stock fell 3% during trading on Thursday.

AMD's biggest obstacle in taking market share is that its rival's chips use their own programming language, CUDA, which has become standard among AI developers. That essentially locks developers into Nvidia's ecosystem. In response, AMD this week said that it has been improving its competing software, called ROCm, so that AI developers can more easily switch more of their AI models over to AMD's chips, which it calls accelerators.

AMD has framed its AI accelerators as more competitive for use cases where AI models are creating content or making predictions rather than when an AI model is processing terabytes of data to improve. That's partially due to the advanced memory AMD is using on its chip, it said, which allows it to serve Meta's Llama AI model faster than some Nvidia chips. "What you see is that MI325 platform delivers up to 40% more inference performance than the H200 on Llama 3.1, and a lot of people also are doing training," Su said.

While AI accelerators and GPUs have become the most intensely watched part of the semiconductor industry, AMD's core business has been central processors, or CPUs, that lie at the heart of nearly every server in the world.
AMD's data center sales during the June quarter more than doubled in the past year to $2.8 billion, with AI chips accounting for only about $1 billion, the company said in July. AMD takes about 34% of total dollars spent on data center CPUs, the company said. That's still less than Intel, which remains the dominant player in the market with its Xeon line of chips.

AMD is aiming to change that with a new line of CPUs, called EPYC 5th Gen, that it also announced on Thursday. Those chips come in a number of different configurations, ranging from a low-cost, low-power 8-core chip that costs $527 to 192-core, 500-watt processors intended for supercomputers that cost $14,813 per chip. The new CPUs are particularly good for feeding data into AI workloads, AMD said. Nearly all GPUs require a CPU on the same system in order to boot up the computer. "Today's AI is really about CPU capability, and you see that in data analytics and a lot of those types of applications," Su said.
[5]
AMD rolls out new AI chip to rival Nvidia
AMD's chief executive believes the chipmaker is closing the performance gap with Nvidia's market-leading artificial intelligence processors, as it unveiled new products targeting a market worth hundreds of billions of dollars. On Thursday the Silicon Valley-based group announced that its MI325X chip will roll out to customers in the fourth quarter of this year, saying it offers "industry-leading" performance compared to Nvidia's current generation of H200 AI chips. AMD's next-generation MI350 chip, which aims to compete with Nvidia's new Blackwell system, is on track to ship in the second half of 2025. The US chipmaker has returned from the verge of bankruptcy a decade ago, when Lisa Su took over as chief executive, to emerge as the leading challenger to Nvidia's grip on the infrastructure powering generative AI. Su said her aim is for the company to become the "end-to-end AI leader" over the next 10 years. "You have to be extremely ambitious," Su told the Financial Times. "This is the beginning, not the end of the AI race." It comes as Nvidia's customers are expected to start deploying Blackwell in the current quarter, with Microsoft this week saying it had become the first cloud provider to offer the latest GB200 chips to its customers. While the so-called 'hyperscalers' -- Microsoft, Google and Amazon -- are also building their own in-house AI chips, AMD has become Nvidia's closest competitor in the race to offer off-the-shelf AI chips. It remains a distant second, however. AMD's projected $4.5bn in AI chip sales for 2024 is small compared to the $26.3bn in AI data centre chip sales that Nvidia made in the quarter to the end of July alone. But Su is confident that demand will only grow over the coming years. The company has predicted the total addressable market for AI chips will reach $400bn by 2027. "When we first started, that was viewed as a really big number," Su said. 
"And I think people are moving towards our big number because of the tremendous demand there is for AI infrastructure." Chips are just one part of the infrastructure needed to build cutting-edge AI systems. AMD on Thursday also announced new networking technology and upgrades to its ROCm software toolkit, all of it aimed at offering AI infrastructure quickly and at scale. "One of the things that we are really putting together is the end-to-end infrastructure for the data centre," said Su. "People want a large cluster [of chips in a server] so you can train the largest language models." Su, who has a PhD in electrical engineering from Massachusetts Institute of Technology, worked at Texas Instruments, IBM and Freescale Semiconductor before joining AMD in 2012 as a senior vice-president. When she took over as chief executive in 2014, AMD's shares were languishing at around $4, with some analysts predicting it would be bankrupt in a few years as it struggled to compete with Intel. Today, AMD has cornered a strong share of the server chip market and leapfrogged Intel on AI as it diversifies from its traditional PC business. AMD's shares closed at $171 on Wednesday ahead of the announcement, giving it a market capitalisation of around $275bn -- almost triple that of Intel. Su sees AI as the primary driver of AMD's next era of growth, and is seeking to land the same customers as Nvidia. "People are really open to trying different architectures and seeing what fits their workload the best," Su said. So far, both Microsoft and Meta have adopted AMD's current generation of MI300 AI graphics processing units (GPUs). Amazon, which is already a customer for AMD's server CPUs, is likely to follow, Su said: "It's a 'point in time' conversation." AMD's approach parallels what Nvidia is doing with Blackwell, where it aims to sell not individual chips, but whole server racks made up of multiple chips, combined with Nvidia's own networking equipment. 
To catch up, AMD has pursued an aggressive investment and acquisition strategy, including the recent announcement of its $4.9bn acquisition of ZT Systems, which builds servers for the small group of AI hyperscalers. In terms of potential regulatory reviews of the deal, Su said that "our current expectation is US-EU [checks], and there are a few other jurisdictions as well, but we don't pass thresholds for China at the moment".
[6]
AMD Launches New AI Chips To Meet 'Insatiable' AI Demand
Advanced Micro Devices (AMD) unveiled its next generation of artificial intelligence (AI) chips at its Advancing AI event on Thursday. The chipmaker announced its fifth-generation EPYC server processor, which CEO Lisa Su said delivers dramatically more power than its predecessor, and new Ryzen AI PRO processors for enterprise AI. The company also showed off its latest Instinct MI300X series GPU, the MI325X, with production set to begin in the fourth quarter and availability for AMD's partners in the first quarter of 2025.

The new products come as demand for AI chips grows, and could help AMD expand its market share. AMD said its market share for its EPYC server processors hit a new high of 34% at the end of the second quarter, and could continue to climb with its next generation of products. Analysts suggested ahead of the event that it could be a "catch-up" catalyst for AMD to expand its slice of the market, after a strong start to the year with data center sales hitting a record high in the second quarter.

Su also highlighted the chipmaker's partnerships with Oracle (ORCL), Alphabet's (GOOGL) Google, Meta (META), and Microsoft (MSFT) at the event. Kevin Salvadori, Meta's VP for infrastructure and engineering, said that the company's work on generative AI has made use of more than 1.5 million of AMD's EPYC CPUs and utilized Instinct GPUs for projects such as developing its Llama large language model. Oracle SVP Karan Batta added that AMD's Instinct GPUs have delivered strong performance for the company, and Google Cloud VP Amin Vahdat called the demand for AI computing power "insatiable."

The event did little to benefit AMD's stock price, however. Shares of the company were down nearly 5% at $163.11 after the event. Still, they've gained more than 10% since the start of the year.
[7]
AMD Says New AI Chips Will Be Out Soon and Can Outperform Nvidia
(Bloomberg) -- Advanced Micro Devices Inc., looking to catch up with Nvidia Corp. in the lucrative market for artificial intelligence processors, said that its latest chips are rolling out to data centers and will exceed some of the capabilities of its rival. Computer systems based on AMD's MI325X processors will be available soon and have an edge over machines running Nvidia's H100, Chief Executive Officer Lisa Su said at a company event in San Francisco Thursday. The MI325X's use of a new type of memory chip will give it better performance at running AI software -- a process known as inference -- she said. The Santa Clara, California-based company is trying to crack Nvidia's dominance in so-called AI accelerators -- chips that have become essential to the development of artificial intelligence systems. Like Nvidia, AMD has committed to bringing out new accelerators every year, stepping up its pace of innovation. Still, AMD has a long way to go to match Nvidia. And investors were underwhelmed by Thursday's presentation. The stock fell more than 2% to $166.08 as of 12:43 p.m. in New York. Under Su, who just marked her 10th anniversary in the top job at AMD, the company has eclipsed its longtime nemesis Intel Corp. in market valuation. But both companies were caught off guard by how ferociously the industry embraced AI accelerators. Of the two, AMD has responded far more quickly and established itself as the closest rival to Nvidia. AMD, which is expected to report quarterly results in the coming weeks, has set a target of $4.5 billion of revenue from the new type of chips for this year, a rapid increase. Su has said the overall market for such chips will hit $400 billion in 2027. On Thursday, she said that the company expects that number to reach $500 billion in 2028. At the event, Su also said the company is releasing a new line of server processors based on its "Turin" technology, making a fresh push into a market once dominated by Intel. 
Computers are going on sale with AMD's fifth-generation EPYC central processing units, or CPUs, she said. The chips have as many as 192 processor cores and can outperform the latest Intel products. The company said that it now has 34% of the market for this category of chips when measured by revenue. Intel still dominates the segment, though its share is down from a onetime 99%.
[8]
AMD is going after Nvidia with new AI chips
Advanced Micro Devices (AMD) announced new artificial intelligence chips to boost its rivalry with other chipmakers, including Nvidia (NVDA). The chipmaker launched its new Instinct MI325X accelerators and Ryzen AI PRO 300 series processors along with other leading-edge computing chips on Thursday at its Advancing AI conference in San Francisco. "The data center and AI represent significant growth opportunities for AMD, and we are building strong momentum for our EPYC and AMD Instinct processors across a growing set of customers," AMD chief executive Lisa Su said in a statement. With its new chips, AMD is "delivering leadership compute to power our customers' most important and demanding workloads," she added. By 2028, AMD sees the market for data center AI accelerators reaching $500 billion, Su said, adding that the chipmaker is "committed to delivering open innovation at scale through our expanded silicon, software, network and cluster-level solutions." At Computex in June, Su announced AMD's next-generation MI325X AI accelerator with improved "performance and memory capabilities for the most demanding AI" processing, which she said would be available in the fourth quarter. She also unveiled the Ryzen AI PRO 300 series which will be used for AI laptops, including Microsoft's (MSFT) Copilot+ PCs. Like rival Nvidia, Su said in June that AMD has "expanded our roadmap so it's now on an annual cadence, that means a new product family every year." AMD expects to launch its next-generation Instinct MI350 series accelerators in the second half of 2025, the chipmaker said. Its Instinct MI400 Series accelerators are planned for 2026. The company's Instinct MI300X accelerators were launched in December, and are used by some of the world's leading AI models from OpenAI, Meta (META), and open-source models on Hugging Face.
[9]
AMD debuts new processors and accelerators for AI era By Investing.com
SAN FRANCISCO - Advanced Micro Devices, Inc. (NASDAQ:AMD) has introduced a suite of new computing solutions aimed at advancing artificial intelligence (AI) applications, including the 5th Gen AMD EPYC server CPUs, AMD Instinct MI325X accelerators, and AMD Ryzen AI PRO processors. The announcement was made on October 10, 2024, underscoring AMD's commitment to the AI computing era.

The new AMD EPYC 9005 Series processors, based on the Zen 5 architecture, feature up to 192 cores and are available today in platforms from leading OEMs and ODMs. AMD has also launched the AMD Instinct MI325X, which is designed to deliver high performance and memory capabilities for demanding AI workloads. Further, AMD revealed upcoming AMD Instinct MI350 series accelerators, expected in the second half of 2025, and AMD Instinct MI400 Series accelerators, planned for 2026.

AMD's ROCm open source AI software has also seen improvements, with the company reporting that over one million AI models now run seamlessly on AMD Instinct, a threefold increase since the MI300X series launch. The company's expanded networking portfolio includes the AMD Pensando Salina DPU and AMD Pensando Pollara 400 NIC, aimed at enhancing AI infrastructure performance and efficiency.

AMD's Chair and CEO, Dr. Lisa Su, highlighted the growth opportunities for AMD in the data center and AI market, which is expected to reach $500 billion by 2028. She emphasized AMD's momentum with EPYC and AMD Instinct processors across a growing customer base. In terms of enterprise AI, AMD has unveiled the Ryzen AI PRO 300 Series processors, which power the new Microsoft (NASDAQ:MSFT) Copilot+ laptops designed for enterprise use. These processors offer advanced security features and manageability for business users. The press release also noted collaborations with industry leaders such as Google (NASDAQ:GOOGL) Cloud, Microsoft, and Meta (NASDAQ:META), who are utilizing AMD's AI solutions.
For instance, Google will offer EPYC 9005 Series-based VMs in early 2025, and Microsoft's CEO Satya Nadella discussed the use of MI300X accelerators in Microsoft Azure. AMD's advancements in AI were presented alongside partners like Databricks and Oracle (NYSE:ORCL) Cloud Infrastructure, showcasing the seamless integration and performance enhancements of AMD hardware and software in various applications. The information in this article is based on a press release statement from AMD. In other recent news, Advanced Micro Devices (AMD) has made significant strides in AI technology with the release of its 5th Gen AMD EPYC processors and the unveiling of its third-generation Ryzen AI PRO 300 Series mobile processors. These processors are designed to enhance performance and efficiency for various workloads, with the top-end 192-core EPYC 9005 Series processor delivering up to 2.7 times the performance compared to competing products. AMD also introduced a suite of products aimed at enhancing AI infrastructure performance, including the AMD Instinct MI325X accelerators, expected to start shipping in late 2024. In addition, AMD has announced a strategic collaboration with Oracle Cloud Infrastructure, with AMD's Instinct MI300X accelerators set to power OCI's new AI supercluster. Analysts from Cantor Fitzgerald and Goldman Sachs (NYSE:GS) maintained positive ratings on AMD, citing these developments. AMD's Q2 revenues surpassed Street consensus, reaching $5.835 billion, with its data center segment showing record revenue growth of 115% to $2.8 billion. However, despite AMD's strides in the AI chip market, it is not expected to significantly affect Nvidia's data center revenue, as the demand for AI chips currently surpasses supply. These recent developments underscore AMD's commitment to providing high-performance computing solutions for AI applications. AMD's recent announcements in AI computing solutions align well with its market position and financial performance. 
According to InvestingPro data, AMD boasts a substantial market capitalization of $265.33 billion, reflecting investor confidence in its growth potential. This is particularly relevant given the company's focus on the expanding AI market, which AMD's CEO projects to reach $500 billion by 2028. InvestingPro Tips highlight AMD as a "prominent player in the Semiconductors & Semiconductor Equipment industry," which is evident from its strategic product launches and partnerships with tech giants like Google, Microsoft, and Meta. The company's revenue of $23.28 billion in the last twelve months, with a 6.4% growth rate, underscores its strong market presence and ability to capitalize on emerging technologies like AI. Another InvestingPro Tip notes that AMD's "net income is expected to grow this year," which aligns with the company's push into high-value AI computing solutions. This growth expectation is further supported by AMD's gross profit margin of 51.42%, indicating efficient operations and potential for increased profitability as it expands its AI offerings. It's worth noting that AMD's stock has shown a strong performance, with a 56.88% return over the past year. This robust growth reflects investor optimism about AMD's strategic direction, particularly its focus on AI and data center markets. For readers interested in a more comprehensive analysis, InvestingPro offers 15 additional tips for AMD, providing deeper insights into the company's financial health and market position.
[10]
AMD likely to launch new AI chips at San Francisco data center event
SAN FRANCISCO, Oct 10 (Reuters) - Advanced Micro Devices (AMD.O) is expected to announce several new artificial intelligence processors and other chips at an event in San Francisco on Thursday, as it grows as a supplier of AI chips in a market dominated by Nvidia (NVDA.O). AMD will likely detail its MI325X chip and the next generation MI350 chip that it promised for this year and next year respectively when it unveiled them at the Computex trade show in Taiwan in June.

The MI350 series features increased computing horsepower and memory, according to the company's presentation in June. The AMD design aims to compete with Nvidia's Blackwell architecture. AMD is also likely to announce new server central processing units (CPUs) and PC chips that feature more AI computing horsepower. The current MI300X AI chip launched late last year and AMD has quickly ramped up production to meet demand.

In July, the company raised its AI chip forecast to $4.5 billion for the year from its previous target of $4 billion. Demand for its MI300X chips has surged because of the frenzy around building and deploying generative AI products. AMD's launch on Thursday is unlikely to impact Nvidia's data center revenue as the demand for such chips vastly exceeds the availability. This year, analysts expect AMD to report data center revenue of $12.83 billion, according to LSEG estimates. Wall Street expects Nvidia to report data center revenue of $110.36 billion. Data center revenue is a proxy for AI chips needed to build and run AI applications. (Reporting by Max Cherney in San Francisco; Editing by Sonali Paul)
[11]
AMD likely to launch new AI chips at San Francisco data center event
SAN FRANCISCO (Reuters) - Advanced Micro Devices is expected to announce several new artificial intelligence processors and other chips at an event in San Francisco on Thursday, as it grows as a supplier of AI chips in a market dominated by Nvidia. AMD will likely detail its MI325X chip and the next-generation MI350 chip that it promised for this year and next year respectively when it unveiled them at the Computex trade show in Taiwan in June. The MI350 series features increased computing horsepower and memory, according to the company's presentation in June. The AMD design aims to compete with Nvidia's Blackwell architecture. AMD is also likely to announce new server central processing units (CPUs) and PC chips that feature more AI computing horsepower. The current MI300X AI chip launched late last year and AMD has quickly ramped up production to meet demand. In July, the company raised its AI chip forecast to $4.5 billion for the year from its previous target of $4 billion. Demand for its MI300X chips has surged because of the frenzy around building and deploying generative AI products. AMD's launch on Thursday is unlikely to impact Nvidia's data center revenue as the demand for such chips vastly exceeds the availability. This year analysts expect AMD to report data center revenue of $12.83 billion, according to LSEG estimates. Wall Street expects Nvidia to report data center revenue of $110.36 billion. Data center revenue is a proxy for AI chips needed to build and run AI applications. (Max Cherney in San Francisco; Editing by Sonali Paul)
[12]
Lisa Su reveals AMD's next-gen AI hardware at Advancing AI 2024
At AMD's Advancing AI event, CEO Lisa Su took the stage to announce a series of innovations aimed at AI customers. From the latest 5th generation EPYC processors to next-gen Instinct accelerators, AMD is doubling down on high-performance hardware for AI workloads. These new technologies promise to boost AI processing power and streamline workloads for enterprises and cloud computing. Let's break down the key announcements from the Advancing AI event. Kicking off the event, Lisa Su introduced AMD's 5th generation EPYC portfolio, built around the all-new Zen 5 core. "We designed Zen 5 to be the best in server workloads," Su explained, highlighting its 17% increase in IPC over Zen 4. The new processor features up to 192 cores and 384 threads, pushing the limits of server performance. One of the standout points was the flexibility these chips offer. Su noted, "We thought about it from the architectural standpoint -- how do we build the industry's broadest portfolio of CPUs that covers both cloud and enterprise workloads?" This balance of performance and versatility is aimed at handling everything from AI head nodes to demanding enterprise software. The event also saw the introduction of AMD's new Turin chips, specifically optimized for different types of workloads. Su revealed two key versions: a 128-core version designed for scale-up enterprise applications, and a 192-core version aimed at scale-out cloud computing. Both are built for maximum performance per core, crucial for enterprise workloads where software is often licensed per core. "The 192-core version is really optimized for cloud," Su explained, emphasizing that these chips will give cloud providers the compute density they need. AMD also compared its new EPYC chips to the competition, showing that 5th Gen EPYC delivers up to 2.7 times more performance than the leading alternatives. Turning to AI acceleration, Su announced the AMD Instinct MI325X, the company's latest AI-focused GPU. 
"We lead the industry with 256 gigabytes of ultra-fast HBM3E memory and six terabytes per second of bandwidth," Su said. The MI325X is built to handle demanding AI tasks such as generative AI, boasting 20-40% better inference performance and latency improvements over previous models. In addition to memory and performance boosts, AMD designed the MI325X with ease of deployment in mind. "We kept a common infrastructure," Su mentioned, allowing for seamless integration with existing systems. This will make it easier for AI customers to adopt the technology without overhauling their platforms. The event also provided a glimpse into AMD's future with the MI350 series. Scheduled for launch in the second half of 2025, the MI350 introduces the new CDNA 4 architecture and offers a staggering 288 GB of HBM3E memory. According to Su, CDNA 4 will bring a "35 times generational increase in AI performance compared to CDNA 3." This new architecture is designed to handle larger AI models with greater efficiency, and its backward compatibility with previous Instinct models ensures a smooth transition for customers. AMD's commitment to optimizing AI performance extends beyond hardware, with Su announcing ROCm 6.2, the latest update to AMD's AI software stack. The new release delivers 2.4 times the performance for key AI inference workloads and 1.8 times better performance for AI training tasks. These improvements come from advancements in algorithms, graph optimizations, and improved compute libraries. "Our latest release focuses on maximizing performance across both proprietary and public models," Su explained, signaling AMD's efforts to remain competitive in the AI software space as well.
[13]
AMD Says New AI Chips Will Be Out Soon and Can Outperform Nvidia
Advanced Micro Devices Inc., looking to catch up with Nvidia Corp. in the lucrative market for artificial intelligence processors, said that its latest chips are rolling out to data centers and will exceed some of the capabilities of its rival. Computer systems based on AMD's MI325X processors will be available soon and have an edge over machines running Nvidia's H100, Chief Executive Officer Lisa Su said at a company event in San Francisco Thursday. The MI325X's use of a new type of memory chip will give it better performance at running AI software -- a process known as inference -- she said.
[14]
AMD May Unveil New AI Chips Today
Advanced Micro Devices is expected to announce several new artificial intelligence processors and other chips at an event in San Francisco on Thursday, as it grows as a supplier of AI chips in a market dominated by Nvidia. AMD will likely detail its MI325X chip and the next generation MI350 chip that it promised for this year and next year respectively when it unveiled them at the Computex trade show in Taiwan in June. The MI350 series features increased computing horsepower and memory, according to the company's presentation in June. The AMD design aims to compete with Nvidia's Blackwell architecture. AMD is also likely to announce new server central processing units (CPUs) and PC chips that feature more AI computing horsepower.
[15]
AMD To Ship "Half a Million" Instinct MI300X AI Accelerators In 2024
AMD looks to build a stronghold in the AI market moving ahead, as Team Red is now on track to ship 500,000 of its Instinct MI300X AI accelerators this year. Team Red is positioning itself to become much more competitive in the AI market than it was in recent quarters. AMD has implemented sweeping business changes, such as unifying its data center and consumer GPU architectures and scaling up its AI business by collaborating with the likes of Samsung, Microsoft, and Oracle. While the firm still trails NVIDIA significantly, AMD's trajectory is clearly bullish, as KeyBanc analyst John Vinh (via Benzinga) claims the firm is set to ship half a million units of its flagship AI accelerator this year alone. AMD's data center business hasn't been booming lately, given that the firm has been competing in a largely monopolized market, but Team Red is still mounting tough competition against the alternatives out there. Of all its product offerings, AMD's Instinct MI300X has been a standout option for clients in the industry, mainly because the accelerator packs immense capability and delivers much higher performance per dollar than comparable alternatives. Several companies, such as Lenovo, have praised the Instinct MI300X for its market contribution, which has ultimately translated into more business for AMD. Apart from this, the analyst also claims that the upcoming "Advancing AI" event is set to reshape AMD's AI business, since the firm is expected to unveil new architectures and showcase its plans for the future, which will likely intensify competition in the industry. AMD will unveil its mid-tier Instinct MI325X AI GPU, along with its new range of 5th Gen EPYC CPUs and much more, so it is an occasion that shouldn't be overlooked. 
With the Instinct MI300X AI accelerators bringing in business for AMD, it will be interesting to see how the firm's next-gen AI lineup manages to attract customers from the industry, given that AMD's CEO Lisa Su believes the AI supercycle has only just started and that there is still room for more players to capitalize on the momentum. While AMD still has a long way to go to catch up with its arch-rival NVIDIA, the way Team Red has approached the market shows it is growing sustainably, building a business framework positioned for success in the years ahead.
[16]
AMD Launches 5th Gen EPYC, Instinct MI325X & More at Advancing AI 2024
"The data center and AI represent significant growth opportunities for AMD, and we are building strong momentum for our EPYC and AMD Instinct processors across a growing set of customers," said AMD Chair and CEO Dr. Lisa Su. "With our new EPYC CPUs, AMD Instinct GPUs and Pensando DPUs we are delivering leadership compute to power our customers' most important and demanding workloads. Looking ahead, we see the data center AI accelerator market growing to $500 billion by 2028. We are committed to delivering open innovation at scale through our expanded silicon, software, network and cluster-level solutions." AMD announced a broad portfolio of data center solutions for AI, enterprise, cloud and mixed workloads: New AMD EPYC 9005 Series processors deliver record-breaking performance to enable optimized compute solutions for diverse data center needs. Built on the latest "Zen 5" architecture, the lineup offers up to 192 cores and will be available in a wide range of platforms from leading OEMs and ODMs starting today. AMD continues executing its annual cadence of AI accelerators with the launch of AMD Instinct MI325X, delivering leadership performance and memory capabilities for the most demanding AI workloads. AMD also shared new details on next-gen AMD Instinct MI350 series accelerators expected to launch in the second half of 2025, extending AMD Instinct leadership memory capacity and generative AI performance. AMD has made significant progress developing the AMD Instinct MI400 Series accelerators based on the AMD CDNA Next architecture, planned to be available in 2026. AMD has continuously improved its AMD ROCm software stack, doubling AMD Instinct MI300X accelerator inferencing and training performance across a wide range of the most popular AI models. Today, over one million models run seamlessly out of the box on AMD Instinct, triple the number available when MI300X launched, with day-zero support for the most widely used models. 
AMD also expanded its high performance networking portfolio to address evolving system networking requirements for AI infrastructure, maximizing CPU and GPU performance to deliver performance, scalability and efficiency across the entire system. The AMD Pensando Salina DPU delivers a high performance front-end network for AI systems, while the AMD Pensando Pollara 400, the first Ultra Ethernet Consortium ready NIC, reduces the complexity of performance tuning and helps improve time to production. AMD partners detailed how they leverage AMD data center solutions to drive leadership generative AI capabilities, deliver cloud infrastructure used by millions of people daily and power on-prem and hybrid data centers for leading enterprises: Since launching in December 2023, AMD Instinct MI300X accelerators have been deployed at scale by leading cloud, OEM and ODM partners and are serving millions of users daily on popular AI models, including OpenAI's ChatGPT, Meta Llama and over one million open source models on the Hugging Face platform. Google highlighted how AMD EPYC processors power a wide range of instances for AI, high performance, general purpose and confidential computing, including their AI Hypercomputer, a supercomputing architecture designed to maximize AI ROI. Google also announced EPYC 9005 Series-based VMs will be available in early 2025. Oracle Cloud Infrastructure shared how it leverages AMD EPYC CPUs, AMD Instinct accelerators and Pensando DPUs to deliver fast, energy efficient compute and networking infrastructure for customers like Uber, Red Bull Powertrains, PayPal and Fireworks AI. OCI announced the new E6 compute platform powered by EPYC 9005 processors. 
Databricks highlighted how its models and workflows run seamlessly on AMD Instinct and ROCm and disclosed that their testing shows the large memory capacity and compute capabilities of AMD Instinct MI300X GPUs help deliver an over 50% increase in performance on Llama and Databricks proprietary models. Microsoft CEO Satya Nadella highlighted Microsoft's longstanding collaboration and co-innovation with AMD across its product offerings and infrastructure, with MI300X delivering strong performance on Microsoft Azure and GPT workloads. Nadella and Su also discussed the companies' deep partnership on the AMD Instinct roadmap and how Microsoft is planning to leverage future generations of AMD Instinct accelerators including MI350 series and beyond to deliver leadership performance-per-dollar-per-watt for AI applications. Meta detailed how AMD EPYC CPUs and AMD Instinct accelerators power its compute infrastructure across AI deployments and services, with MI300X serving all live traffic on Llama 405B. Meta is also partnering with AMD to optimize AI performance from silicon, systems, and networking to software and applications. Leading OEMs Dell, HPE, Lenovo and Supermicro are expanding on their highly performant, energy efficient AMD EPYC processor-based lineups with new platforms designed to modernize data centers for the AI era.
[17]
AMD launches new AI mobile processors for businesses By Investing.com
SAN FRANCISCO - Advanced Micro Devices, Inc. (NASDAQ:AMD) unveiled its third-generation Ryzen AI PRO 300 Series mobile processors today, boasting significant AI performance improvements and extended battery life for business laptops. The processors, designed with AMD's Zen 5 architecture and built on a 4nm process, aim to enhance productivity for business users with features such as live captioning and language translation during conference calls. The Ryzen AI PRO 300 Series processors claim up to three times the AI performance of their predecessors, with the Ryzen AI 9 HX PRO 375 model offering up to 40% higher performance and up to 14% faster productivity performance than competing products from Intel (NASDAQ:INTC). These processors also feature AMD's XDNA 2 architecture, which powers the integrated Neural Processing Unit (NPU) to deliver over 50 NPU TOPS of AI processing power, surpassing Microsoft (NASDAQ:MSFT)'s Copilot+ AI PC requirements. AMD's announcement highlighted the processors' security and manageability features, part of AMD PRO Technologies, which are designed to streamline IT operations. New security features include Cloud Bare Metal Recovery, Supply Chain Security, and Watch Dog Timer, which leverage the integrated NPU for AI-based security workloads. The company's commercial portfolio expansion is expected to include over 100 Ryzen AI PRO PCs launching through 2025. The new processors are anticipated to hit the market later this year in laptops from OEM partners, integrating the latest Windows 11 features for enhanced productivity, efficiency, and security. Jack Huynh, senior vice president and general manager at AMD, expressed excitement about the new Ryzen AI PRO 300 Series, touting its AI processing capabilities and compatibility with essential applications for business users. This news is based on a press release statement from AMD. 
The company's forward-looking statements involve risks and uncertainties, and actual product performance and availability may differ from current expectations. In other recent news, Advanced Micro Devices, Inc. (AMD) has unveiled a range of new products aimed at enhancing AI infrastructure performance, including the AMD Instinct MI325X accelerators and networking components. This move is in line with the growing demands of AI applications and data centers. AMD's new offerings, such as the AMD Pensando Salina DPU and Pollara 400 NIC, are expected to be available in the first half of 2025. In addition, AMD is working with Oracle (NYSE:ORCL) Cloud Infrastructure to power its latest AI supercluster with AMD's Instinct MI300X accelerators. This collaboration emphasizes AMD's expanding presence in the cloud computing sector, particularly for AI-intensive applications. On the financial front, AMD's Q2 revenues reached $5.835 billion, surpassing Street consensus by $110 million, with its data center segment showing record revenue growth of 115% to $2.8 billion. Analysts from Cantor Fitzgerald maintained an Overweight rating on AMD shares, while Goldman Sachs (NYSE:GS) has kept its Buy rating, reflecting confidence in AMD's strategic initiatives and market position. These recent developments underscore AMD's ongoing commitment to innovation in high-performance computing and AI technology. AMD's latest announcement of its third-generation Ryzen AI PRO 300 Series mobile processors aligns with the company's strong market position and growth prospects. According to InvestingPro data, AMD boasts a substantial market capitalization of $266.13 billion, reflecting investor confidence in its future. The company's focus on AI-enhanced processors is particularly noteworthy given its financial performance. InvestingPro Tips indicate that AMD's net income is expected to grow this year, and analysts predict the company will be profitable. 
This positive outlook is supported by AMD's revenue of $23.28 billion over the last twelve months, with a revenue growth of 6.4% during the same period. AMD's innovation in the AI space could contribute to its high valuation multiples. The company is trading at a P/E ratio of 196.81, which suggests investors are pricing in significant future growth potential, likely driven by advancements like the new Ryzen AI PRO series. It's worth noting that AMD is a prominent player in the Semiconductors & Semiconductor Equipment industry, and its stock has shown a strong return of 56.88% over the last year. This performance underscores the market's positive reception of AMD's strategic direction in AI and high-performance computing. For investors seeking more comprehensive analysis, InvestingPro offers additional insights with 15 more tips available for AMD. These tips could provide valuable context for understanding AMD's market position and future prospects in the competitive semiconductor landscape.
[18]
Nvidia's Blackwell Chip Faces AMD's MI350 Challenge In 2025: CEO Lisa Su Says, 'Beginning, Not The End Of The AI Race' - NVIDIA (NASDAQ:NVDA), Advanced Micro Devices (NASDAQ:AMD)
To compete with Nvidia Corporation's NVDA upcoming Blackwell system, Advanced Micro Devices, Inc.'s AMD next-generation chip is scheduled to ship in the second half of 2025. What Happened: On Thursday, AMD disclosed plans for its next-gen MI350 chip, aimed at competing with Nvidia's new Blackwell system, with shipping anticipated in the second half of 2025. According to AMD, the Instinct MI350 series accelerators, built on the CDNA 4 architecture, are expected to offer up to 35 times better inference performance compared to their CDNA 3-based predecessors. The MI350 series also aims to maintain its leadership in memory capacity, boasting up to 288GB of HBM3E memory per accelerator. AMD CEO Lisa Su's vision is for AMD to become the "end-to-end AI leader" within the next decade, reported Financial Times. "You have to be extremely ambitious," she said, adding, "This is the beginning, not the end of the AI race." Despite trailing Nvidia in AI chip sales, AMD has emerged as Nvidia's closest competitor in the race to offer off-the-shelf AI chips. The company projects the total addressable market for AI chips to hit $400 billion by 2027. "When we first started, that was viewed as a really big number," Su said. "And I think people are moving towards our big number because of the tremendous demand there is for AI infrastructure." Why It Matters: On the same day, at the "Advancing AI 2024" event, AMD launched several high-performance computing solutions, including the 5th Gen AMD EPYC server CPUs, AMD Instinct MI325X accelerators, and AMD Pensando Salina DPUs, among others. Meanwhile, Nvidia has been making waves with its next-gen Blackwell GPU platform. 
Nvidia CEO Jensen Huang previously confirmed that Blackwell is in full production and progressing as planned, with demand for the platform described as "insane." This development coincides with Nvidia's customers preparing to roll out the Blackwell system in the current quarter. Microsoft Corporation announced earlier this week that it had become the first cloud provider to make the new GB200 chips available to its clients. Previously, it was also reported that at the Consumer Electronics Show 2025, Huang is scheduled to deliver a keynote address. The timing of the event aligns with speculation about Nvidia's next-generation GPUs expected to launch in 2025. It's also anticipated that Nvidia may provide updates on its RTX 50 series, including desktop GPUs built on the Blackwell architecture.
[19]
Analyst updates AMD stock price forecast ahead of AI event
Lisa Su has two words for people who think artificial intelligence is overhyped. "Completely wrong," the Advanced Micro Devices (AMD) chief executive said during an interview with Time. "Having lived in the technology business for the last couple of decades, every 10 years or so we see a major arc in technology, whether it was the beginning of the internet or the beginning of the PC or the beginning of mobile phones or the beginning of the cloud," she said. Su said AI is bigger than all those "in terms of how it can really impact our daily lives, our productivity, our business, our research -- all of those things." "And we're at the very beginning of the cycle," she added. Su said that people who subscribe to the theory that AI is a bubble are thinking too narrowly as they look for a return on investment right now or over the next six months. AMD CEO sees AI arc over the next 5 years "I think you have to look at this technology arc for AI over the next five years, and how does it fundamentally change everything that we do?" she said. "And I really believe that AI has that potential." When she took over as AMD's top executive, Su said a major piece of the company's strategy was to become a high-performance-computing leader. "What makes this a fun job is the technology that we're working on is impacting the lives of billions of people," she said. "Most of the things that you do in a day, somewhere, it goes through an AMD processor." Su said that probably the biggest thing that sets the company's strategy apart "is that we really believe in end-to-end AI in every aspect." "There are companies that are working on some aspect of AI," she said. "Our view is, 'hey, AI is going to be everywhere. AI is going to be throughout our entire product portfolio.' 
" During the interview, Su discussed AMD's MI300X, a graphics-processing unit designed to support generative artificial intelligence technologies. Generative AI uses machine learning to derive original content from existing text, photos, video and audio. "When we launched our MI300X last year, we had Microsoft, we had Meta, we had Oracle, as key marquee partners that we've come together and built great solutions [with], and that's what sets us apart in how we approach the market," she said. And now AMD is asking people to save the date of Oct. 10 when the company is scheduled to host "Advancing AI 2024." That's an in-person and livestreamed event designed to showcase the next-generation AMD Instinct accelerators and 5th Generation AMD EPYC server processors. The gathering will also feature networking and AI PC updates, highlighting AMD's growing AI solutions ecosystem. AMD's stock is up 147% year-to-date and a team of analysts at Bank of America led by Vivek Arya issued a research report about the company ahead of the October event. The firm, which maintained a buy rating and $180 price target on AMD shares, said that last year's AI event in December produced 19% to 80% stock returns one to three months later. Analyst: AMD can ride AI market "AMD is off to a remarkable start but it could be tougher to carve a bigger niche between Nvidia's (NVDA) 80% to 85%+ share, cloud incumbency, 15 year+ software-developer lead on one extreme, and the roughly 10% market share presence of cost-optimized custom application-specific integrated circuits (ASICs) from Broadcom (AVGO) and Marvell Technology (MRVL) on the other extreme," B of A said. If, however, AMD is able to show a path to 10% AI share by calendar 2026, the company would conceptually add around $5 billion (on top of $12.6 billion) in sales, with scenario EPS of around $8 to $9, compared with the consensus at $7.37, the investment firm said. B of A said that AMD is facing competition not just in AI. 
The stock is also exposed to pressures from Intel (INTC), near-term sluggish PC demand, longer-term rising competition from ARM-based rivals in servers and PC CPUs, and profitability in AI silicon as it becomes harder to raise prices or pass along the rising cost of high-bandwidth memory. The firm noted that the timing of the AI event is interesting in that it takes place before AMD's scheduled earnings report -- slated for the end of October -- "in what is normally the quiet period for the company." B of A said it was maintaining its buy rating because AMD can capitalize in the PC/server central-processing-unit market by taking share from Intel, which "remains in turmoil with frequent restructuring," and by riding the expanding AI market, "where the leader, Nvidia, continues to expand the addressable market that is always looking for alternative merchant and ASIC suppliers." Last month, Wells Fargo affirmed an overweight rating on AMD with a $205 price target after the company said Oracle Cloud Infrastructure (ORCL) would be deploying the MI300X GPUs to power its newest Oracle Cloud Infrastructure Compute Supercluster instance. The investment firm said the news was an "incremental positive." AMD's news release specifically highlights the MI300X GPUs as well positioned for inference capabilities, a theme the company will emphasize at its Oct. 10 event, Wells Fargo said.
[20]
What You Need To Know Ahead of AMD's Advancing AI Event
Analysts said the event could be a "catch-up" catalyst for AMD in the AI accelerator market. Advanced Micro Devices (AMD) is gearing up to showcase new tech at its 2024 Advancing AI event on Thursday, which Bank of America analysts said could be a "catch-up catalyst" for the chipmaker. AMD is expected to highlight its line of Instinct GPU accelerators and EPYC server processors at the event, and could potentially offer a glimpse into how it plans to capture a larger share of the AI accelerator market. Analysts expect a follow-up to AMD's MI300 series of accelerators launched in the fourth quarter of 2023. The company is off to a strong start in its first year of accelerator sales, Bank of America analysts said in a note Wednesday, guiding for more than $4.5 billion in sales this year. The big question is how large of a market share it can command with Nvidia (NVDA) dominating the AI sector. The current analyst consensus suggests that AMD is expected to hold a roughly 5% to 7% share of the AI accelerator market over the next couple of years (while Nvidia's share is north of 80%). However, if it could show a path to 10% by the end of 2026, the company would add about $5 billion in sales, Bank of America said. What could help is the announcement of high-profile companies that use AMD's MI300 series accelerators. Analysts said they believe AMD's MI300X is already used by Microsoft (MSFT), Oracle (ORCL), and Meta (META), and others could be announced at the event. Shares of AMD jumped nearly 10% the day after last year's AI event in December, and could surge again after this year's event. The stock climbed nearly 5% Friday to $170.90 and is up about 16% so far this year, thanks to surging AI demand.
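The share figures quoted by analysts can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below uses the LSEG data center revenue estimates cited elsewhere in this digest (AMD roughly $12.83 billion, Nvidia roughly $110.36 billion) and Bank of America's 10%-share scenario (about $5 billion on top of $12.6 billion); the arithmetic is our own illustration, not any analyst's model:

```python
# Back-of-the-envelope check on AMD's AI accelerator position,
# using figures quoted in this digest (LSEG estimates and the
# Bank of America scenario). All numbers are in billions of USD.
amd_dc = 12.83    # estimated AMD data center revenue this year
nvda_dc = 110.36  # estimated Nvidia data center revenue this year

# AMD's implied slice of the two companies' combined data center revenue.
combined = amd_dc + nvda_dc
amd_slice = amd_dc / combined
print(f"AMD share of combined AMD+Nvidia data center revenue: {amd_slice:.1%}")
# → AMD share of combined AMD+Nvidia data center revenue: 10.4%

# B of A's scenario: a path to 10% AI share by 2026 adds ~$5B in sales.
base_sales = 12.6
scenario_sales = base_sales + 5.0
print(f"Scenario data center sales: ${scenario_sales:.1f}B")
# → Scenario data center sales: $17.6B
```

Note that this slice measures revenue between only two vendors, so it overstates AMD's share of the total accelerator market, which the consensus cited above pegs at roughly 5% to 7%.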
[21]
Advanced Micro Devices falls despite unveiling new AI, server chips By Investing.com
Investing.com -- Advanced Micro Devices detailed plans Thursday to ramp production of its new artificial intelligence MI325X chip starting in the fourth quarter as the chipmaker looks to take the fight to rival Nvidia (NASDAQ:NVDA) and cash in on an increasing wave of enterprise spending on AI-related hardware. The MI325X chip, which is powered by the same architecture as the already-available MI300X, features a new variation of memory that will speed AI-related tasks, AMD said at its AI event in San Francisco. Vendors including Super Micro Computer (NASDAQ:SMCI) will begin shipping AMD's new AI chip, which the chipmaker hopes will help it compete with Nvidia's Blackwell AI chips, to customers in Q1 2025. Looking ahead to the back half of next year, AMD said it plans to launch its next-gen MI350 chips, which will be more powerful than the MI325 series as their performance will be enhanced by increased memory and a new architecture. The company also launched a new version of its server chips, formerly codenamed Turin. Advanced Micro Devices Inc (NASDAQ:AMD) shares, however, failed to cut losses and were recently down more than 3%.
[22]
AMD launches 5th Gen EPYC processors with leading performance By Investing.com
SAN FRANCISCO - Advanced Micro Devices, Inc. (NASDAQ:AMD) announced today the release of its 5th Gen AMD EPYC processors, which are designed to provide unmatched performance and efficiency for a wide array of data center workloads. The new processors, using the Zen 5 core architecture, are compatible with the SP5 platform and offer a variety of core counts ranging from 8 to 192. The top-end 192-core EPYC 9005 Series processor delivers up to 2.7 times the performance compared to competing products, while the 64-core AMD EPYC 9575F is specifically tailored for AI solutions requiring high CPU capabilities. The EPYC 9575F boosts up to 5GHz, surpassing the 3.8GHz offered by its competitors, and provides up to 28% faster processing to support demanding AI workloads. Dan McNamara, senior vice president and general manager of AMD's server business, emphasized the company's commitment to meeting the needs of the data center market. He highlighted AMD's track record of on-time roadmap execution and the trust customers place in AMD for performance, innovation, and energy efficiency. The new processors are engineered to handle various server workloads, from corporate AI initiatives to large-scale cloud infrastructures. The Zen 5 core architecture offers up to 17% better instructions per clock for enterprise and cloud workloads, and up to 37% higher IPC in AI and high-performance computing compared to the previous generation. AMD claims significant performance improvements over the Intel (NASDAQ:INTC) Xeon 8592+ CPU-based servers, with up to 4 times faster results in business applications like video transcoding, and up to 3.9 times quicker insights for science and HPC applications. The 5th Gen AMD EPYC processors also aim to provide fast deployment and insights for AI, with the 192-core EPYC 9965 CPU achieving up to 3.7 times the performance on AI workloads. 
The AMD EPYC 9575F can help a 1,000 node AI cluster drive up to 700,000 more inference tokens per second, enabling more efficient AI processing. By transitioning to data centers powered by these new processors, AMD suggests customers can achieve substantial performance for various workloads while using an estimated 71% less power and ~87% fewer servers. The full lineup of 5th Gen AMD EPYC processors is available now, with support from major server manufacturers and cloud service providers, offering an easy upgrade path for organizations seeking compute and AI leadership. This announcement is based on a press release statement from AMD. In other recent news, Advanced Micro Devices (AMD) has revealed its third-generation Ryzen AI PRO 300 Series mobile processors, boasting significant AI performance improvements. The new processors are expected to hit the market later this year. In addition, AMD has introduced a suite of products aimed at enhancing AI infrastructure performance, including the AMD Instinct MI325X accelerators, expected to start shipping in late 2024. These announcements follow AMD's strategic collaboration with Oracle (NYSE:ORCL) Cloud Infrastructure (OCI), where AMD's Instinct MI300X accelerators will power OCI's new AI supercluster. Analysts from Cantor Fitzgerald and Goldman Sachs (NYSE:GS) have maintained positive ratings on AMD, highlighting these significant developments. AMD's Q2 revenues surpassed Street consensus, reaching $5.835 billion, with its data center segment showing record revenue growth of 115% to $2.8 billion. Despite AMD's strides in the AI chip market, it is not expected to significantly affect Nvidia's data center revenue, as the demand for AI chips currently surpasses supply. These recent developments reflect AMD's commitment to providing high-performance computing solutions for AI applications. 
AMD's latest announcement of its 5th Gen EPYC processors aligns well with the company's strong market position and financial performance. According to InvestingPro data, AMD boasts a substantial market capitalization of $266.13 billion, reflecting investor confidence in its growth potential and technological advancements. The company's revenue growth of 6.4% over the last twelve months and a quarterly growth of 8.88% in Q2 2024 indicate a steady expansion, which is likely to be further bolstered by the release of these high-performance processors. This growth trajectory is supported by an InvestingPro Tip suggesting that AMD's net income is expected to grow this year. AMD's gross profit margin of 51.42% demonstrates its ability to maintain profitability while investing in cutting-edge technology like the new EPYC processors. This is particularly important as the company positions itself to capture a larger share of the AI and data center markets. InvestingPro Tips also highlight that AMD is a prominent player in the Semiconductors & Semiconductor Equipment industry, which is evident from its leadership in processor technology. The company's strong return over the last year, with a 56.88% price total return, reflects market enthusiasm for AMD's innovations and growth prospects. For investors seeking more comprehensive analysis, InvestingPro offers 15 additional tips for AMD, providing a deeper understanding of the company's financial health and market position.
[23]
MiTAC Computing Leverages Latest AMD Enterprise Technologies Offering Leadership Performance and Density for AI-Driven Data Center Workloads By Investing.com
/PRNewswire/ -- MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), today announced the launch of its new high-performance servers, featuring the latest AMD EPYC™ 9005 Series CPUs and AMD Instinct™ MI325X accelerators. "AMD is the trusted data center solutions provider of choice for leading enterprises worldwide, whether they are enabling corporate AI initiatives, building large-scale cloud deployments, or hosting critical business applications on-premises," said Dan McNamara, senior vice president, Server Business Unit, AMD. "Our latest 5th Gen AMD EPYC CPUs provide the performance, flexibility and reliability, with compatibility across the x86 data center ecosystem, to deliver tailored solutions that meet the diverse demands of the modern data center." "AMD is driving AI innovation at scale across diverse markets," said the President of MiTAC Computing Technology Corporation. "As a long-term technology partner of AMD, we're excited to launch our new AI Server, powered by the latest AMD EPYC 9005 Series Processors and AMD Instinct MI325X accelerators. Together, we're showcasing AI's transformative impact across industries." AMD EPYC™ 9005 Processors: Out-of-the-box performance and density leadership for the growing demands of AI-enabled, business-critical data center workloads. AMD EPYC™ 9005 Series processors are the newest generation of the powerful and efficient AMD EPYC processor family for servers, which has set hundreds of performance and efficiency world records. Advancements in the EPYC 9005 Series processor family are enabled by the breakthrough high-performance, highly efficient "Zen 5" core architecture and advanced microprocessor process technologies to better meet the needs of the modern AI-enabled data center.
The complete line of processor offerings includes a wide range of core counts (up to 192 cores, 384 threads per CPU), frequencies (up to 5 GHz), cache capacities, energy efficiency levels and competitive cost points, all complemented by the familiar x86 software compatibility that allows AMD EPYC 9005 Series CPU-based servers to readily support almost any business need. MiTAC Computing customers experience optimized performance and density leadership with 5th Gen AMD EPYC CPUs. With AMD EPYC™ 9005 Series CPUs at their core, MiTAC Computing's new servers are crafted for AI-driven, intensive workloads. They offer out-of-the-box performance and efficiency, allowing users to meet increasing demands while minimizing space and energy consumption. For AI/HPC products, MiTAC Computing presents three models. Featuring a standard 2U form factor, the MiTAC TYAN TN85-B8261 is a dual-socket GPU server designed specifically for HPC and deep learning. It supports up to four dual-slot GPU cards, 24 DDR5 RDIMM slots, and eight 2.5-inch hot-swap NVMe U.2 drive trays. Next is the 4U single-socket GPU server, the MiTAC TYAN FT65T-B8050. This model accommodates up to two high-performance GPU cards, features eight DDR5 RDIMM slots, and includes eight 3.5-inch SATA and two 2.5-inch NVMe U.2 hot-swap drive trays. Finally, the MiTAC G8825Z5 is an 8U dual-socket server that supports the all-new AMD Instinct™ MI325X GPU accelerators. It also offers eight 2.5-inch hot-swap U.2 drive trays and supports up to 4TB of DDR5-6000 memory, making it ideal for large-scale AI and HPC infrastructures. And there are two 2U models for cloud storage: the MiTAC TYAN TS70-B8056 and TS70A-B8056. The TS70-B8056 has 12 front 3.5-inch drive trays and two rear 2.5-inch hot-swap NVMe U.2 trays, while the TS70A-B8056, designed for high IOPS, features 26 2.5-inch hot-swap NVMe U.2 trays.
Also available is the efficient and space-saving MiTAC TYAN GC68C-B8056, a 1U single-socket cloud server equipped with 24 DDR5 registered DIMM slots and 12 tool-less 2.5-inch NVMe U.2 hot-swap drive bays. Furthermore, MiTAC Computing introduces two single-socket motherboards: the compact MiTAC TYAN S8050 and the rack-optimized MiTAC TYAN S8056. About MiTAC Computing Technology Corporation: MiTAC Computing Technology Corporation, a subsidiary of MiTAC Holdings Corp. (TSE:3706), specializes in cloud, AI/HPC and edge computing solutions and has over 30 years of design and manufacturing expertise. With a strong focus on large-scale data centers, MiTAC offers flexible and customized solutions for various systems and applications. Our product lineup includes TYAN servers, ORAN servers, high-performance AI servers, and other data center products. AMD, the AMD Arrow logo, AMD Instinct, EPYC, and combinations thereof are trademarks of Advanced Micro Devices, Inc. (NASDAQ:AMD). Other names are for informational purposes only and may be trademarks of their respective owners.
[24]
AMD's AI product event likely to be 'a fade-the-news event' says analyst By Investing.com
Investing.com -- AMD (NASDAQ:AMD) is set to host its "Advancing AI" event this week, where analysts expect to hear about new product announcements that could lift sales and drive the chipmaker's stock price. Meanwhile, analysts at Lynx Equity Strategies believe this will be "a fade-the-news event." "AMD stock has had a modest run-up over the past week on investor expectations for clarity on MI325X and its follow-ons. We think investors are likely to be disappointed," they emphasized. Lynx's team is skeptical about AMD's ability to close the gap with Nvidia (NASDAQ:NVDA), noting that the latter continues to dominate the market. Despite AMD's ambitious claims at the MI300X launch nearly a year ago, it has made minimal progress in challenging NVIDIA's stronghold. "We do not expect MI325X to change the narrative meaningfully," the analysts state, suggesting that AMD is likely to "remain a distant second" to its rival. While AMD is expected to highlight developments in data center CPUs and AI PCs, Lynx believes investor attention will remain firmly on the MI-series. The analysts warn that unless AMD can demonstrate a clear driver for market share gains, the stock could potentially retest its yearly lows. After AMD's Computex presentation a few months ago, there was some optimism around the company's next-gen data center GPUs, especially given the higher HBM density on AMD's MI325X -- 288GB compared to NVIDIA's 141GB on the H200. However, Lynx notes that customer traction has been weak in the months since. Although AMD management may boast superior benchmark results compared to NVIDIA's Hopper series, the firm's analysts remain unconvinced, noting that "while running real workloads in the field, we think MI300X continues to trail NVDA's H100." "We do not expect MI325X to change the narrative meaningfully. Superior HBM density is merely one of the factors at work; the software stack, backplane networking and power consumption matter just as much," analysts continued. 
They believe AMD has high hopes for the MI325X and is expanding its chip-on-wafer-on-substrate (CoWoS) capacity in anticipation of future demand. Notably, Broadcom Inc (NASDAQ:AVGO)'s strong guidance for its ethernet connectivity business at GPU-based AI data centers next year could indicate growing demand from AMD AI servers. Microsoft (NASDAQ:MSFT) also appears to be a key customer, though Lynx remains uncertain about "who the needle-movers are." Adding to the uncertainty, Meta Platforms (NASDAQ:META), which endorsed AMD's launch of MI300X last year, may have seen its commitment to AMD diminish recently. "We believe META may have firmed up its plans for internally developed silicon as a future alternative to NVDA. And Amazon/AWS too, we doubt if AMD is seen as a viable alternative to NVDA," Lynx explains. The firm also mentions Oracle (NYSE:ORCL) as a potential customer, along with interest from enterprises, but it cautions that the MI-series, along with its software stack, may not be the plug-and-play solution AMD has marketed it as. "Outside of MSFT we do not believe end customers are likely to spend resources to help bridge the gap as it is unclear what would be the payoff," said the analysts. "We expect the stock to trade down after the event today and likely to revisit the lows of the year." As for potential surprises at the event, Lynx suggests that while Amazon (NASDAQ:AMZN) or Google (NASDAQ:GOOGL) might be announced as new customers, this would likely be limited to externally facing cloud instances.
[25]
What Wall Street Thinks of AMD Stock Ahead of Thursday's 'Advancing AI' Event
Analysts expect AMD to introduce several products during the event, along with potentially revealing some new partnerships. Advanced Micro Devices (AMD) will hold its "Advancing AI" event this week, with analysts expecting new product announcements that could drive the stock higher and lift sales. AMD shares are more than 15% above where they started the year, though off the record levels set in March. Thirteen analysts tracked by Visible Alpha have a "buy" or equivalent rating on AMD stock, while three have a "hold" rating. AMD stock has an average target price of $190.56, meaning analysts broadly expect AMD shares will rise a further 11%. AMD closed Monday at just under $171 after changing only slightly to start the week. The company's shares rose nearly 20% in the days following a similar event last December. Analysts have said they expect AMD to announce new products at the Thursday event, which could pave the way for it to gain market share in the artificial intelligence (AI) accelerator space as it faces off against Nvidia (NVDA), the industry leader. Analysts said the "AI ecosystems partners, customers and developers" that AMD mentioned in the event's announcement could include new customers -- evidence of AMD's growing market share.
AMD announces its new MI325X AI accelerator chip, set to enter mass production in Q4 2024, aiming to compete with Nvidia's upcoming Blackwell architecture in the rapidly growing AI chip market.
Advanced Micro Devices (AMD) has unveiled its latest artificial intelligence chip, the Instinct MI325X, in a bold move to challenge Nvidia's dominance in the AI processor market. The announcement, made at an event in San Francisco, positions AMD as a serious contender in the rapidly expanding field of AI computing.
The MI325X boasts impressive specifications, including 153 billion transistors and 19,456 stream processors. Built on TSMC's 5nm and 6nm FinFET processes, the chip delivers up to 2.61 PFLOPs of peak eight-bit precision performance. AMD CEO Lisa Su claimed that the MI325X platform offers "up to 40% more inference performance than the H200 on Llama 3.1," referring to Meta's large-language AI model.
AMD is accelerating its product schedule, aiming to release new chips annually to compete with Nvidia and capitalize on the AI chip boom. The company has outlined plans for future generations, with the MI350 slated for 2025 and the MI400 for 2026. This aggressive strategy is designed to capture a significant portion of the AI chip market, which AMD projects will be worth $500 billion by 2028.
The introduction of the MI325X could potentially put pricing pressure on Nvidia, which has enjoyed high profit margins in the AI chip market. AMD's entry is seen as a potential catalyst for increased competition and innovation in the sector. However, AMD faces challenges in overcoming Nvidia's established ecosystem, particularly its CUDA programming language, which has become an industry standard.
In addition to the MI325X, AMD announced improvements to its ROCm software toolkit and new networking technology, aiming to provide comprehensive AI infrastructure solutions. The company also unveiled its 5th Gen EPYC CPUs, targeting the data center market with configurations ranging from 8 to 192 cores.
Despite the announcement, AMD's stock fell 3% during trading on Thursday, while Nvidia's shares rose 1.5%. Investors and analysts are closely watching AMD's progress in the AI chip market, as the company aims to narrow the gap with Nvidia, which currently holds over 90% market share.
Lisa Su expressed confidence in AMD's long-term prospects, stating, "This is the beginning, not the end of the AI race." The company's ambitious goal is to become the "end-to-end AI leader" over the next decade. As demand for AI infrastructure continues to grow, AMD is positioning itself to capitalize on what it predicts will be a $400 billion market by 2027.
AMD unveils its next-generation AI accelerator, the Instinct MI325X, along with new networking solutions, aiming to compete with Nvidia in the rapidly growing AI infrastructure market.
16 Sources
AMD's AI GPU business, led by the Instinct MI300, has grown rapidly to match the company's entire CPU operations in revenue. CEO Lisa Su predicts significant market growth, positioning AMD as a strong competitor to Nvidia in the AI hardware sector.
4 Sources
AMD is fast-tracking the release of its most powerful AI GPU, the Instinct MI350, to mid-2025 in an effort to compete with Nvidia's Blackwell series and capture a larger share of the booming AI hardware market.
3 Sources
AMD's CEO Lisa Su emphasizes the company's accelerated AI roadmap and the ongoing AI industry growth. She discusses AMD's strategic positioning and future plans in the rapidly evolving AI market.
2 Sources
AMD reports strong Q2 2024 earnings, driven by exceptional AI chip sales and data center growth. The company's Instinct MI300 accelerators gain traction in the AI market, challenging NVIDIA's dominance.
3 Sources
© 2025 TheOutpost.AI All rights reserved