Curated by THEOUTPOST
On Tue, 7 Jan, 8:07 AM UTC
37 Sources
[1]
Jensen Huang Declares the Age of "Agentic AI" at CES 2025 - A Multi-Trillion-Dollar Shift in Work and Industry
The Consumer Electronics Show (CES) 2025 kicked off with a bang as Nvidia CEO Jensen Huang took the stage to unveil the company's latest innovations. Sporting a flashy jacket he joked was perfect for Vegas, Huang took the audience through Nvidia's past, present and ambitious plans for the future. The spotlight? The rise of agentic AI. "The age of AI Agentics is here," Huang announced, describing a new wave of artificial intelligence. Unlike generative AI, which creates content and tools, agentic AI revolves around intelligent agents capable of assisting with tasks across industries. Huang called this shift a "multi-trillion-dollar opportunity," painting a bold picture of AI-driven workflows in medicine, human resources and software engineering. "AI agents are the new digital workforce," he said, predicting a future where every company's IT department will effectively become HR for AI agents. The keynote wasn't all talk. Huang revealed Nvidia's latest GPU lineup, the RTX Blackwell series, which he described as a game-changer. The GeForce RTX 5070 headlined the value pitch: Huang claimed it delivers the performance of the previous generation's flagship RTX 4090 at a jaw-dropping $549, far below the 4090's $1,599 price tag. "This is impossible without artificial intelligence," Huang explained, crediting AI-driven engineering for the efficiency boost. Other models include the RTX 5070 Ti at $749, the RTX 5080 at $999 and the flagship RTX 5090 at $1,999. With features like RTX Neural Shaders for lifelike rendering and DLSS 4 upscaling technology, these GPUs are designed to handle everything from gaming to complex AI workloads. In a demonstration, DLSS 4 rendered a scene at 238 frames per second, over eight times faster than traditional methods, all while keeping latency to just 34 milliseconds. Nvidia didn't stop there. Huang unveiled Project Digits, a personal AI supercomputer powered by the Grace Blackwell Superchip. It promises a petaflop of computing power, bringing supercomputing capabilities to AI researchers, students and developers right at their desks. Priced starting at $3,000, Project Digits is scheduled for release in May 2025. "AI will be mainstream in every application for every industry," Huang said, describing Project Digits as a tool to empower millions to innovate in the age of AI. Autonomous machines were also a key highlight. Huang introduced "Cosmos," a foundation model designed to accelerate training for AI systems in robotics and autonomous vehicles. Toyota will integrate Nvidia's Orin chips and DriveOS operating system into its next-generation autonomous vehicles to improve their advanced driver assistance systems.
Huang boldly predicted that "a trillion miles driven each year will soon be either highly or fully autonomous," calling it the first "multi-trillion-dollar robotics industry."
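To make the DLSS 4 demo figures above concrete, here is a rough back-of-the-envelope sketch. The native frame rate and the three-generated-frames-per-rendered-frame split are inferred rather than stated in the demo, so treat them as illustrative assumptions.

```python
# Back-of-the-envelope check of the DLSS 4 demo numbers reported above.
# Assumptions (not stated by Nvidia in the demo): the "over 8x" speedup applies
# to frame rate, and Multi Frame Generation outputs 3 AI-generated frames per
# rendered frame.

dlss4_fps = 238          # frames per second shown in the demo
claimed_speedup = 8      # "over eight times faster than traditional methods"
latency_ms = 34          # reported end-to-end latency

implied_native_fps = dlss4_fps / claimed_speedup            # ~30 fps without DLSS
generated_per_rendered = 3                                   # assumed frame-gen ratio
rendered_fps = dlss4_fps / (1 + generated_per_rendered)      # ~60 fps actually rendered

print(f"Implied native frame rate: ~{implied_native_fps:.0f} fps")
print(f"Frames rendered by the GPU per second: ~{rendered_fps:.0f}")
print(f"Frames generated by AI per second: ~{dlss4_fps - rendered_fps:.0f}")
print(f"Latency stays at {latency_ms} ms despite the extra generated frames")
```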
[2]
Jensen Huang Says Nvidia Is a "Technology Company," but It's Really an AI Company
Nvidia and Jensen Huang took over CES 2025 with the RTX 50-series debut, but the hardware is a vehicle for AI ambitions. Nvidia is a "technology company," not a "consumer" or "enterprise" company, as emphasized by CEO Jensen Huang. What does he mean, exactly? Doesn't Nvidia want consumers to spend hundreds or thousands of dollars on the new, expensive RTX 50-series GPUs? Don't they want more companies to buy their AI training chips? Nvidia is the kind of company with a lot of fingers in a lot of pies. To hear Huang tell it, if the crust of those pies is the company's chips, then AI is the filling. "Our technology influence is going to impact the future of consumer platforms," Huang, clad in his typical black jacket and the warm bosom of AI hype, said in a Q&A with reporters a day after his blowout opening CES keynote. But how does a company like Nvidia fund all those epic AI experiments? The H100 AI training chips made Nvidia such a tech powerhouse over the past two years, with a few stumbles along the way. But Amazon and other companies are trying to create alternatives to cut into Nvidia's near-monopoly. What happens if competition cuts the spree short? "We're going to respond to customers wherever they are," Huang said. Part of that is helping companies build "agentic AI," a.k.a. multiple AI models able to complete complex tasks. That includes several AI toolkits made to throw a bone to businesses. While the H100 has made Nvidia big, and RTX keeps gamers coming back, it wants its new $3,000 "Project Digits" AI processing hub to open up "a whole new universe" for those who can use it. Who will use it? Nvidia said it's a tool for researchers, scientists, and maybe students, or at least those who stumble across $3,000 in the cup of $1.50 instant ramen they're eating for dinner for the fifth night in a row. Nvidia made sure you knew about the RTX 5090's 3,352 TOPS of AI performance. Then, Huang's company dropped details on several software initiatives, both gaming and non-gaming related. None of his declarations were more confusing than its "world foundation" AI models. These models are trained on real-world environments and could be used to help autonomous vehicles or robots navigate their surroundings. It's a lot of future tech, and Huang admitted he failed to articulate it well to a crowd that had mostly come to see cool new GPUs. "[The world foundation model] understands things like friction, inertia, grabbing, object presence, and elements, geometric and spatial understanding," he said. "You know, the things that children know. They understand the physical world in a way that language models didn't know." Huang opened CES 2025 on Jan. 6 with a keynote that packed the Michelob Ultra arena in Las Vegas' Mandalay Bay casino. There was certainly a huge portion of gamers who'd come to see the latest RTX 50-series cards in the flesh, but more were there to see how a company as lucrative as Nvidia moves forward. RTX and Project Digits drew hollers and shouts from the crowd. When Huang spent half his time talking about his world foundation models, the audience didn't seem nearly as enthused. It points to how awkward AI messaging can be, especially for a company that owes much of its popularity to the attentive population of PC gamers. There has been so much talk about AI that it's easy to forget Nvidia was in this game years before ChatGPT came on the scene.
Nvidia's in-game AI upscaling tech, DLSS, has been around for close to six years, improving all the time, and it's now one of the best AI upscalers in games, though limited by its exclusivity to Nvidia's cards. It was good before the advent of generative AI. Now, Nvidia promises Transformer models will further enhance upscaling and ray reconstruction. To top it off, the touted multi-frame gen could possibly grant four times the performance for 50-series GPUs, at least if the game supports it. That is a boon for those who can afford the new RTX 50-series. The RTX 5090 tops off at $2,000. The gamers who would most benefit from frame gen are those who can only afford a lower-end GPU. Huang declined to offer any hints about an RTX 5050 or 5060, joking, "We announced four cards, and you want more?" The world foundation model is just a prototype, just like much of Nvidia's new AI software on display to the public. The real questions are: when will it be ready for primetime, and who will end up using it? Nvidia showed off oddball AI NPCs, in-game chatbots, AI nurses, and an audio generator last year. This year, it wants to bloom with its world foundation model, plus a host of AI "microservices," including a weird animated talking head that's supposed to serve as your PC's always-on assistant. Perhaps some of these will stick. In the cases where Nvidia hopes AI replaces nurses or audio engineers, we hope that doesn't happen. Huang considers Nvidia "a small company" with 32,000 worldwide employees. Yes, that's less than half the staff Meta has, but you can't think of it as small in terms of its influence on the market for AI training chips. Because of its market position, it holds an outsized influence on the tech industry. The more people using AI, the more people will need to buy its AI-specific GPUs, plus any of its other AI software. If everybody buys their own at-home AI processing chip, they don't have to rely on outside data centers and external chatbots. Nvidia, just like every tech company, just needs to find a use for AI beyond replacing all our jobs.
[3]
Nvidia goes all in on AI agents and humanoid robots at CES
As the AI world races toward next-generation breakthroughs, Nvidia (NVDA) fortified its position with a flood of new chips, software and services designed to keep the industry plugged into its expanding tech ecosystem. Nvidia chief executive Jensen Huang announced a suite of AI tools and updates during the first keynote speech at the Consumer Electronics Show (CES) on Monday, focusing on AI agents -- systems that can complete tasks autonomously. Huang demonstrated these capabilities through animated versions of himself in different outfits, showing how AI agents could handle roles like customer service, coding, and research assistance. "The IT department of every company is going to be the HR department of AI agents in the future," Huang said in the keynote. The chipmaker unveiled AI Blueprints, which will help companies build and deploy these AI agents using technology built on Meta's (META) Llama models. These "knowledge robots," as Nvidia describes them, can analyze large amounts of data, summarize information from videos and PDFs, and take actions based on what they learn. To make this happen, Nvidia partnered with five leading AI companies -- CrewAI, Daily, LangChain, LlamaIndex, and Weights & Biases -- which will help integrate Nvidia's technology into usable tools for businesses. "These AI agents act like 'knowledge robots' that can reason, plan and take action to quickly analyze large quantities of data, summarize and distill real-time insights from video, PDF and other images," Justin Boitano, Nvidia's vice president of enterprise AI software products, said in a statement. Nvidia also made a major push into robotics, unveiling new tools to help companies simulate and deploy robot workforces. The centerpiece is "Mega," a new Omniverse Blueprint that lets companies develop, test, and optimize robot fleets in virtual environments before deploying them in real warehouses and factories. "The ChatGPT moment for general robotics is just around the corner," Huang said during his keynote. To back this claim, Nvidia announced a collection of robot foundation models, including new capabilities for generating synthetic motion data for training humanoid robots. The company says these pre-trained models were developed using massive amounts of data, including millions of hours of autonomous driving and drone footage. A key part of this effort is the new Isaac GR00T Blueprint, which helps solve a major challenge in robotics: generating the massive amounts of motion data needed to train humanoid robots. Instead of the expensive and time-consuming process of collecting real-world data, developers can use GR00T to generate large synthetic datasets from just a small number of human motions, as sketched below. "Collecting these extensive, high-quality datasets in the real world is tedious, time-consuming, and often prohibitively expensive," Nvidia said in a press release. The new tools are already attracting major industry players. KION Group, a supply chain solutions company, is working with Accenture (ACN) to use Nvidia's Mega blueprint to optimize warehouse operations that involve both human workers and robots.
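The data-multiplication idea behind approaches like Isaac GR00T can be illustrated with a toy example. This is not Nvidia's actual pipeline or API; the array shapes, the augment helper and the noise model are invented purely to show how a few recorded demonstrations can be expanded into a much larger synthetic training set.

```python
import numpy as np

# Toy illustration of synthetic motion generation: expand a handful of recorded
# human motion trajectories into a much larger training set by randomly
# perturbing them. This is NOT NVIDIA's Isaac GR00T pipeline, just a sketch of
# the data-multiplication concept described above.

rng = np.random.default_rng(0)

# Pretend we captured 10 demonstrations, each 200 timesteps of 30 joint angles.
human_demos = rng.standard_normal((10, 200, 30))

def augment(demo: np.ndarray, n_variants: int, noise_scale: float = 0.05) -> np.ndarray:
    """Create n_variants perturbed copies of one demonstration."""
    noise = rng.standard_normal((n_variants, *demo.shape)) * noise_scale
    return demo[None, :, :] + noise   # broadcast the original across the variants

synthetic = np.concatenate([augment(d, n_variants=1000) for d in human_demos])
print(f"{human_demos.shape[0]} human demos -> {synthetic.shape[0]} synthetic trajectories")
```

A real pipeline would perturb motions in ways that respect physics (joint limits, contact, balance), which is exactly the part simulation platforms are meant to provide.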
Nvidia's push into the physical world didn't stop there: it announced the expansion of its automobile partnerships, including with the world's largest automaker, which will use its accelerated computing and AI for driving assistance capabilities and autonomous vehicles. Toyota (TM) will use Nvidia's DRIVE AGX Orin system-on-a-chip (SoC) technology for driving assistance in its next-generation vehicles. Meanwhile, the chipmaker said Aurora (AUR) and Continental will both use Nvidia's DRIVE accelerated compute to deploy driverless trucks through a long-term strategic partnership. On the consumer front -- it is CES, after all -- Huang showed off the next generation of its RTX Blackwell GPUs, with the four versions priced between $549 and $1,999. The popular gaming chips will be released later this month and in February. And to cap off the keynote, Huang lifted the curtain on Project DIGITS, a palm-sized personal AI supercomputer powerful enough to run AI models with up to 200 billion parameters -- for comparison, GPT-3 has 175 billion parameters, while GPT-4 is rumored to have 1.76 trillion. The device, which will be available starting at $3,000 from Nvidia and its partners in May, aims to put AI development in more hands. "AI will be mainstream in every application for every industry," Huang said in a statement. "With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers. Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI."
[4]
Nvidia Is Officially on the A.I. Agents Train With New Family of LLM Models
"The IT department of every company is going to be the HR department of A.I. agents in the future," Jensen Huang said. Nvidia (NVDA) is going all-in on A.I. agents as it seeks to cash in on what CEO Jensen Huang calls a "multi-trillion dollar opportunity" that could change how IT teams operate. During his keynote speech at the 2025 Consumer Electronics Show (CES) yesterday (Jan. 6) in Las Vegas, Huang unveiled Nvidia's Nemotron family of large language models that developers can use to build A.I. agents -- autonomous bots that can perform complex tasks without human control. Sign Up For Our Daily Newsletter Sign Up Thank you for signing up! By clicking submit, you agree to our <a href="http://observermedia.com/terms">terms of service</a> and acknowledge we may use your information to send you emails, product samples, and promotions on this website and other properties. You can opt out anytime. See all of our newsletters Built with Meta (META)'s Llama model, the open-sourced Nemotron models, including Nano, Super, and Ultra, allow developers to "create and deploy" A.I. agents they can fine tune to specific business needs, Nvidia said. Applications include customer support, fraud detection, and inventory management optimization. By deploying A.I. agents, companies can "achieve unprecedented productivity," according to the company. "In the future, these A.I. agents are essentially a digital workforce that are working alongside your employees doing things for you on your behalf," Huang said during his keynote. "The way you would bring these specialized agents into your company is to onboard them just like you would on-board an employee." In a live demonstration, Huang showcased examples of A.I. agents with digital replicas of his face. The A.I. research assistant agent for students, for instance, is designed to take documents like lectures, academic journals and financial results and synthesize them into an interactive podcast for "easy learning." The software security agent can scan software and alert developers if vulnerabilities are detected so they can take immediate action. Other A.I. agents displayed included roles like financial analyst, employee support and factory operations. "The IT department of every company is going to be the HR department of A.I. agents in the future," Huang said. Major companies are already lining up to use Nvidia's Nemotron models. One early user is the enterprise software giant SAP. Philipp Herzig, chief A.I. officer at SAP, said in a release the company expects "hundreds of millions of enterprise users" to interact with Nvidia's A.I. agents to "accomplish their goals faster than ever before," Another enterprise software giant, ServiceNow, will use Nvidia's new models to "build advanced A.I. agent services" that can solve a complex range of problems in a move to "achieve more with less effort," Jeremy Barnes, the company's vice president of platform A.I., said in a release. Nemotron is just one of many products Huang announced during his nearly two-hour-long keynote. The CEO also unveiled a new family of its latest Blackwell chips which the company claims is three times more powerful than previous generations of GPUs. The Blackwell family is expected to create a $100 billion market opportunity for Nvidia, Stifel analyst Ruben Roy wrote in a note last November. On top of that, Huang announced the GB200 NVl2, a super chip that it claims will supercharge the capacity for data centers to run A.I. workloads. 
Nvidia isn't just betting big on its highly sought-after chips. Huang also announced the Cosmos foundation model to advance physical A.I. like humanoid robots, the DRIVE Hyperion platform to train self-driving vehicles, and Project Digits, a $3,000 personal supercomputer that Nvidia says is 1,000 times more powerful than a typical laptop. Nvidia also announced two autonomous driving partnerships, one with Toyota and the other with Aurora, a self-driving truck startup. The latest A.I. developments cement Nvidia's position as a global leader in the A.I. revolution. Prior to Huang's speech, Nvidia's stock jumped more than 3.4 percent to close at a record high of $149.43 per share in anticipation of the CES announcements. That surge sent Nvidia's market value up to $3.47 trillion, briefly surpassing Apple's. Nvidia's market value grew by over $2 trillion last year.
[5]
Nvidia CEO Jensen Huang: "AI Agents Likely to Be a Multitrillion-Dollar Opportunity" | The Motley Fool
Last Monday night, Nvidia (NVDA) CEO Jensen Huang gave the opening keynote speech to kick off CES 2025, which ran until Friday in Las Vegas. Nvidia is the leader in providing chips -- primarily graphics processing units (GPUs) -- and related technology to enable artificial intelligence (AI) capabilities. So, Huang naturally spent much of his approximately 1.5-hour presentation on the topic of AI. He covered Nvidia's new AI-related products and partnerships along with how he sees the AI industry evolving. There was much great content in Huang's speech, but one of the most exciting things for Nvidia stock investors was this comment: "AI agents [are] likely to be a multitrillion-dollar opportunity." A group of AI agents is a "digital workforce," as Huang said last Tuesday at Nvidia's CES financial analyst conference. The development of AI agents is now possible due to generative AI, a relatively new technology that greatly increases the possible use cases of AI. Generative AI's capabilities were first demonstrated to consumers in late 2022 with OpenAI's release of its ChatGPT chatbot. What differentiates a chatbot, which can be extremely useful in some situations, from an AI agent is the degree of autonomy. A chatbot can do things like answer questions, generate text, and help solve problems. But it is not capable of working independently or taking initiative, as an AI agent can do (the schematic loop sketched below illustrates the difference). On last quarter's earnings call, CFO Colette Kress summarized Nvidia's involvement in AI agents: Nvidia AI Enterprise, which includes Nvidia NeMo and NIM microservices, is an operating platform of agentic AI. Industry leaders are using Nvidia AI to build copilots and agents. Working with Nvidia, Cadence [Design Systems], Cloudera, Cohesity, NetApp, Salesforce, SAP, and ServiceNow are racing to accelerate development of these applications with the potential for billions of agents to be deployed in the coming years. Consulting leaders like Accenture and Deloitte are taking Nvidia AI to the world's enterprises. A couple of comments about this quote: First, while Nvidia AI Enterprise is the company's operating platform to create AI agents, it's not exclusively devoted to agentic AI. Second, the list of companies using Nvidia's technology to develop AI agents is not meant to be all-inclusive. The process varies, but in general, large enterprises use Nvidia AI Enterprise to create AI agent versions of their tools and then rent out these agents to their customers via the cloud. Moreover, companies across various industries are expected to use AI agents to improve their own operations. Management expects "Nvidia AI Enterprise full-year revenue to increase over 2x from last year," Kress said on last quarter's earnings call. This fast growth is being driven partly by the rush among enterprises to develop AI agents. There is indeed a rush because top execs in many industries know that their businesses will suffer if they are slower than their competitors to use AI agents to improve their own operations and -- in the case of cloud-based software companies -- to offer AI agents to their customers. At Nvidia's CES financial analyst conference, Huang made this comment that highlights how fast he expects AI agents to be used in certain fields: "There are 30 million software engineers [globally]. Starting next year, if a software engineer in your company is not assisted with an AI [agent] you are losing already fast."
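To make the chatbot-versus-agent distinction above concrete, here is a schematic agent loop: instead of answering one prompt, an agent repeatedly plans, calls tools, and acts until a goal is met. This is a generic sketch, not Nvidia NeMo or AI Enterprise code; the plan() callable and the stub tools are hypothetical.

```python
# A schematic agent loop: the model (via plan) decides the next action, tools
# are called autonomously, and the result is fed back in until the goal is met.
from typing import Callable

def run_agent(goal: str, plan: Callable[[str, list], dict], tools: dict, max_steps: int = 10):
    history = []
    for _ in range(max_steps):
        step = plan(goal, history)                        # an LLM picks the next action
        if step["action"] == "finish":
            return step["answer"]
        result = tools[step["action"]](**step["args"])    # autonomous tool call
        history.append((step, result))                    # feed the observation back in
    return "gave up after max_steps"

# Example wiring with stub tools; a real deployment would back these with
# ticketing systems, databases, code runners, and so on.
tools = {
    "search_tickets": lambda query: f"3 open tickets matching '{query}'",
    "draft_reply": lambda ticket_id: f"Draft reply prepared for ticket {ticket_id}",
}
```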
Along with software engineers, Huang expects that folks who develop marketing campaigns will be among the early adopters of AI agents. Moreover, he added that all knowledge workers -- a number he pegged at about 1 billion worldwide -- will eventually be assisted by AI agents. Huang didn't expand on what he meant by "multitrillion" dollars, but "multi" is generally defined as more than two. So, multitrillion should mean at least $3 trillion. Moreover, he didn't share how long he thought it would take the agentic AI market to reach that sum in annual revenue. But we know from his remarks that he expects this market to grow very rapidly. Let's go with "multitrillion" referring to $3 trillion, which is a conservative assumption. To put this massive number in context, let's assume that Nvidia captures just 5% of that $3 trillion agentic AI market. This is another conservative estimate, given Nvidia's dominance of AI-enabling chips and related technology. Five percent of $3 trillion is $150 billion. This would be nearly all new revenue, since Nvidia is in the very early stages of making money from AI agents. In its third quarter of fiscal year 2025 (ended Oct. 27, 2024), Nvidia generated revenue of $35.1 billion, and Wall Street expects its fiscal year 2025 (ends late January) revenue to grow 112% year over year to $129.1 billion. These numbers illustrate how the AI agent market has the potential to absolutely turbocharge Nvidia's revenue growth, which is already impressive; the quick arithmetic below runs the comparison. Much higher revenue should lead to much higher earnings, which, in turn, should help power Nvidia stock higher over the long term.
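The scenario above reduces to simple arithmetic, shown here with the figures cited in the article.

```python
# Rough arithmetic behind the scenario described above.
agentic_market = 3e12          # "multitrillion" taken conservatively as $3 trillion
nvidia_share = 0.05            # assumed 5% capture
potential_revenue = agentic_market * nvidia_share
print(f"5% of a $3T agentic AI market: ${potential_revenue / 1e9:.0f}B per year")

fy2025_revenue_est = 129.1e9   # Wall Street's full-year estimate cited above
q3_fy2025_revenue = 35.1e9     # quarter ended Oct. 27, 2024
print(f"...versus an expected FY2025 total of ${fy2025_revenue_est / 1e9:.1f}B")
print(f"...i.e. about {potential_revenue / fy2025_revenue_est:.1f}x the full-year estimate")
```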
[6]
AI Agents in 2025: Nvidia's Vision for a Machine-Led Economy
Nvidia's CEO, Jensen Huang, has declared 2025 as the "Year of AI Agents," emphasizing their potential to automate tasks across industries. These agents are designed to execute simple instructions and complex multistep tasks, enhancing efficiency and productivity. Nvidia has already integrated AI agents into its chip design processes, showcasing their practical applications and benefits. At the Consumer Electronics Show (CES) 2025, Nvidia unveiled the GeForce RTX 50-series GPUs, powered by the Blackwell architecture. These GPUs are engineered to handle the intensive computational demands of AI applications, facilitating the development and deployment of sophisticated AI agents. The RTX 5090, priced at $1,999, and the RTX 5070, at $549, offer enhanced performance and cost-efficiency, making AI technology more accessible to a broader audience.
[7]
What You Need to Know About Nvidia's AI Announcements at CES 2025 - Decrypt
After a record-breaking 2024, Nvidia is kicking off 2025 with a bang, unveiling a slate of products that could solidify its dominance in the fields of AI development and gaming. CEO Jensen Huang took the stage at CES in Las Vegas to showcase new hardware and software offerings that span everything from personal AI supercomputers to next-generation gaming cards. Nvidia's biggest announcement: Project DIGITS, a $3,000 personal AI supercomputer that packs a petaflop of computing power into a desktop-sized box. Built around the new -- and up until now, secret -- GB10 Grace Blackwell Superchip, this machine can handle AI models with up to 200 billion parameters while drawing power from a standard outlet. For heavier workloads, users can link two units to tackle models up to 405 billion parameters. For context, the largest Llama 3.1 model, the most advanced open-source LLM from Meta, has 405 billion parameters and cannot be run on consumer hardware. Up until now, it required around eight Nvidia A100/H100 GPUs, each costing around $30K, totaling more than $240K in processing hardware alone. Two of Nvidia's new consumer-grade AI supercomputers would cost $6K and be capable of running the same quantized model. "AI will be mainstream in every application for every industry. With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers," Jensen Huang, CEO of Nvidia, said in an official blog post. "Placing an AI supercomputer on the desks of every data scientist, AI researcher, and student empowers them to engage and shape the age of AI." For those who love technical details, the GB10 chip represents a significant engineering achievement born from a collaboration with MediaTek. The system-on-chip combines Nvidia's latest GPU architecture with 20 power-efficient ARM cores connected via NVLink-C2C interconnect. Each DIGITS unit sports 128GB of unified memory and up to 4TB of NVMe storage. Again, for context, the most powerful consumer GPUs to date pack around 24GB of VRAM (the memory needed to run AI models) each, and the data-center H100 starts at 80GB. Companies are rushing to deploy AI agents, and Nvidia knows it, which is probably why it developed Nemotron, a new family of models that comes in three sizes, and announced its expansion today with two new offerings: Nvidia NIM microservices for video summarization and understanding, and Nvidia Cosmos to give Nemotron vision capabilities -- the ability to understand visual instructions. Until now, the LLMs were text-only, though they excel at instruction following, chat, function calling, coding, and math tasks. They're available through both Hugging Face and Nvidia's website, with enterprise access through the company's AI Enterprise software platform. Again, for context, in the LLM Arena, Nvidia's Llama Nemotron 70b ranks higher than the original Llama 405b developed by Meta. It also beats different versions of Claude, Gemini Advanced, Grok-2 mini and GPT-4o. Nvidia's agent push also extends to infrastructure. The company announced partnerships with major agentic tech providers like LangChain, LlamaIndex, and CrewAI to build blueprints on Nvidia AI Enterprise. These ready-to-deploy templates tackle specific tasks and make it easier for developers to build highly specialized agents. A new PDF-to-podcast blueprint aims to compete with Google's NotebookLM, while another blueprint helps build video search and summary agents.
Developers can test these blueprints through the new Nvidia Launchables platform, which enables one-click prototyping and deployment. Nvidia saved its gaming announcements for last, unveiling the much-anticipated GeForce RTX 5000 Series. The flagship RTX 5090 houses 92 billion transistors and delivers 3,352 trillion AI operations per second -- double the performance of the current RTX 4090. The entire lineup features fifth-generation Tensor Cores and fourth-generation RT Cores. The new cards introduce DLSS 4, which can boost frame rates up to 8x by using AI to generate multiple frames per render. "Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives," Jensen Huang said. "Fusing AI-driven neural rendering and ray tracing, Blackwell is the most significant computer graphics innovation since we introduced programmable shading 25 years ago." The new cards also employ transformer models for super-resolution, promising highly realistic graphics and a lot more performance for the price -- which is not cheap, by the way: $549 for the RTX 5070, with the 5070 Ti at $749, the 5080 at $999, and the 5090 at $1,999. If you don't have that kind of money and want to game, don't worry: AMD also announced its Radeon RX 9070 series today. The cards are built on the new RDNA 4 architecture using a 4nm manufacturing process and feature dedicated AI accelerators to compete with Nvidia's Tensor Cores. While full specifications remain under wraps, AMD's latest Ryzen AI chips already achieve 50 TOPS at peak performance. Sadly, Nvidia is still the king of AI applications thanks to CUDA, its proprietary compute platform. To counter that, AMD has secured partnerships with HP and Asus for system integration, and over 100 enterprise platform brands will use AMD Pro technology through 2025. The Radeon cards are expected to hit the market in Q1 2025, setting up an interesting battle in both gaming and AI acceleration.
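Circling back to the Project DIGITS figures cited earlier in this piece, a rough memory estimate shows why a 200-billion-parameter ceiling on a 128GB box is plausible. The 4-bit (0.5 bytes per weight) quantization is an assumption consistent with the "quantized model" caveat above, and the estimate ignores activations and KV cache, so treat it as a lower bound.

```python
# Back-of-the-envelope memory math for Project DIGITS, assuming the quantized
# models mentioned above use roughly 4-bit (0.5 byte) weights.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9

for params in (200, 405):
    fp16 = weight_memory_gb(params, 2.0)   # unquantized half precision
    int4 = weight_memory_gb(params, 0.5)   # 4-bit quantization (assumed)
    print(f"{params}B params: ~{fp16:.0f} GB at FP16, ~{int4:.0f} GB at 4-bit")

print("One DIGITS unit: 128 GB unified memory; two linked units: 256 GB")
```

Under these assumptions, 200B parameters at 4-bit is roughly 100 GB of weights, which fits a single 128GB unit with headroom, while 405B needs about 200 GB and therefore two linked units.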
[8]
Nvidia's secret weapon: It's the software, stupid
The big picture: Nvidia's Jensen Huang kickstarted CES this year with his keynote. We listened to it and read a fair amount of analysis since. If you don't have 90 minutes to spare, we can sum it up for you: Nvidia has all the software. Or at least it has more than you do. Want to build a robot? They have software for that. Design a factory? Check. Autonomous cars, drug discovery, video games - they have that too. And it is not just a basic application on offer; it has multiple layers - for designing a robot, modeling out its physical world interactions, and then putting it into production. Nvidia has software for all of those. Editor's Note: Guest author Jonathan Goldberg is the founder of D2D Advisory, a multi-functional consulting firm. Jonathan has developed growth strategies and alliances for companies in the mobile, networking, gaming, and software industries. This is not exactly news; we have written about this before. But the point of all this is to drive home the message that most companies will get a big leg up by starting with Nvidia's offerings. In fairness, we have no idea how successful any of these will be. We are fairly certain that Nvidia is not too clear on that either. Their superpower is the ability to take chances without fear of failure, and our suspicion is that many of this week's announcements will face mixed results. That being said, the sheer quantity and depth of their offerings should be sobering for everyone else. Put simply, for companies that do not plan to train their own foundational models, working with Nvidia's tools is going to be the easiest way to develop AI applications. (At CES, Jensen showcased AI-powered simulations for autonomous driving, highlighting Nvidia's Cosmos platform.) This is especially true for other semiconductor companies. Here, Nvidia's lead is doubly formidable. First, competing with Nvidia in selling AI semis requires a massive investment in software, and probably half a decade to build it out. AMD is a year or two along that journey, and they are way ahead of number three. Broadcom does not have the software offerings, but they are going to do just fine selling to that handful of companies building their own foundational models. Everyone else has a long journey just to reach table stakes. The other item we pulled out of Huang's remarks was the level to which Nvidia is 'eating its own dog food.' They seem to be using AI tools to accelerate the development of their own chips. We think it is too soon to tell how much of the semiconductor design cycle can benefit from transformer-based AI models, but if even half of the workflow can be improved (dare we say 'accelerated') by AI, then Nvidia is going to have a meaningful productivity advantage over its competitors. Last year, Nvidia added $70 billion of revenue and $52 billion in operating profit, while only adding $6 billion in operating expenses. And now there is a risk that they are going to get even more productive?
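The operating-leverage point in that last paragraph is easy to quantify with the figures given.

```python
# How much of each incremental revenue dollar fell through to operating profit,
# using the year-over-year deltas cited above.
added_revenue = 70e9
added_operating_profit = 52e9
added_operating_expense = 6e9

incremental_margin = added_operating_profit / added_revenue
print(f"Incremental operating margin: {incremental_margin:.0%}")                              # ~74%
print(f"Opex added per added revenue dollar: ${added_operating_expense / added_revenue:.2f}")  # ~$0.09
```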
[9]
Biggest Nvidia Takeaways From Jensen Huang's CES 2025 Keynote
LAS VEGAS (AP) -- Nvidia CEO Jensen Huang unveiled a suite of new products, services and partnerships at CES 2025. In a packed Las Vegas arena, Huang kicked off the CES this week with his vision for how his company's products will drive gaming, robotics, personal computing and even self-driving vehicles forward. Here's a look at the biggest announcements to come out of his appearance.
New graphics cards and AI chips
Going back to its roots in gaming, the chipmaker and AI darling unveiled its GeForce RTX 50 Series desktop and laptop GPUs -- its consumer graphics processor units for gamers, creators and developers. Huang said the GPUs, which use the company's next-generation artificial intelligence chip Blackwell, can deliver breakthroughs in AI-driven rendering. "Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives," Huang said, adding that Blackwell "is the most significant computer graphics innovation since we introduced programmable shading 25 years ago." Blackwell technology is now in full production, he said. The flagship RTX 5090 model will be available in January for $1,999. The RTX 5070 will launch later in February for $549.
AI models to help with robotics and vehicles
Huang also introduced a series of new AI models -- dubbed Cosmos -- that can generate cost-efficient photo-realistic video that can then be used to train robots and other automated services. The open-source model, which works with Nvidia's Omniverse -- a physics simulation tool -- to create more realistic video, promises to be much cheaper than traditional forms of gathering training data, such as having cars record road experiences or having people teach robots repetitive tasks. Central to this is Nvidia's new partnership with Japanese automaker Toyota to build its next-generation autonomous vehicles, and its announced partnership with Aurora to power its autonomous shipping trucks. Nvidia's DriveOS operating system would power the new cars, which Huang said has the highest standard of safety. "I predict that this will likely be the first multi-trillion dollar robotics industry." Aurora, based in Pittsburgh, plans to launch its driverless trucks -- with Nvidia's hardware -- commercially in April 2025.
And a supercomputer on your desk
And finally, Huang announced Project DIGITS, a $3,000 desktop computer targeted at developers or gen AI enthusiasts who want to experiment with AI models at home. The machine will launch in May and is powered by the new Blackwell chip. In all, Project DIGITS will allow users to run AI models with up to 200 billion parameters. This means models previously requiring expensive cloud infrastructure to operate can run on your desktop.
[12]
Nvidia founder Jensen Huang unveils next generation of AI and gaming chips at CES 2025
In a packed Las Vegas arena, Nvidia founder Jensen Huang stood on stage and marveled over the crisp real-time computer graphics displayed on the screen behind him. He watched as a dark-haired woman walked through ornate gilded double doors and took in the rays of light that poured in through stained glass windows. "The amount of geometry that you saw was absolutely insane," Huang told an audience of thousands at CES 2025 Monday night. "It would have been impossible without artificial intelligence." The chipmaker and AI darling unveiled its GeForce RTX 50 Series desktop and laptop GPUs -- its most advanced consumer graphics processor units for gamers, creators and developers. The tech is designed for use on both desktop and laptop computers. Ahead of Huang's speech, Nvidia stock climbed 3.4% to top its record set in November. Nvidia and other AI stocks keep climbing even as criticism rises that their stock prices have already shot too high, too fast. Despite worries about a potential bubble, the industry continues to talk up its potential. Huang said the GPUs, which use the company's next-generation artificial intelligence chip Blackwell, can deliver breakthroughs in AI-driven rendering. "Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives," Huang said, adding that Blackwell "is the most significant computer graphics innovation since we introduced programmable shading 25 years ago." Blackwell technology is now in full production, he said. Building on the tech Nvidia released 25 years ago, the company announced that it would also introduce "RTX Neural Shaders," which use AI to help render game characters in deep detail -- a task that's notoriously tricky because people can easily spot a small error on digital humans. Huang said Nvidia is also introducing a new suite of technologies that enable "autonomous characters" to perceive, plan and act like human players. Those characters can help players plan strategies or adapt tactics to challenge players and create more dynamic battles. In addition to Nvidia, tech giants such as AMD, Google and Samsung are at CES 2025 to unveil artificial intelligence tools aimed at helping both content creators and consumers alike in their quest for entertainment.
[13]
Nvidia Hands-Down Won AI At CES 2025, And Also The Show Itself. Here's Why That Matters
There were plenty of AI announcements at CES 2025 and while some were charming -- like the robot vacuum that can pick up dirty socks or TVs that can generate recipes -- none of them has the power to transform society the way Nvidia's do. Calling Nvidia the chipmaker powering the AI revolution is no overstatement, which is why its Cosmos AI model won the official Best of CES award not only for AI, but for the entire show. CNET Group, which is made up of CNET, PCMag, ZDNET, Mashable and Lifehacker, is the official awards partner for CTA, which puts on the annual mega tech show. Back to the robots for a moment: if Roborock and Dreame are making home tech cleaning helpers that use AI to identify clutter that doesn't belong and ledges to "climb" over with their respective robot arms and legs, then Nvidia is the engine that is openly releasing models to allow robots like these to function in the real world. Not solely robots, either, but also smart glasses that process speech and images from the surrounding world, and cars -- Nvidia and Toyota have already inked a deal to use the Cosmos AI model to train cars. (The company also released powerful new graphics cards.) As ZDNET Editor-in-Chief Jason Hiner put it, "Nvidia Cosmos demonstrates the biggest and boldest ambition we've seen at CES 2025 for how technology could help people and communities in the years ahead." Earlier this week, Nvidia CEO Jensen Huang took the stage to unveil Cosmos, the foundational AI model that helps robots and autonomous vehicles understand the physical world, calling it "the ChatGPT moment for robotics." Huang also announced a new chip named Thor for cars and trucks that uses AI to process visual information coming in from cameras and lidar sensors to lead the way in level 4 autonomous driving. And he revealed the 50-series lineup of gaming and laptop GPUs that promises to deliver massive leaps in performance and "breakthroughs in AI-driven rendering" at a lower cost than the 40 series (in most cases). CES is the largest consumer technology show in North America. Held annually in Las Vegas, it brings in the world's top tech makers to show off devices and concepts that may or may not ever reach consumers. It's also a way for smaller companies to get in front of the press and fans to demo what they've been working on. AI was the major trend last year. When OpenAI launched ChatGPT in late 2022, it showed general consumers what generative AI was capable of. What followed were all the major tech companies releasing AI products of their own and seeing stock valuations jump in the process. Now, seemingly every company wants to integrate AI into its products in some fashion as a way to court investor and consumer interest, even as consumers shrug at AI-powered iPhones. Amidst the miasma of AI goop flowing through the showrooms in Las Vegas, the iterative remixes of existing AI tech can sometimes end up having a snake-oil-like quality. But people chose to stand in long lines to see Huang, who has achieved tech celebrity status in his own right. His quirky announcement videos attract millions of views and his down-to-earth demeanor, plus his trademark leather jackets, make him a likable hawker of cutting-edge gaming graphics. And it's paying off for Nvidia. In early January of 2023, Nvidia stock hovered around $15. With the AI revolution, companies have been scrambling to buy Nvidia chips over the past two years to power their servers.
After Huang's keynote on Monday, the stock hit record highs above $150 before cooling off a bit, but that still represents about a tenfold increase in just two years. It's also worth noting how Nvidia's tech has seemingly pushed aside other players in the GPU space. Nvidia so heavily dominates the GPU market that AMD and Intel have been relegated to competing in the midrange category. AMD did announce a series of new midrange GPUs, but changed the naming convention to better match Nvidia. For example, the AMD RX 9070 is taking clear shots at the 5070 cards, making it easier for consumers to compare the two. Intel just recently entered the dedicated GPU card market after failing to meet the moment in the CPU space against Qualcomm, AMD and Nvidia. But it's only trying to carve out a space in the budget GPU category. Thankfully, this past year has shown that AI hype can only go so far. AI wearables failed to impress and the market cooled on throwing billions at companies releasing AI-polka-dotted press releases. Next year's CES will likely have its fair share of AI bloat, most of which will likely be met with yawns -- maybe it should be renamed Nvidia Greenlight.
[15]
CES 2025: AI Advancing at 'Incredible Pace,' NVIDIA CEO Says
Jensen Huang unveils NVIDIA Cosmos, Blackwell RTX 50 Series GPUs, and AI tools for PCs. NVIDIA founder and CEO Jensen Huang kicked off CES 2025 with a 90-minute keynote that included new products to advance gaming, autonomous vehicles, robotics, and agentic AI. AI has been "advancing at an incredible pace," he said before an audience of more than 6,000 packed into the Michelob Ultra Arena in Las Vegas. "It started with perception AI -- understanding images, words, and sounds. Then generative AI -- creating text, images and sound," Huang said. Now, we're entering the era of "physical AI, AI that can perceive, reason, plan and act." NVIDIA GPUs and platforms are at the heart of this transformation, Huang explained, enabling breakthroughs across industries, including gaming, robotics and autonomous vehicles (AVs). Huang's keynote showcased how NVIDIA's latest innovations are enabling this new era of AI through a string of announcements. Huang started off his talk by reflecting on NVIDIA's three-decade journey. In 1999, NVIDIA invented the programmable GPU. Since then, modern AI has fundamentally changed how computing works, he said. "Every single layer of the technology stack has been transformed, an incredible transformation, in just 12 years."
Revolutionizing Graphics With GeForce RTX 50 Series
"GeForce enabled AI to reach the masses, and now AI is coming home to GeForce," Huang said. With that, he introduced the NVIDIA GeForce RTX 5090 GPU, the most powerful GeForce RTX GPU so far, with 92 billion transistors and delivering 3,352 trillion AI operations per second (TOPS). "Here it is -- our brand-new GeForce RTX 50 series, Blackwell architecture," Huang said, holding the blacked-out GPU aloft and noting how it's able to harness advanced AI to enable breakthrough graphics. "The GPU is just a beast." "Even the mechanical design is a miracle," Huang said, noting that the graphics card has two cooling fans. More variations in the GPU series are coming. The GeForce RTX 5090 and GeForce RTX 5080 desktop GPUs are scheduled to be available Jan. 30. The GeForce RTX 5070 Ti and the GeForce RTX 5070 desktops are slated to be available starting in February. Laptop GPUs are expected in March. DLSS 4 introduces Multi Frame Generation, working in unison with the complete suite of DLSS technologies to boost performance by up to 8x. NVIDIA also unveiled NVIDIA Reflex 2, which can reduce PC latency by up to 75%. The latest generation of DLSS can generate three additional frames for every frame we calculate, Huang explained. "As a result, we're able to render at incredibly high performance, because AI does a lot less computation." RTX Neural Shaders use small neural networks to improve textures, materials and lighting in real-time gameplay. RTX Neural Faces and RTX Hair advance real-time face and hair rendering, using generative AI to animate the most realistic digital characters ever. RTX Mega Geometry increases the number of ray-traced triangles by up to 100x, providing more detail.
Advancing Physical AI With Cosmos
In addition to advancements in graphics, Huang introduced the NVIDIA Cosmos world foundation model platform, describing it as a game-changer for robotics and industrial AI. The next frontier of AI is physical AI, Huang explained. He likened this moment to the transformative impact of large language models on generative AI. "The ChatGPT moment for general robotics is just around the corner," he explained.
Like large language models, world foundation models are fundamental to advancing robot and AV development, yet not all developers have the expertise and resources to train their own, Huang said. Cosmos integrates generative models, tokenizers, and a video processing pipeline to power physical AI systems like AVs and robots. Cosmos aims to bring the power of foresight and multiverse simulation to AI models, enabling them to simulate every possible future and select optimal actions. Cosmos models ingest text, image or video prompts and generate virtual world states as videos, Huang explained. "Cosmos generations prioritize the unique requirements of AV and robotics use cases like real-world environments, lighting and object permanence." Leading robotics and automotive companies, including 1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier, Galbot, Hillbot, IntBot, Neura Robotics, Skild AI, Virtual Incision, Waabi and XPENG, along with ridesharing giant Uber, are among the first to adopt Cosmos. In addition, Hyundai Motor Group is adopting NVIDIA AI and Omniverse to create safer, smarter vehicles, supercharge manufacturing and deploy cutting-edge robotics. Cosmos is openly licensed and available on GitHub.
Empowering Developers With AI Foundation Models
Beyond robotics and autonomous vehicles, NVIDIA is empowering developers and creators with AI foundation models. Huang introduced AI foundation models for RTX PCs that supercharge digital humans, content creation, productivity and development. "These AI models run in every single cloud because NVIDIA GPUs are now available in every single cloud," Huang said. "It's available in every single OEM, so you could literally take these models, integrate them into your software packages, create AI agents and deploy them wherever the customers want to run the software." These models -- offered as NVIDIA NIM microservices -- are accelerated by the new GeForce RTX 50 Series GPUs. The GPUs have what it takes to run these swiftly, adding support for FP4 computing, boosting AI inference by up to 2x and enabling generative AI models to run locally in a smaller memory footprint compared with previous-generation hardware. Huang explained the potential of new tools for creators: "We're creating a whole bunch of blueprints that our ecosystem could take advantage of. All of this is completely open source, so you could take it and modify the blueprints." Top PC manufacturers and system builders are launching NIM-ready RTX AI PCs with GeForce RTX 50 Series GPUs. "AI PCs are coming to a home near you," Huang said. While these tools bring AI capabilities to personal computing, NVIDIA is also advancing AI-driven solutions in the automotive industry, where safety and intelligence are paramount.
Innovations in Autonomous Vehicles
Huang announced the NVIDIA DRIVE Hyperion AV platform, built on the new NVIDIA AGX Thor system-on-a-chip (SoC), designed for generative AI models and delivering advanced functional safety and autonomous driving capabilities. "The autonomous vehicle revolution is here," Huang said. "Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car."
DRIVE Hyperion, the first end-to-end AV platform, integrates advanced SoCs, a full sensor suite, and an active safety and Level 2 driving stack for next-generation vehicles, with adoption by automotive safety pioneers such as Mercedes-Benz, JLR and Volvo Cars. Huang highlighted the critical role of synthetic data in advancing autonomous vehicles. Real-world data is limited, so synthetic data is essential for training the autonomous vehicle data factory, he explained. Powered by NVIDIA Omniverse AI models and Cosmos, this approach "generates synthetic driving scenarios that enhance training data by orders of magnitude." Using Omniverse and Cosmos, NVIDIA's AI data factory can scale "hundreds of drives into billions of effective miles," Huang said, dramatically increasing the datasets needed for safe and advanced autonomous driving. "We are going to have mountains of training data for autonomous vehicles," he added. Toyota, the world's largest automaker, will build its next-generation vehicles on the NVIDIA DRIVE AGX Orin, running the safety-certified NVIDIA DriveOS operating system, Huang said. "Just as computer graphics was revolutionized at such an incredible pace, you're going to see the pace of AV development increasing tremendously over the next several years," Huang said. These vehicles will offer functionally safe, advanced driving assistance capabilities.
Agentic AI and Digital Manufacturing
NVIDIA and its partners have launched AI Blueprints for agentic AI, including PDF-to-podcast for efficient research and video search and summarization for analyzing large quantities of video and images -- enabling developers to build, test and run AI agents anywhere. AI Blueprints empower developers to deploy custom agents for automating enterprise workflows. This new category of partner blueprints integrates NVIDIA AI Enterprise software, including NVIDIA NIM microservices and NVIDIA NeMo, with platforms from leading providers like CrewAI, Daily, LangChain, LlamaIndex and Weights & Biases. Additionally, Huang announced the new Llama Nemotron family of open models. Developers can use NVIDIA NIM microservices to build AI agents for tasks like customer support, fraud detection, and supply chain optimization (a minimal example of calling such a microservice follows this article). Available as NVIDIA NIM microservices, the models can supercharge AI agents on any accelerated system. NVIDIA NIM microservices also streamline video content management, boosting efficiency and audience engagement in the media industry. Moving beyond digital applications, NVIDIA's innovations are paving the way for AI to revolutionize the physical world with robotics. "All of the enabling technologies that I've been talking about are going to make it possible for us in the next several years to see very rapid breakthroughs, surprising breakthroughs, in general robotics." In manufacturing, the NVIDIA Isaac GR00T Blueprint for synthetic motion generation will help developers generate exponentially large synthetic motion data to train their humanoids using imitation learning. Huang emphasized the importance of training robots efficiently, using NVIDIA's Omniverse to generate millions of synthetic motions for humanoid training. The Mega blueprint enables large-scale simulation of robot fleets, adopted by leaders like Accenture and KION for warehouse automation. These AI tools set the stage for NVIDIA's latest innovation: a personal AI supercomputer called Project DIGITS.
NVIDIA Unveils Project Digits
Putting NVIDIA Grace Blackwell on every desk and at every AI developer's fingertips, Huang unveiled NVIDIA Project DIGITS. "I have one more thing that I want to show you," Huang said. "None of this would be possible if not for this incredible project that we started about a decade ago. Inside the company, it was called Project DIGITS -- deep learning GPU intelligence training system." Huang highlighted the legacy of NVIDIA's AI supercomputing journey, telling the story of how in 2016 he delivered the first NVIDIA DGX system to OpenAI. "And obviously, it revolutionized artificial intelligence computing." The new Project DIGITS takes this mission further. "Every software engineer, every engineer, every creative artist -- everybody who uses computers today as a tool -- will need an AI supercomputer," Huang said. Huang revealed that Project DIGITS, powered by the GB10 Grace Blackwell Superchip, represents NVIDIA's smallest yet most powerful AI supercomputer. "This is NVIDIA's latest AI supercomputer," Huang said, showcasing the device. "It runs the entire NVIDIA AI stack -- all of NVIDIA software runs on this. DGX Cloud runs on this." The compact yet powerful Project DIGITS is expected to be available in May.
A Year of Breakthroughs
"It's been an incredible year," Huang said as he wrapped up the keynote. Huang highlighted NVIDIA's major achievements: Blackwell systems, physical AI foundation models, and breakthroughs in agentic AI and robotics. "I want to thank all of you for your partnership," Huang said.
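The NIM microservices that recur throughout this recap are containerized model servers that typically expose an OpenAI-compatible HTTP endpoint, which is what makes the "integrate them into your software packages" pitch practical. The sketch below is an illustration only, not NVIDIA sample code: the base URL, port and model identifier are placeholder assumptions rather than details from the keynote.

```python
# Minimal sketch: querying a locally deployed NIM-style microservice through an
# OpenAI-compatible endpoint. The base_url, port and model id are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used-locally")

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Draft a two-sentence release note."}],
    max_tokens=120,
)
print(response.choices[0].message.content)
```

Because the interface is OpenAI-compatible, the same client code can in principle point at a workstation GPU, an RTX AI PC or a cloud instance by changing only the base URL, which is the portability Huang's "deploy them wherever the customers want to run the software" line is getting at.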
[16]
Roundup: Nvidia's impressive list of new and upgraded products at CES - SiliconANGLE
Although the holiday gift-giving season may be over, Nvidia Corp. co-founder and Chief Executive Jensen Huang was in a very generous mood during his Monday keynote address at the CES consumer electronics show in Las Vegas. The leader in accelerated computing, which invented the graphics processing unit more than 25 years ago, still has an insatiable appetite for innovation. Huang, dressed in a more Vegas version of his customary black leather jacket, kicked off his keynote with a history lesson on how Nvidia went from a company that made video games better to the AI powerhouse it is today. He then shifted into product mode and showcased his company's continuing leadership in the AI revolution by announcing several new and enhanced products for AI-based robotics, autonomous vehicles, agentic AI and more. Here are the five I felt were most meaningful: Nvidia's Cosmos platform consists of what the company calls "state-of-the-art generative world foundation models, advanced tokenizers, guardrails and an accelerated video processing pipeline" for advancing the development of physical AI capabilities, including autonomous vehicles and robots. Using Nvidia's world foundation models, or WFMs, Cosmos makes it easy for organizations to produce vast amounts of "photoreal, physics-based synthetic data" for training and evaluating their existing models. Developers can also fine-tune Cosmos WFMs to build custom models. Physical AI can be very expensive to implement, requiring robots, cars and other systems to be built and trained in real-life scenarios. Cars crash and robots fall, adding cost and time to the process. With Cosmos, everything can be simulated virtually, and when the training is complete, the information is uploaded into the physical device. Nvidia is providing Cosmos models under an open model license to help the robotics and AV community work faster and more effectively. Many of the world's leading physical AI companies use Cosmos to accelerate their work. Huang also announced new generative AI models and blueprints that expand and further integrate Nvidia Omniverse into physical AI applications. The company said leading software development and professional services firms are leveraging Omniverse to drive the growth of new products and services designed to "accelerate the next era of industrial AI." Companies such as Accenture, Microsoft and Siemens are integrating Omniverse into their next-generation software products and professional services. Siemens announced at CES the availability of Teamcenter Digital Reality Viewer, its first Xcelerator application powered by Nvidia's Omniverse libraries. Nvidia debuted four new blueprints for developers to use in building Universal Scene Description (OpenUSD)-based Omniverse digital twins for physical AI. Nvidia also announced the GeForce RTX 50 series of desktop and laptop graphics processing units. The RTX 50 series is powered by Nvidia's Blackwell architecture and the latest Tensor Cores and RT Cores. Huang said it delivers breakthroughs in AI-driven rendering. "Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives," he said. "Fusing AI-driven neural rendering and ray tracing, Blackwell is the most significant computer graphics innovation since we introduced programmable shading 25 years ago." The pricing of the new systems gave rise to a loud cheer from the crowd.
The previous generation GPU, the RTX 4090, retailed for $1,599. The low end of the 50 series, the RTX 5070, which offers comparable performance (1,000 trillion AI operations per second) to the RTX 4090, is available for the low price of $549. The RTX 5070 Ti (1,400 AI TOPS) is $749, the RTX 5080 (1,800 AI TOPS) sells for $999, and the RTX 5090, which offers a whopping 3,400 AI TOPS, is $1,999. The company also announced a family of laptops where the massive RTX processor has been shrunk down and put into a small form factor. Huang explained that Nvidia used AI to accomplish this, as it generates most of the pixels using Tensor Cores. This means only the required pixels are ray-traced, and AI is used to generate all the other pixels, creating a significantly more energy-efficient system (a back-of-the-envelope version of this arithmetic follows this article). "The future of computer graphics is neural rendering, which fuses AI with traditional graphics," Huang explained. Laptop pricing ranges from $1,299 for the RTX 5070 model to $2,899 for the RTX 5090. Huang introduced a small desktop computer system called Project DIGITS powered by Nvidia's new GB10 Grace Blackwell Superchip. The system is small but powerful. It will provide a petaflop of AI performance with 128 gigabytes of coherent, unified memory. The company said it will enable developers to work with AI models of up to 200 billion parameters at their desks. The system is designed for AI developers, researchers, data scientists and students working with AI workloads. Nvidia envisions key workloads for the new computer, including AI model experimentation and prototyping. Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia, told analysts in a briefing before Huang's keynote that the massive shift in computing now occurring represents software 2.0, which is machine learning AI that is "basically software writing software." To meet this need, Nvidia is introducing new products to enable agentic AI, including the Llama Nemotron family of open large language models. The models can help developers create and deploy AI agents across various applications -- including customer support, fraud detection, and product supply chain and inventory management optimization. Huang explained that the Llama models could be "better fine-tuned for enterprise use," so Nvidia used its expertise to create the Llama Nemotron suite of open models. There are currently three models: Nano is small and low latency with fast response times for PCs and edge devices, Super is balanced for accuracy and compute efficiency, and Ultra is the highest-accuracy model for data center-scale applications. If it's not clear by now, the AI era has arrived. Many industry watchers believe AI is currently overhyped, but I think the opposite. AI will eventually be embedded into every application, device and system we use. The internet has changed how we work, live and learn, and AI will have the same impact. Huang did an excellent job of explaining the relevance of AI to all of us today and what an AI-infused world will look like. It was a great way to kick off CES 2025.
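The claim that "only the required pixels are ray-traced" while AI fills in the rest can be made concrete with some rough arithmetic. The frame-generation ratio comes from the keynote recap earlier in this roundup (three AI-generated frames per rendered frame); the 4x upscaling factor is an assumption chosen for illustration, not a figure from this article.

```python
# Rough arithmetic, under stated assumptions, behind "AI generates most of the
# pixels": combine super-resolution upscaling with multi-frame generation.
upscale_factor = 4                 # assumed: 1 in 4 display pixels rendered natively
generated_frames_per_rendered = 3  # per the keynote recap: 3 AI frames per rendered frame

rendered_share = (1 / upscale_factor) * (1 / (1 + generated_frames_per_rendered))
print(f"Share of displayed pixels computed traditionally: {rendered_share:.2%}")
# -> 6.25%: under these assumptions roughly 1 in 16 displayed pixels is rendered
# and ray-traced the classical way, with Tensor Cores inferring the rest.
```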
[17]
I am thrilled by Nvidia's cute petaflop mini PC wonder, and it's time for Jensen's law: it takes 100 months to get equal AI performance for 1/25th of the cost
Nvidia's desktop supercomputer is probably the greatest revolution in tech hardware since the IBM PC
Nobody really expected Nvidia to release something like the GB10. After all, why would a tech company that transformed itself into the most valuable firm ever by selling parts that cost hundreds of thousands of dollars suddenly decide to sell an entire system for a fraction of the price? I believe that Nvidia wants to revolutionize computing the way IBM did it almost 45 years ago with the original IBM PC. Project DIGITS, as a reminder, is a fully formed, off-the-shelf supercomputer built into something the size of a mini PC. It is essentially a smaller version of the DGX-1, the first of its kind launched almost a decade ago, back in April 2016. Then, it sold for $129,000 with a 16-core Intel Xeon CPU and eight P100 GPGPU cards; Digits costs $3,000. Nvidia confirmed it has an AI performance of 1,000 teraflops at FP4 precision (dense/sparse?). Although there's no direct comparison, one can estimate that the diminutive supercomputer has roughly half the processing power of a fully loaded 8-card Pascal-based DGX-1. At the heart of Digits is the GB10 SoC, which has 20 Arm cores (10 Arm Cortex-X925 and 10 Cortex-A725). Other than the confirmed presence of a Blackwell GPU (a lite version of the B100), one can only infer the power consumption (100W) and the bandwidth (825GB/s according to The Register). You should be able to connect two of these devices (but not more) via Nvidia's proprietary ConnectX technology to tackle larger LLMs such as Meta's Llama 3.1 405B (the memory arithmetic behind that pairing is sketched after this article). Shoving these tiny mini PCs in a 42U rack seems to be a near impossibility for now as it would encroach on Nvidia's far more lucrative DGX GB200 systems. Why did Nvidia embark on Project DIGITS? I think it is all about reinforcing its moat. Making your products so sticky that it becomes near impossible to move to the competition is something that worked very well for others: Microsoft and Windows, Google and Gmail, Apple and the iPhone. The same happened with Nvidia and CUDA - being in the driving seat allowed Nvidia to do things such as moving the goalposts and wrongfooting the competition. The move to FP4 for inference allowed Nvidia to deliver impressive benchmark claims such as "Blackwell delivers 2.5x its predecessor's performance in FP8 for training, per chip, and 5x with FP4 for inference". Of course, AMD doesn't offer FP4 computation in the MI300X/325X series and we will have to wait till later this year for it to roll out in the Instinct MI350X/355X. Nvidia is therefore laying the groundwork to fend off future incursions, for lack of a better word or analogy, from existing and future competitors, including its own customers (think Microsoft and Google). Nvidia CEO Jensen Huang's ambition is clear; he wants to expand the company's domination beyond the realm of the hyperscalers. "AI will be mainstream in every application for every industry. With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers. Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI," Huang recently commented. Short of renaming Nvidia as Nvid-ai, this is as close as it gets to Huang acknowledging his ambitions to make his company's name synonymous with AI, just like Tarmac and Hoover before them (albeit in more niche verticals).
I was also, like many, perplexed by the Mediatek link, and the rationale for this tie-up can be found in the Mediatek press release. The Taiwanese company "brings its design expertise in Arm-based SoC performance and power efficiency to [a] groundbreaking device for AI researchers and developers," it noted. The partnership, I believe, benefits Mediatek more than Nvidia, and in the short run I can see Nvidia quietly going solo. Reuters reported that Huang dismissed the idea of Nvidia going after AMD and Intel, saying, "Now they [Mediatek] could provide that to us, and they could keep that for themselves and serve the market. And so it was a great win-win". This doesn't mean Nvidia will not deliver more mainstream products, though, just that they would be aimed at businesses and professionals, not consumers, where cut-throat competition makes things more challenging (and margins wafer thin). The Reuters article quotes Huang as saying, "We're going to make that a mainstream product, we'll support it with all the things that we do to support professional and high-quality software, and the PC (manufacturers) will make it available to end users." One theory I came across while researching this feature is that more data scientists are embracing Apple's Mac platform because it offers a balanced approach: good enough performance - thanks to its unified memory architecture - at a 'reasonable' price. The Mac Studio with 128GB unified memory and 4TB SSD currently retails for $5,799. So where does Nvidia go from there? An obvious move would be to integrate the memory on the SoC, similar to what Apple has done with its M series SoC (and AMD with its HBM-fuelled Epyc). This would not only save on costs but would improve performance, something that its bigger sibling, the GB200, already does. Then it will depend on whether Nvidia wants to offer more at the same price or the same performance at a lower price point (or a bit of both). Nvidia could go Intel's way and use the GB10 as a prototype to encourage other key partners (PNY, Gigabyte, Asus) to launch similar projects (Intel did that with the Next Unit of Computing, or NUC). I am also particularly interested to know what will happen to the Jetson Orin family; the NX 16GB version was upgraded just a few weeks ago to offer 157 TOPS in INT8 performance. This platform is destined to fulfill more DIY/edge use cases rather than pure training/inference tasks, but I can't help but think about "what if" scenarios. Nvidia is clearly disrupting itself before others attempt to do so; the question is how far it will go.
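The two-unit Llama 3.1 405B pairing discussed above follows from simple memory arithmetic. The sketch below counts weight storage only, ignoring KV cache and activation overhead (an assumption for illustration, not a detail from the article), and uses the 128GB unified memory figure quoted elsewhere in this roundup.

```python
# Back-of-the-envelope weight-memory check for a Project DIGITS class box.
# Assumption: weights only; runtime overhead (KV cache, activations) is ignored.
def weight_footprint_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate gigabytes needed just to hold the model weights."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

UNIT_MEMORY_GB = 128  # unified memory per box, per this roundup

for params in (200, 405):
    gb = weight_footprint_gb(params, bits_per_weight=4)  # FP4 quantization
    boxes = 1 if gb <= UNIT_MEMORY_GB else 2
    print(f"{params}B params at FP4 ~ {gb:.1f} GB of weights -> about {boxes} unit(s)")
# 200B -> 100.0 GB fits in one 128 GB box with headroom; 405B -> 202.5 GB is why
# two ConnectX-linked units are needed for Llama 3.1 405B.
```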
[18]
Nvidia founder unveils new technology for gamers and creators at CES 2025
LAS VEGAS -- In a packed Las Vegas arena, Nvidia founder Jensen Huang stood on stage and marveled over the crisp real-time computer graphics displayed on the screen behind him. He watched as a dark-haired woman walked through ornate gilded double doors and took in the rays of light that poured in through stained glass windows. "The amount of geometry that you saw was absolutely insane," Huang told an audience of thousands at CES 2025 Monday night. "It would have been impossible without artificial intelligence." The chipmaker and AI darling unveiled its GeForce RTX 50 Series desktop and laptop GPUs -- its most advanced consumer graphics processing units for gamers, creators and developers. Ahead of Huang's speech, Nvidia stock climbed 3.4% to top its record set in November. Nvidia and other AI stocks keep climbing even as criticism rises that their stock prices have already shot too high, too fast. Despite worries about a potential bubble, the industry continues to talk up its potential. Huang said the GPUs, which use the company's next-generation artificial intelligence chip Blackwell, can deliver breakthroughs in AI-driven rendering. "Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives," Huang said, adding that Blackwell "is the most significant computer graphics innovation since we introduced programmable shading 25 years ago." Blackwell technology is now in full production, he said. Building on the tech Nvidia released 25 years ago, the company announced that it would also introduce "RTX Neural Shaders," which use AI to help render game characters in deep detail - a task that's notoriously tricky because people can easily spot a small error on digital humans. Huang said Nvidia is also introducing a new suite of technologies that enable "autonomous characters" to perceive, plan and act like human players. Those characters can help players plan strategies or adapt tactics to challenge players and create more dynamic battles. In addition to Nvidia, tech giants such as AMD, Google and Samsung are at CES 2025 to unveil artificial intelligence tools aimed at helping content creators and consumers alike in their quest for entertainment.
[20]
Nvidia CEO Unveils Advanced AI Chips, Foundation Models and Mini Supercomputer | PYMNTS.com
Nvidia CEO Jensen Huang on Monday (Jan. 6) unveiled next-generation chips, new large language models, a mini artificial intelligence (AI) supercomputer, and a partnership with Toyota as the world's second-most valuable company continues to aggressively expand its business. "It's been an extraordinary journey, extraordinary year," Huang said to a packed crowd during a keynote address Monday at the CES trade show, where he wore a shiny version of his trademark leather jacket to fit in with the Las Vegas venue. This year's CES event continues this week. Shares of Nvidia closed at a record high of $149.43 hours ahead of his speech. Chip stocks got a lift after Foxconn, an Apple supplier that assembles AI servers for tech company clients, reported record fourth-quarter revenue, signaling continued strength in AI demand. Huang confirmed that AI demand remains robust, saying: "Blackwell is in full production." Blackwell is Nvidia's latest hardware architecture, which defines the physical and structural layout of chip components. A few months ago, Blackwell chip shipments were delayed due to technical issues. Those issues were not on display as Huang unveiled the chipmaker's most advanced gaming chips: the GeForce RTX 50 Series for gamers, creators and developers. It uses the Blackwell architecture and fuses AI-driven neural rendering and ray tracing, both techniques to enhance computer graphics. The RTX 50 Series can run generative AI models up to twice as fast and uses less memory than the prior generation of chips. Nvidia's Blackwell chips can process trillion-parameter large language models at up to 25 times lower cost and energy consumption than its predecessor, the company has said. The chip could drive a new demand cycle as it improves upon the current Hopper and Ampere architectures to power a new generation of GPUs for intensive workloads in AI, machine learning and high-performance computing. Blackwell competes against AMD's MI300 chip series, Intel's AI accelerators, Google's TPUs and others. Another new product from Nvidia announced Monday is Project DIGITS, a mini AI supercomputer that contains the new Nvidia GB10 Grace Blackwell Superchip. At a petaflop of AI computing speed -- performing a quadrillion calculations per second -- the system can be used for prototyping, fine-tuning and running large AI models. Users can develop and run inference -- applying new data to a trained AI model -- on their desktop before deployment. The superchip consists of an Nvidia Blackwell GPU that connects to an Nvidia Grace CPU via NVLink-C2C (chip-to-chip), an interconnection technology that enables fast communication between chips in multi-chip systems. Huang said that with the advent of agentic AI -- where AI agents or 'bots' work with other bots in the background to perform automated tasks -- more computing power will be needed as the AI workload increases. "In the future, the AI is going to be talking to itself. It's going to be thinking. It's going to be internally reflecting, processing," all of which will create more AI workloads, he said. To support AI agents, Nvidia unveiled a new family of language models: the Nvidia Llama Nemotron language foundation models. They are based on Meta's open-source Llama models but fine-tuned for enterprise use in an agentic AI future. Nemotron can help developers create and deploy AI agents across various applications, from customer service to fraud detection. "AI agents are the new, digital workforce working for and with us," Huang said.
"AI agents are a system of models that reason about a mission, break it down into tasks and retrieve data or use tools to generate a quality response." Nemotron comes in three sizes and capabilities: Nano (cheapest), Super and Ultra (highest accuracy and performance). Nvidia did not disclose their model parameters, or model weights that determine the output. With its focus on robotics, Nvidia also unveiled Cosmos, a platform that comprises generative world foundation models -- AI systems that can create a virtual environment to simulate a real or virtual world. Developers can use Cosmos to generate troves of synthetic data to use for training their 'physical AI' systems, such as robots and autonomous vehicles. Developers can use a text, image or video prompt to create a virtual world. Cosmos is available under an open license, which lets people use, modify or share it. Huang said Cosmos was trained on 20 million hours of video of tasks to teach the AI about the physical world. As such, it can create captions for videos, which can be used to train multimodal AI models. Huang also announced a new partnership with Toyota. The world's largest automaker will be building its vehicles based on Nvidia's Drive AGX Orin, a hardware and software platform that enables advanced driver assistance systems and autonomous driving.
[21]
Let's Be Honest, AI at CES 2025 Was All About Nvidia
There were plenty of AI announcements at CES 2025 and most didn't really matter. Seriously, a robot vacuum that can pick up dirty socks or TVs that can generate recipes? Meanwhile, Nvidia, the chip maker powering the AI revolution, is openly releasing models to allow robots to function in the real world. Everything else seems trite. CEO Jensen Huang took the stage on Monday and unveiled a foundational AI model named Cosmos that helps robots and autonomous vehicles understand the physical world, calling it "the ChatGPT moment for robotics." He also announced a new chip named Thor for cars and trucks that uses AI to process visual information coming in from cameras and LiDAR sensors to lead the way in level 4 autonomous driving. And Huang revealed the 50-series lineup of gaming and laptop GPUs that promises to deliver massive leaps in performance and "breakthroughs in AI-driven rendering" at a lower cost than the 40 series (in most cases). Sorry, Withings and its AI-powered smart mirror -- try again next year. CES is the largest consumer technology show in North America. Taking place in Las Vegas, it brings in the world's top tech makers to show off devices and concepts that may or may not ever reach consumers. It's also a way for smaller companies to get in front of the press and fans to demo what they've been working on. AI was the major trend last year. When OpenAI launched ChatGPT in late 2022, it showed general consumers what generative AI was capable of. What followed were all the major tech companies releasing AI products of their own and seeing stock valuations jump in the process. Now, seemingly every company wants to integrate AI into its products in some fashion as a way to court investor and consumer interest, even as consumers shrug at AI-powered iPhones. Amidst the miasma of AI goop flowing through the showrooms in Las Vegas, the iterative remixes of existing AI tech can end up having a snake oil-like quality. But people choose to stand in long lines to see Huang, who has become a bit of a celebrity in his own right. His quirky announcement videos attract millions of views, and his down-to-earth demeanor, plus his penchant for leather jackets, makes him a likable hawker of cutting-edge gaming graphics. And it's paying off for Nvidia. In early January of 2023, Nvidia stock hovered around $15. With the AI revolution, companies have been scrambling to buy Nvidia chips over the past two years to power their servers. After Huang's keynote on Monday, the stock hit record highs above $150 before cooling off a bit, still representing about a tenfold increase in just two years. It's also worth noting how Nvidia's tech has seemingly pushed aside other players in the GPU space. Nvidia is so heavily dominating the GPU market that AMD and Intel have been relegated to competing in the mid-range category. AMD did announce a series of new mid-range GPUs, but changed the naming convention to better match Nvidia. For example, the AMD RX 9070 is taking clear shots at the 5070 cards, making it easier for consumers to compare the two. Intel just recently entered the dedicated GPU card market after failing to meet the moment in the CPU space against Qualcomm, AMD and Nvidia. But it's only trying to carve out a space in the budget GPU category.
Thankfully, this past year has shown that AI-hype can only go so far. AI wearables failed to impress and the market cooled on throwing billions at companies releasing AI-polkadotted press releases. Next year's CES will likely have its fair share of AI bloat, most of which will likely be met with yawns -- maybe it should be renamed to Nvidia Greenlight.
[23]
How Nvidia is creating a $1.4T data center market in a decade of AI - SiliconANGLE
We are witnessing the rise of a completely new computing era. Within the next decade, a trillion-dollar-plus data center business is poised for transformation, powered by what we refer to as extreme parallel computing, or EPC -- or as some prefer to call it, accelerated computing. Though artificial intelligence is the primary accelerant, the effects ripple across the entire technology stack. Nvidia Corp. sits in the vanguard of this shift, forging an end-to-end platform that integrates hardware, software, systems engineering and a massive ecosystem. Our view is that Nvidia has a 10- to 20-year runway to drive this transformation, but the market forces at play are much larger than a single player. This new paradigm is about reimagining compute from the ground up: from the chip level to data center equipment, to distributed computing at scale, data and applications stacks and emerging robotics at the edge. In this Breaking Analysis, we explore how extreme parallel computing is reshaping the tech landscape, the performance of the major semiconductor players, the competition Nvidia faces, the depth of its moat, and how its software stack cements its leadership. We will also address a recent development from CES -- the arrival of so-called "AI PCs" -- with data from Enterprise Technology Research. We'll then look at how the data center market could reach $1.7 trillion by 2035. Finally, we will discuss both the upside potential and the risks that threaten this positive scenario. Our research indicates that every layer of the technology stack -- from compute to storage to networking to the software layers -- will be re-architected for AI-driven workloads and extreme parallelism. We believe the transition from general-purpose x86 central processing units toward distributed clusters of graphics processing units and specialized accelerators is happening even faster than many anticipated. What follows is our brief assessment of several layers of the data center tech stack and the implications of EPC. For more than three decades, x86 architectures dominated computing. Today, general-purpose processing is giving way to specialized accelerators. GPUs are the heart of this change. AI workloads such as large language models, natural language processing, advanced analytics and real-time inference demand massive concurrency. While storage is sometimes overlooked in AI conversations, data is the fuel that drives neural networks, and we believe AI demands advanced, high-performance storage. With mobile and cloud last decade we saw a shift in network traffic from a north-south trajectory (user-to-data center) toward an east-west bias (server-to-server). AI-driven workloads cause massive east-west and north-south traffic within the data center and across networks. In the world of HPC, InfiniBand emerged as the go-to for ultra-low-latency interconnects. Now, we see that trend permeate hyperscale data centers, with high-performance Ethernet as a dominant standard which will ultimately, in our view, prove to be the prevailing open network of choice.
OS and system-level software
Accelerated computing imposes huge demands on operating systems, middleware, libraries, compilers and application frameworks. These must be tuned to exploit GPU resources. As developers create more advanced applications -- some bridging real-time analytics and historical data -- system-level software must manage concurrency at unprecedented levels.
The OS, middleware, tools, libraries and compilers are rapidly evolving to support ultra-parallel workloads with the ability to exploit GPUs (that is, GPU-aware OSes).
Data layer
Data is the fuel for AI, and the data stack is rapidly becoming infused with intelligence. We see the data layer shifting from an historical system of analytics to a real-time engine that supports the creation of real-time digital representations of an organization, comprising people, places and things as well as processes. To support this vision, data harmonization via knowledge graphs, unified metadata repositories, agent control frameworks, unified governance and connectors to operational and analytic systems will emerge.
The application layer
Intelligent applications are emerging that unify and harmonize data. These apps increasingly have real-time access to business logic as well as process knowledge. Single-agent systems are evolving to multi-agent architectures with the ability to learn from the reasoning traces of humans. Applications increasingly can understand human language, are injecting intelligence (in other words, AI everywhere) and are supporting automation of workflows and new ways of creating business outcomes. Applications are also becoming extensions to the physical world, with opportunities in virtually all industries to create digital twins that represent a business in real time. Key takeaway: Extreme parallel computing represents a wholesale rethinking of the technology stack -- compute, storage, networking and especially the operating system layer. It places GPUs and other accelerators at the center of the architectural design. The above graphic shows the five-year stock performance of major semiconductor players, with the "AI zone" shaded starting in late 2022 -- roughly coinciding with the initial buzz around ChatGPT. Until that timeframe, many were skeptical that large-scale GPU-accelerated AI would become such a powerful business driver. In our view, the market has recognized that semiconductors are the foundation of future AI capabilities, awarding premium multiples to companies that can capture accelerated compute demand. This year, the "haves" (led by Nvidia, Broadcom and AMD) are outperforming, while the "have-nots" (in particular, Intel) are lagging. Nvidia's 65% operating margins have enticed a wave of competitors into the AI chip market and drawn intense investor interest. Both incumbents and new entrants have responded aggressively. Yet the market potential is so large and Nvidia's lead is so substantial that, in our view, near-term competition will not hurt Nvidia. Nonetheless, we see multiple angles regarding Nvidia's challengers, each with its own market approach. We align Broadcom and Google as two leaders because: 1) Broadcom powers custom chips such as Google's tensor processing units, or TPUs; and 2) we believe that TPU v4 is extremely competitive in AI. Broadcom's IP around SerDes, optics and networking is best-in-class and, together with Google, represents in our view the most viable technical alternative to Nvidia. Importantly, Broadcom also has had a long-term relationship with Meta and powers its AI chips. Both Google and Meta have proved that AI return on investment in consumer advertising pays off. While many enterprises struggle with AI ROI, these two firms are demonstrating impressive return on invested capital from AI. Both Google and Meta are leaning into Ethernet as a networking standard. Broadcom is a strong supporter of Ethernet and a leading voice in the Ultra Ethernet Consortium.
Moreover, Broadcom is the only company other than Nvidia with proven expertise in networking within and across XPUs and across XPU clusters, making the company an extremely formidable competitor in AI silicon. AMD's data center strategy hinges on delivering competitive AI accelerators -- building on the company's track record in x86. Though it has a serious GPU presence for gaming and HPC, the AI software ecosystem (centered on CUDA) remains a key obstacle. AMD has made aggressive moves in AI. It is aligning with Intel to try to keep x86 viability alive. It has acquired ZT Systems to better understand end-to-end AI systems requirements and will be a viable alternative for merchant silicon, especially for inference workloads. Ultimately we believe AMD will capture a relatively small share (single digits) of a massive market. It will manage x86 market declines by gaining share against Intel and make inroads into cost-sensitive AI chip markets against Nvidia. Once the undisputed leader in processors, Intel's fortunes have turned amid the shift toward accelerated compute. We continue to see Intel hampered by the massive capital requirements of retaining its own foundry. Amazon's custom silicon approach has succeeded with Graviton in CPU instances. Its acquisition of Annapurna Labs is one of the best investments in the history of enterprise tech, and certainly one that is often overlooked. Today, AWS works with Marvell and is applying a Graviton-like strategy to GPUs with Trainium (for training) and Inferentia (for inference). Dylan Patel's take on Amazon's GPU sums it up in our view. Here's what he said on a recent episode of the BG2 pod: "Amazon, their whole thing at re:Invent, if you really talk to them when they announce Trainium 2 and our whole post about it and our analysis of it is supply chain-wise... you squint your eyes, this looks like an Amazon Basics TPU, right? It's decent, right? But it's really cheap, A; and B, it gives you the most HBM capacity per dollar and most HBM memory bandwidth per dollar of any chip on the market. And therefore it actually makes sense for certain applications to use. And so this is like a real shift. Like, hey, we maybe can't design as well as Nvidia, but we can put more memory on the package." Our view is that AWS' offering will be cost-optimized -- and offer an alternative GPU approach within the AWS ecosystem for both training and inference. Though developers ultimately may prefer the familiarity and performance of Nvidia's platform, AWS will offer as many viable choices to customers as it can and will get a fair share of its captive market. Probably not the penetration it sees with Graviton relative to merchant x86 silicon, but a decent amount of adoption to justify the investment. We don't have a current forecast for Trainium at this time, but it's something we're watching to get better data. Microsoft has historically lagged AWS and Google in custom silicon, though it does have ongoing projects, such as Maia. Microsoft can offset any silicon gap with its software dominance and willingness to pay Nvidia's margins for high-end GPUs. Qualcomm is a key supplier for Microsoft client devices. Qualcomm, as indicated, competes in mobile and edge, but as robotics and distributed AI applications expand, we see potential for more direct clashes with Nvidia. Firms such as Cerebras Systems Inc., SambaNova Systems Inc., Tenstorrent Inc. and Graphcore Ltd. have introduced specialized AI architectures.
China is also developing homegrown GPU or GPU-like accelerators. However, the unifying challenge remains software compatibility, developer momentum and the steep climb to dislodge a de facto standard. Key takeaway: Though competition is strong, none of these players alone threatens Nvidia's long-term dominance -- unless Nvidia makes significant missteps. The market's size is vast enough that multiple winners can thrive. We see Nvidia's competitive advantage as a multifaceted moat spanning hardware and software. It has taken nearly two decades of systematic innovation to produce an integrated ecosystem that is both broad and deep. Nvidia's GPUs employ advanced process nodes, including HBM memory integration and specialized tensor cores that deliver huge leaps in AI performance. Notably, Nvidia can push a new GPU iteration every 12 to 18 months. Meanwhile, it uses "whole cow" methods -- ensuring that every salvageable die has a place in its portfolio (data center, PC GPUs or automotive). This keeps yields high and margins healthy. The acquisition of Mellanox Technologies Ltd. put Nvidia in control of InfiniBand, enabling it to sell comprehensive end-to-end systems for AI clusters and get to market quickly. The integration of ConnectX and BlueField DPUs extends Nvidia's leadership in ultra-fast networking, a critical component for multi-GPU scaling. As the industry moves toward the Ultra Ethernet standard, many see this as a threat to Nvidia's moat. We do not. Though networking is a critical component of Nvidia's time to market advantage, we see it as a supporting member of its portfolio. In our view, the company can and will successfully optimize its stack for Ethernet as the market demands; and it will maintain its core advantage, which comes from tight integration across its stack. Nvidia's software ecosystem has grown far beyond CUDA to include frameworks for nearly every stage of AI application development. The net result is that developers have more reason to stay within Nvidia's ecosystem rather than seeking alternatives. Jensen Huang, Nvidia's CEO, has frequently underscored the company's emphasis on building a partner network. Virtually every major tech supplier and cloud provider offers Nvidia-based instances or solutions. That broad footprint generates significant network effects, reinforcing the moat. Key takeaway: Nvidia's advantage does not hinge on chips alone. Its integration of hardware and software -- underpinned by a vast ecosystem -- forms a fortress-like moat that is difficult to replicate. CUDA rightly dominates the software discussion, but Nvidia's stack is broad. Below we highlight six important layers: CUDA, NVMI/NVSM (here denoted as "NIMS"), Nemo, Omniverse, Cosmos and Nvidia's developer libraries/toolkits. Compute Unified Device Architecture or CUDA is Nvidia's foundational parallel computing platform. It abstracts away the complexities of GPU hardware and allows developers to write applications in languages like C/C++, Fortran, Python and others. CUDA orchestrates GPU cores and optimizes workload scheduling for accelerated AI, HPC, graphics and more. NIMS focuses on infrastructure-level management: monitoring, diagnostics, workload scheduling and overall hardware health in large-scale GPU clusters. Not strictly a "developer tool," it is nonetheless critical for any enterprise that needs to run advanced AI workloads on thousands of GPUs. NeMo is an end-to-end framework for developing and fine-tuning large language models and natural language applications. 
It provides pre-built modules, pre-trained models, and the tooling to export those models into other Nvidia products, helping speed time to insight for businesses that want to leverage NLP and large language models. Omniverse is a platform for 3D design collaboration, simulation and real-time visualization. While originally showcased for design engineering and media, Omniverse now extends into robotics, digital twins and advanced physics-based simulations. It leverages CUDA for graphical rendering, combining real-time graphics with AI-driven simulation capabilities. Cosmos is Nvidia's world foundation model platform for physical AI. By integrating with the firm's Omniverse simulation tools and accelerated computing stack, Cosmos helps developers generate synthetic data and train models for robots and autonomous vehicles at scale in a more seamless manner. Beyond the core frameworks, Nvidia has developed hundreds of specialized libraries for neural network operations, linear algebra, device drivers, HPC applications, image processing and more. These libraries are meticulously tuned for GPU acceleration -- further locking in the developer community that invests time to master them. Key takeaway: The software stack is arguably the most important factor in Nvidia's sustained leadership. CUDA is only part of the story. The depth and maturity of Nvidia's broader AI software suite forms a formidable barrier to entry for new challengers. Although this Breaking Analysis focuses on data center transformation, we would be remiss not to discuss AI PCs briefly. At CES this year, multiple vendors announced laptops and desktops branded as "AI PCs," often featuring NPUs (neural processing units) or specialized GPUs for on-device inference. Survey data shown above is from ETR in a survey of approximately 1,835 information technology decision makers. The vertical axis is Net Score, or spending momentum, and the horizontal axis is Overlap, or penetration within those 1,835 accounts. The table insert shows how the dots are plotted (Net Score and N). It shows Dell laptops at the top of the share curve with an N of 543, with strong spending momentum across Apple, HP and Lenovo. The plot reveals healthy spending momentum for leading PC suppliers. Currently, the NPU often sits idle in many AI PCs because software stacks are still not fully optimized. Over time, we expect more specialized AI applications on client devices -- potentially enabling real-time language translation, image/video processing, advanced security and local LLM inference at smaller scales. We believe Nvidia, with its track record in GPUs, can offer AI PC technology that is more performant than typical NPUs in mobile devices or notebooks. However, the power consumption, thermal and cost constraints remain significant challenges. We do see Nvidia using salvaged "whole cow" dies and building them into laptop GPUs with reduced power envelopes. Although this section deviates from the data center focus, AI PCs could drive developer adoption. On-device AI makes sense for productivity, specialized workloads and specific vertical use cases. This, in turn, may reinforce the broader ecosystem transition to parallel computing architectures. We have modeled the entire data center market -- servers, storage, networking, power, cooling and related infrastructure -- from 2019 through 2035. Our research points to a rapid transition away from traditional general-purpose computing toward extreme parallel computing.
We categorize "extreme parallel computing" as the specialized hardware and software for AI training, inference, HPC clusters and advanced analytics. Currently, we estimate that Nvidia accounts for roughly 25% of the entire data center segment. Our view is that Nvidia will retain that leading share throughout the forecast period -- assuming it avoids unforced errors -- despite intense competition from hyperscalers, AMD and others. Key takeaway: The anticipated shift toward accelerated compute forms the foundation of our bullish stance on data center growth. We believe extreme parallel computing ushers in a multi-year (or even multi-decade) supercycle for data center infrastructure investments. We assert that a new trillion-dollar-plus marketplace is emerging, fueled by AI. The data center -- as we have known it -- will transform into a distributed, parallel processing fabric where GPUs and specialized accelerators become the norm. Nvidia's tightly integrated platform (hardware + software + ecosystem) leads this transition, but it is not alone. Hyperscalers, competing semiconductor firms, and specialized startups all have roles to play in a rapidly expanding market. Despite our positive assessment, we acknowledge several risks: Final word: Nvidia's future looks bright, in our view, but it cannot be complacent. The company's best defense remains relentless innovation in both hardware and software -- a strategy that has carried it to where it is today and will likely drive its continued leadership in this new era of extreme parallel computing.
[24]
Nvidia announces big at CES 2025
The chipmaker made a number of big AI-related announcements aimed at industries and individual consumers. CES, the world's biggest annual tech event, kicked off in Las Vegas yesterday (6 January) with a number of major announcements from industry giants including Sony, Honda and Dell. However, chipmaker Nvidia made a particular splash with a suite of wide-ranging product announcements and collaborations, including the world's smallest AI supercomputer and a partnership with the world-leading automaker Toyota. Here's a breakdown of what Nvidia brought to the table.
World's smallest supercomputer
The chipmaker announced Project Digits, the world's smallest AI supercomputer, available from May and selling for a hefty retail price of $3,000. Digits will feature the new Nvidia GB10 Grace Blackwell Superchip, giving developers the ability to run up to 200-billion-parameter large language models. According to Nvidia, the supercomputer, which comes with 128GB of unified, coherent memory and up to 4TB of non-volatile memory express storage, will allow users to develop and run inference on models using their own desktop systems and deploy the models on data centre infrastructures. "AI will be mainstream in every application for every industry," said Jensen Huang, the founder and CEO of Nvidia. "With Project Digits, the Grace Blackwell Superchip comes to millions of developers...placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI."
Generative Physical AI
Nvidia has announced that it is expanding Omniverse, its 3D graphics collaboration platform, providing more physical AI applications for warehouses and factories. According to Huang, physical AI will "revolutionise" the $50trn manufacturing and logistics industries. "Everything that moves - from cars and trucks to factories and warehouses - will be robotic and embodied by AI," he said. Nvidia's generative AI models are set to accelerate the creation of 3D worlds for physical AI simulation by letting developers use text prompts to generate OpenUSD (universal scene description) assets and automatically label them, enabling developers to "process 1,000 3D objects in minutes" (a bare-bones OpenUSD example follows this article). Moreover, at CES yesterday, Nvidia announced "Mega," an Omniverse blueprint for developing, testing and optimising physical AI and robot fleets in digital twin form before they are deployed into real-world facilities. According to Nvidia, Mega will offer enterprises a suite of Nvidia tech, including Omniverse, to develop digital twins for testing the AI-powered brains that drive robots, video analytics AI agents and equipment, and for handling complexity and scale.
Automotive giants partner with Nvidia
Expanding on AI's physical applications, automakers Toyota, Aurora and Continental have announced partnerships with Nvidia to develop and build their consumer and commercial vehicle fleets on Nvidia's accelerated computing and AI. Toyota has announced plans to build its next-generation vehicles on Nvidia's safety-certified Drive operating system (OS), with the aim of offering vehicles functionally safe and advanced driving assistance capabilities, while Aurora and Continental announced long-term strategic partnerships with Nvidia to deploy driverless trucks at scale, also powered by Nvidia's Drive OS.
"The autonomous vehicle revolution has arrived, and automotive will be one of the largest AI and robotics industries," Huang said "Nvidia is bringing two decades of automotive computing, safety expertise and its Cuda AV platform to transform the multi-trillion dollar auto industry." Last October Nvidia announced a partnership with world-leading professional services firm Accenture to expand the adoption of generative AI tools by businesses worldwide. Don't miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic's digest of need-to-know sci-tech news.
[25]
Nvidia's Game-Changing CES 2025 Reveal: AI Breakthroughs, New GPUs, and a Bold Toyota Partnership
Nvidia's CEO, Jensen Huang, at the Consumer Electronics Show (CES) in Las Vegas, introduced a series of advancements in artificial intelligence (AI) and computing technologies, coinciding with Nvidia's stock reaching a record high. Nvidia CEO Jensen Huang took center stage at CES 2025, delivering a series of game-changing announcements that solidify the company's leadership in AI and computing. With Nvidia's shares hitting a record high just before the event, Huang revealed cutting-edge technologies like the GeForce RTX 50-series GPUs powered by the Blackwell AI processor, a revolutionary leap for gaming and graphics. He also introduced "Nvidia Cosmos," a platform to develop autonomous vehicles, and "Project Digits," an advanced AI desktop for researchers. Huang unveiled the GeForce RTX 50-series GPUs for desktops and laptops, powered by the new Blackwell AI processor. These GPUs promise significant advancements in AI-driven rendering, enhancing realism in gaming graphics. Huang described this innovation as the most significant in graphics technology since programmable shaders were introduced 25 years ago. Nvidia introduced Cosmos, a computing platform designed to develop next-generation autonomous vehicles and robots using synthetic data. Early adopters, such as Uber, are already utilizing this technology. Additionally, Huang announced Project Digits, a high-end desktop computer tailored for AI researchers. Priced at $3,000, it is expected to be available by May 2025. In a significant move into the automotive sector, Nvidia revealed a collaboration with Toyota. Toyota's upcoming vehicles will incorporate Nvidia's Drive AGX Orin supercomputer and DriveOS operating system to power advanced driver assistance systems. This partnership aims to enhance the capabilities of autonomous vehicles, with Nvidia projecting its automotive revenue to reach $5 billion in fiscal 2026. What did Nvidia announce at CES 2025? Nvidia unveiled new technologies like the GeForce RTX 50-series GPUs, Nvidia Cosmos for autonomous systems, Project Digits for AI researchers, and a major partnership with Toyota for autonomous vehicles. What is special about the GeForce RTX 50-series GPUs? The GeForce RTX 50-series GPUs are powered by Nvidia's Blackwell AI processor, offering advanced AI-driven graphics and improved gaming experiences.
[26]
Nvidia's new GPU series led an avalanche of entertainment-related announcements at CES
In a packed Las Vegas arena, Nvidia founder Jensen Huang stood on stage and marveled over the crisp real-time computer graphics displayed on the screen behind him. He watched as a dark-haired woman walked through ornate gilded double doors and took in the rays of light that poured in through stained glass windows. "The amount of geometry that you saw was absolutely insane," Huang told an audience of thousands at CES 2025 Monday night. "It would have been impossible without artificial intelligence." The chipmaker and AI darling unveiled its GeForce RTX 50 Series desktop and laptop GPUs -- powered by its new Blackwell artificial intelligence chip -- kicking off a string of entertainment-related AI announcements and discussions at the trade show. "Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives," Huang said, adding that Blackwell "is the most significant computer graphics innovation since we introduced programmable shading 25 years ago." Blackwell technology is now in full production, he said. Semiconductor maker AMD unveiled its latest Ryzen 9 and AI series processors Monday morning, boasting unprecedented performance for gamers and content creators. The new chips help AMD to further compete with rivals like Nvidia, Intel and Qualcomm in the budding AI PC space. "With the next generation of AI-enabled processors, we are proliferating AI to devices everywhere, and bringing the power of a workstation to thin and light laptops," said Jack Huynh, senior vice president and general manager of computing and graphics group at AMD. Google, meanwhile, previewed new AI tools for Google TV that use Gemini to make "interacting with your TV more intuitive and helpful." Users, the company said, will be able to have a "natural" conversation with their TVs to ask about things like travel and history, or ask the TV for an overview of the day's news. Samsung also showed off its foray into AI and announced its "Samsung Vision AI" that includes a click-to-search feature allowing users to do things like identify an actor on screen, and a translation feature that provides real-time subtitles. It also integrates with the rest of the company's smart home ecosystem. SW Yong, president and head of visual display business at Samsung Electronics, said the company sees TVs as "interactive, intelligent partners" rather than "one-directional devices for passive consumption." "We're reimagining what screens can do, connecting entertainment, personalization and lifestyle solutions into one seamless experience to simplify your life," he said. But not all of the AI discussion revolved around gadgetry at CES. Leaders in technology and entertainment discussed current trends in generative AI ahead of Tuesday's conference opener. In one panel discussion on entertainment copyright and AI, some attorneys and experts gave their opinions on whether the federal government would pass regulations on the technology this year, especially around the issue of gen-AI created deep fakes. Some believe the courts and individual states would tackle the issue before the government would. "There have been no major decisions on this issue. They will be litigated and tried in the next year or so," said Chad Hummel, an attorney at McKool Smith. Lisa Oratz, an attorney at Perkins Coie who represents clients in the publishing, arts and entertainment industries, acknowledged that AI technology should be regulated but noted it has an "upside."
She said many of her tech clients' jobs are being made easier because AI helps alleviate iterative work. "You can make content creation faster, easier and more affordable. You can do things like reduce barriers to entry and democratize content," she said. However, Screen Actors Guild-American Federation of Television and Radio Artists executive director Duncan Crabtree-Ireland said that digital replication was central to their 2023 film and television strike, and that a lack of protections around the unregulated use of AI is core to negotiations between their video game performers and the industry. "It is a tool and it is also an existential threat," he said.
[27]
Nvidia ramps up AI tech for games, robots and autos
AFP - Nvidia Chief Executive Officer Jensen Huang made a rock star appearance at a packed arena on Monday, touting artificial intelligence (AI) chips and software for robots, cars, video games and more. After years of being on the sidelines at the annual Consumer Electronics Show (CES) in Las Vegas, talk of computer chips was a hot ticket as people queued for hours to fill an arena to hear Huang talk AI. "When you see application after application that is AI driven, at the core of it is that machine learning has changed how computing will be done," Huang said during a one-man presentation on stage. "There are so many things you can't do without AI." Huang's keynote came on the eve of the opening of the CES show floor, and on a day that Nvidia shares closed at a new record, giving the Silicon Valley company a market valuation of more than USD3.6 trillion. Nvidia's graphics processing units (GPUs) for powering AI in data centres have been snapped up by Google, Microsoft, Meta, OpenAI and others racing to be leaders in the technology. During a lengthy presentation in Michelob Ultra Arena at Mandalay Bay resort, Huang introduced a GPU for ramping up AI capabilities in personal computers, the market where Nvidia won the loyalty of gamers in the company's early days. Nvidia touted the new GeForce RTX 50 series for desktop and laptop computers based on Blackwell chip architecture as its most advanced consumer GPUs. "Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives," Huang said. PCs enhanced with RTX chips for AI capabilities will be available from an array of manufacturers including Acer, Dell, HP, Lenovo, Razer and Samsung, according to Nvidia. An AI PC displayed during the presentation was priced at USD1,299, built with the USD549 RTX chip that sits at the starting point of the new GPU line-up. Along with rapid rendering of rich gameplay action, Nvidia AI technology will enable the creation of characters that perceive, plan and act like human players, according to Nvidia. Such autonomous characters are being integrated into games including PUBG: Battlegrounds, according to Nvidia. Huang also introduced a family of foundation models, open to the world, for advancing "physical AI" that enables robots to understand and engage in real-world tasks. Nvidia expanded partnerships and technology for autonomous capabilities in cars as well.
[28]
Nvidia's CES 2025 keynote: Everything you need to know
Euronews Next was at the chip-maker's keynote speech in Las Vegas, which saw a rare appearance by founder and CEO Jensen Huang. Chip maker Nvidia has announced its own foundation models, an artificial intelligence (AI) supercomputer, and a partnership with Toyota for automated driving at the CES technology fair in Las Vegas. In CES' keynote speech on Monday, the company's founder and CEO Jensen Huang came onto the stage wearing his signature black leather jacket, this time with a twist: it was adorned with sparkles that matched a massive wafer with 72 chips on it. The wafer, a mock-up of Nvidia's 72-GPU Grace Blackwell system, has 600,000 parts and 2 miles of copper cable, he said to a packed audience. Huang also presented Nvidia's so-called Project Digits, a personal AI supercomputer that provides AI researchers, data scientists and students worldwide with access to the power of the Nvidia Grace Blackwell. "AI will be mainstream in every application for every industry. With Project Digits, the Grace Blackwell Superchip comes to millions of developers," said Huang. "Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI". With the supercomputer, developers can run up to 200-billion-parameter large language models to supercharge AI innovation, the company said. The technology will be available in May and prices start from $3,000 (€2,800). Huang said the next stage of AI is physical AI, which the company says means autonomous machines can perceive, understand, and perform complex actions in the real world. "Physical AI will revolutionise the $50 trillion (€48 trillion) manufacturing and logistics industries," Huang said. "Everything that moves -- from cars and trucks to factories and warehouses -- will be robotic and embodied by AI". Nvidia also announced a partnership with Toyota to bring automated driving capabilities to new vehicles. "The autonomous vehicle revolution has arrived, and automotive will be one of the largest AI and robotics industries," said Huang. Autonomous vehicle company Aurora and automotive supplier Continental also announced this week a long-term partnership to deploy driverless trucks at scale, powered by Nvidia Drive, the company's AI-assisted driving platform. Another way Nvidia is betting on physical AI is with its own foundation models. Nvidia announced a platform of world foundation models designed for robotics called Cosmos. These are open and available from Nvidia, GitHub, and Hugging Face. Huang also announced a "family" of new chips for desktop and laptop PCs that use the same architecture as the company's fastest AI processors for data centres. Called the RTX 50-series, the graphics cards cost up to $1,999 (€1,900). The chips can be used to run AI models but also computer graphics, and can use AI to boost gaming frame rates. Nvidia said the new processors will also be powerful enough to run large language models (LLMs).
[29]
Nvidia CEO set to take stage at CES just after shares hit record high
(Reuters) - Nvidia Chief Executive Jensen Huang is set to deliver the opening keynote speech at CES later on Monday and will likely unveil new videogame chips and detail efforts to parlay the company's success in artificial intelligence into other markets outside of the data center. Huang typically uses CES as a platform to announce new videogame chips and unveil a flurry of new plans to expand Nvidia's AI business. CES 2025, formerly known as the Consumer Electronics Show, runs Jan. 7-10 in Las Vegas and is used to debut products ranging from new automotive technology to quirky gadgets, as well as to show new ways to use artificial intelligence. Nvidia's stock closed at a record high of $149.43 on Monday, bringing its valuation to $3.66 trillion, making it the world's second-most valuable listed company behind Apple. Nvidia's booming valuation has come from the rapid growth of its data center business, where firms such as OpenAI use its chips to develop AI technology. Analysts expect that part of Nvidia's business to hit $113 billion in sales this fiscal year, according to LSEG data. That is more than double the $47.5 billion figure in fiscal 2024. Nvidia still has a substantial consumer business selling graphics processing units to PC gamers, a business that analysts expect to reach $11.77 billion this year. Nvidia still leads the market in gaming chips, where it competes with Advanced Micro Devices and to a lesser extent Intel. Last year, Nvidia unveiled its Blackwell AI server architecture at its developer conference in March. Its new line of graphics processing units (GPUs) will likely be based on similar Blackwell technology. New videogame graphics chips typically boast improved performance and image quality. Nvidia is also increasingly looking to translate its lead in data centers into the broader PC market by positioning its gaming chips as useful in corporate PCs and laptops for handling AI work such as chatbots and "agents" that can help carry out business tasks. That puts the company in direct competition with firms such as Intel and Qualcomm, which are hoping that AI features will spark a new round of PC upgrades. (Reporting by Max Cherney and Stephen Nellis in San Francisco; Editing by Matthew Lewis)
[30]
Nvidia chief calls robots 'multitrillion-dollar' opportunity for next stage of growth
Nvidia is on the cusp of revolutionising robotics through artificial intelligence, chief executive Jensen Huang said on Monday, as he outlined his vision for the next stage of the company's staggering growth. Huang announced a range of new products and partnerships in the "physical AI" space, including AI models for humanoid robots and a major deal with Toyota to use Nvidia's self-driving car technology, during his keynote speech at the annual Consumer Electronics Show in Las Vegas. Nvidia has flown past a $3tn market capitalisation off the back of demand for its AI chips to become one of the world's most valuable companies. Huang in turn has become a household name, more than 30 years after he founded Nvidia as a video game graphics chip company. Massive queues had formed outside the Mandalay Bay convention centre long before the keynote started, with some people still lining up when Huang emerged onstage in a sparkly version of his trademark leather jacket, quipping: "I'm in Las Vegas after all." Outside semiconductors, Nvidia has been building the software that allows companies to train and deploy robots, from those used in smart factories and warehouses to self-driving cars and humanoids, pushing to expand the use cases for AI running on its chips. Cracking the technological challenges involved in deploying robots at scale will pave the way to "the largest technology industry the world has ever seen", said Huang. Nvidia said the field of robotics had reached a technological tipping point, as AI accelerates and fine-tunes the process of simulating the physical world and generating the vast amounts of data needed to train robots. In the next two decades, the market for humanoid robots alone is expected to reach $38bn, according to the company. On Monday, Nvidia announced a suite of foundational AI models on its new Cosmos platform, which developers can use for free to generate data and build their own models. Nvidia said the foundation models, which it said were trained on 20mn hours of video data, were as fundamental a technological development as the large language models that underpin apps such as OpenAI's ChatGPT. It pairs with Nvidia's Omniverse platform, which is used to run simulations of the physical world. "What [those models] are doing for language, we can now do for understanding the physical world," Rev Lebaredian, Nvidia's vice-president for Omniverse and simulation technology, told the Financial Times. While data on the physical world is much harder to gather and process than text, Lebaredian said "it's a necessary part" of the company's mission. "The big takeaway [from Huang's CES speech] is that this moment is going to be a special one," he added. "I think this year is an inflection point where we're going to see this acceleration of physical AI and robotics." The Omniverse platform and robotics currently represent a small share of the company's overall revenue. For Nvidia's quarter to the end of October, "professional virtualisation" accounted for $486mn of revenue, while automotive and robotics totalled $449mn. This is a sliver of overall sales, as the company raked in $30.8bn in revenue from selling chips for the data centres that power AI models in the same period. Nvidia's search for new markets comes as it faces growing pressure from its biggest customers, including Amazon and Microsoft, which are rushing to build their own in-house AI data centre chips. Bank of America analysts said Nvidia's decision to double down on "physical AI" was the "next logical step". 
The challenge would be in "making the products reliable enough, cheap enough and pervasive enough to spawn credible business models", they added. At CES, Nvidia also unveiled a collection of foundation models for humanoid robots, called the "GR00T Blueprint", which it said would "supercharge" the development of robots, as well as new tools for developing and testing fleets of factory and warehousing robots and training autonomous vehicles. Toyota announced it would build its next generation of autonomous vehicles on Nvidia's hardware and software, known as Drive AGX. Self-driving car group Aurora and automotive parts maker Continental will use Nvidia's hardware and software to power thousands of driverless trucks under their long-term strategic partnerships with the chipmaker. Nvidia said it expected its automotive business to grow to $6bn in the 2026 fiscal year. Autonomous vehicles "will be the first multitrillion-dollar robotics industry", Huang told the CES audience. Separately, Nvidia said it would release a "personal AI supercomputer" with its latest and most powerful AI chip, Blackwell, allowing researchers and students to run multibillion-parameter AI models locally rather than through the cloud. It will be available in May at an initial price tag of $3,000. Nvidia shares were flat in after-hours trading on Monday but have risen more than 11 per cent since the start of the year, putting it within striking distance of overtaking Apple as the most valuable US-listed company.
[31]
Everything NVIDIA CEO Jensen Huang announced at its CES 2025 keynote
CEO Jensen Huang talked DLSS 4, RTX 5090 and Project Digits, a mini AI PC. NVIDIA held its CES 2025 keynote last night with CEO Jensen Huang and it was surprisingly eventful. The company finally unveiled its much-awaited GeForce RTX 5000 GPUs that promise a considerable performance uplift, to start with. The company didn't stop there, also announcing Project Digits, a personal AI supercomputer, along with DLSS 4 and more. Here's a wrap-up of what happened. (The full keynote runs more than 90 minutes.) Huang strode out in a new snakeskin-like leather jacket and revealed the much-anticipated RTX 5090 GPU. With 32GB of GDDR7 RAM and an impressive 21,760 CUDA cores, the new flagship can deliver up to twice the relative performance of its predecessor, particularly for ray-tracing (RT) intensive games like Cyberpunk 2077. In fact, that particular title ran at 234 fps with full RT on in a video demo, compared to 109 fps on the RTX 4090. It's not cheap, though, priced at $1,999. The company also revealed the $549 RTX 5070 with a far more modest 6,144 CUDA cores and 12GB of GDDR7 RAM, along with the $749 RTX 5070 Ti and $999 RTX 5080. A key part of the RTX 5000-series launch was the introduction of DLSS 4, the latest version of the company's real-time image upscaling technology. It features a new technology called Multi Frame Generation that allows the new GPUs to generate up to three additional frames for every one frame the GPU produces via traditional rendering -- helping multiply frame rates by up to eight times. It also represents what NVIDIA calls the "biggest upgrade to its AI models" since DLSS 2, improving things like temporal stability and detail, while reducing artifacts like ghosting. Finally, NVIDIA launched Project Digits, a "personal AI supercomputer" designed for AI researchers, data scientists and students. It uses NVIDIA's new GB10 Grace Blackwell superchip, providing up to a petaflop of performance for testing and running AI models. The company says a single Project Digits unit can run models up to 200 billion parameters in size, or multiple machines can be linked together to run up to 405 billion parameter models. And for its intended audience, Project Digits is relatively cheap at $3,000. On top of all that, the company introduced NVIDIA Cosmos world foundation models for robot and AV development, the NVIDIA DRIVE Hyperion AV platform for autonomous vehicles and AI Foundation models for RTX PCs "that supercharge digital humans." It's all explained in NVIDIA's CES 2025 keynote blog. CES -- and Huang's keynote -- are happening against the backdrop of continued volatility in the company's stock price. NVIDIA shares (ticker NVDA) spiked ahead of Huang's address, closing on Monday just shy of Apple's market cap pinnacle. But Tuesday saw a reversal, with the stock down more than 6 percent. Still, some are betting it's a toss-up between the two tech giants as to which will hit the $4 trillion market valuation first. Update, January 7, 2025, 4:18PM ET: This story has been updated with new details on Nvidia's stock price.
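The "up to eight times" figure above comes from stacking Multi Frame Generation on top of DLSS upscaling. A back-of-envelope sketch of that arithmetic follows; the 2x upscaling factor is our assumption for illustration, not a number NVIDIA quotes, and real gains vary by game and settings.

```python
# How "up to 8x" can fall out of DLSS 4's two mechanisms (illustrative only).
base_fps = 30                      # frames fully rendered at native resolution
upscaling_speedup = 2.0            # assumed gain from rendering at lower res + Super Resolution
extra_frames_per_rendered = 3      # Multi Frame Generation: up to 3 generated frames per rendered one

rendered_fps = base_fps * upscaling_speedup                      # 60 rendered frames/s
displayed_fps = rendered_fps * (1 + extra_frames_per_rendered)   # 240 displayed frames/s

print(displayed_fps / base_fps)    # 8.0x in this toy case
```

Generated frames improve smoothness but not input latency, which is why frame generation is typically paired with latency-reduction technology such as NVIDIA Reflex.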
[32]
Like it or not, Nvidia stole the show at CES 2025
Great, here's the entitled journalist telling me that the $2,000 graphics card won CES 2025. I've seen plenty of strong opinions about Nvidia's CES announcements online, but even ignoring the bloated price of the new RTX 5090, Nvidia won this year's show. And it kind of won by default. Between Intel's barebones announcements and an overstuffed AMD presentation that ignored what might be AMD's most important GPU launch ever, it's not surprising that Team Green came out ahead. But that's despite the insane price of the RTX 5090, not because of it. Nvidia introduced a new range of graphics cards, and the impressive multi-frame generation of DLSS 4, but its announcements this year were much more significant than that. It all comes down to the ways that Nvidia is leveraging AI to make PC games better, and the fruits of that labor may not arrive immediately. There are the developer-facing tools like Neural Materials and Neural Texture Compression, both of which Nvidia briefly touched on during its CES 2025 keynote. For me, however, the standout is neural shaders. They certainly aren't as exciting as a new graphics card, at least on the surface, but neural shaders have massive implications for the future of PC games. Even without the RTX 5090, that announcement alone is significant enough for Nvidia to steal this year's show. Neural shaders aren't some buzzword, though I'd forgive you for thinking that given the force-feeding of AI we've all experienced over the past couple of years. First, let's start with the shader. If you aren't familiar, shaders are essentially the programs that run on your GPU. Decades ago, you had fixed-function shaders; they could only do one thing. In the early 2000s, Nvidia introduced programmable shaders that had far greater capabilities. Now, we're getting neural shaders. In short, neural shaders allow developers to add small neural networks to shader code. Then, when you're playing a game, those neural networks can be deployed on the Tensor cores of your graphics card. It unlocks a boatload of computing horsepower that, up to this point, had fairly minimal applications in PC games. They were really just fired up for DLSS. (Video: Introducing NVIDIA RTX Kit: Transforming Rendering with AI and Path Tracing) Nvidia has announced three uses for neural shaders so far -- the aforementioned Neural Materials and Neural Texture Compression, and Neural Radiance Cache. I'll start with the last one because it's the most interesting. The Neural Radiance Cache essentially allows AI to guess what an infinite number of light bounces in a scene would look like. Now, path tracing in real time can only handle so many light bounces. After a certain point, it becomes too demanding. Neural Radiance Cache not only unlocks more realistic lighting with far more bounces but also improves performance, according to Nvidia. That's because it only requires one or two light bounces. The rest are inferred by the neural network. Similarly, Neural Materials compresses dense shader code that would normally be reserved for offline rendering, allowing what Nvidia calls "film-quality" assets to be rendered in real time. Neural Texture Compression applies AI to texture compression, which Nvidia says saves 7x the memory compared with traditional block-based compression, without any loss in quality.
That's just three applications of neural networks being deployed in PC games, and there are already big implications for how well games can run and how good they can look. It's important to remember that this is the starting line, too -- AMD, Intel, and Nvidia all have AI hardware on their GPUs now, and I suspect there will be quite a lot of development on what kinds of neural networks can go into a shader in the future. Maybe there are cloth or physics simulations that are normally run on the CPU that can be run through a neural network on Tensor cores. Or maybe you can expand the complexity of meshes by inferring triangles that the GPU doesn't need to account for. There are the visible applications of AI, such as through non-playable characters, but neural shaders open up a world of invisible AI that makes rendering more efficient, and therefore, more powerful. It's easy to get lost in the sauce of CES. If you were to believe every executive keynote, you would walk away with literally thousands of "ground-breaking" innovations that barely manage to move a patch of dirt. Neural shaders don't fit into that category. There are already three very practical applications of neural shaders that Nvidia is introducing, and people much smarter than myself will likely dream up hundreds more. I should be clear, though -- that won't come right away. We're only seeing the very surface of what neural shaders could be capable of in the future, and even then, it'll likely be multiple years and graphics card generations down the road before their impact is felt. But when looking at the landscape of announcements from AMD, Nvidia, and Intel, only one company introduced something that could really be worthy of that "ground-breaking" title, and that's Nvidia.
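To make the neural-shader idea concrete, here is a toy sketch of the principle behind neural texture compression: overfit a tiny network to a texture so its weights become the compressed representation, and every texture lookup becomes a forward pass. This is an illustration under our own assumptions (PyTorch, a random stand-in texture, no positional encoding), not Nvidia's RTX Kit implementation, which runs inside shaders on Tensor cores.

```python
# Toy neural texture compression: fit a small MLP that maps (u, v) -> RGB.
# The network's weights act as the "compressed" texture; sampling it is a
# forward pass. Stand-in data and architecture are illustrative only.
import torch
import torch.nn as nn

H = W = 256
texture = torch.rand(H, W, 3)            # stand-in for a real texture

# One (u, v) coordinate in [0, 1]^2 per texel, plus its target colour.
v, u = torch.meshgrid(
    torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij"
)
coords = torch.stack([u, v], dim=-1).reshape(-1, 2)   # (H*W, 2)
colors = texture.reshape(-1, 3)                       # (H*W, 3)

# Deliberately tiny MLP: far fewer parameters than raw texels.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):                  # overfit the network to this one texture
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), colors)
    loss.backward()
    opt.step()

# "Decompressing" a texel is just evaluating the network at its (u, v).
with torch.no_grad():
    reconstructed = model(coords).reshape(H, W, 3)

raw_bytes = texture.numel() * 4
net_bytes = sum(p.numel() for p in model.parameters()) * 4
print(f"raw texture: {raw_bytes} bytes, network: {net_bytes} bytes")
```

A production version would need positional encodings (a bare coordinate MLP struggles with high-frequency detail), quantised weights and an inference path cheap enough to run per pixel inside a shader, but the compression-by-overfitting principle is the same.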
[33]
Nvidia's tiny $3,000 computer steals the show at CES
Nvidia CEO Jensen Huang was greeted as a rock star this week at CES in Las Vegas, following an artificial intelligence boom that's made the chipmaker the second most-valuable company in the world. At his nearly two-hour keynote on Monday kicking off the annual conference, Huang packed a 12,000-seat arena, drawing comparisons to the way Steve Jobs would reveal products at Apple events. Huang concluded with an Apple-like trick: a surprise product reveal. He presented one of Nvidia's server racks and, using some stage magic, held up a much smaller version, which looked like a tiny cube of a computer. "This is an AI supercomputer," Huang said, while donning an alligator skin leather jacket. "It runs the entire Nvidia AI stack. All of Nvidia's software runs on this." Huang said the computer is called Project Digits and runs off a relative of the Grace Blackwell graphics processing units (GPUs) that are currently powering the most advanced AI server clusters. The GPU is paired with an ARM-based Grace central processing unit (CPU). Nvidia worked with Taiwanese semiconductor company MediaTek to create the system-on-a-chip called GB10. Formerly known as the Consumer Electronics Show, CES is typically the spot to launch flashy and futuristic consumer gadgets. At this year's show, which started on Tuesday and wraps up on Friday, several companies announced AI integrations with appliances, laptops and even grills. Other major announcements included a laptop from Lenovo with a rollable screen that can expand vertically. There were also new robots, including a Roomba competitor with a robotic arm.
[34]
Everything Nvidia announced at CES 2025
Nvidia CEO Jensen Huang at CES 2025. Credit: Bridget Bennett / Bloomberg / Getty Images At the Nvidia keynote at CES 2025, CEO Jensen Huang didn't waste any time showing off the new GeForce RTX 50 Series. Huang walked onstage carrying the graphics card to a round of applause. This was the most anticipated moment of the Nvidia event, but not the only big announcement. The AI computing company integral to the rise of generative AI had many more cards to play at the Las Vegas tech conference. Nvidia is now building its own AI models, fueling robotics and autonomous vehicle development, and bringing some of the most powerful computing tools to the masses. Here's everything that was announced at the Nvidia keynote. The big news, of course, was Nvidia's new GPUs, the GeForce RTX 50 Series. The graphics cards are underpinned by Nvidia's new RTX Blackwell architecture and consist of the flagship GeForce RTX 5090 as well as the GeForce RTX 5080, 5070 Ti, and 5070. The RTX 50 series is powered by 92 billion transistors, which gives it 3,352 trillion AI operations per second (TOPS) and boasts 1.8TB/s of memory bandwidth. Mashable's Chance Townsend and Alex Perry have the full details on specs, availability, and pricing, but rest assured, it's "just a beast," as Huang put it. The graphics card giant is getting into the world model game with the introduction of Nvidia Cosmos. World models are the underlying technology for robotics training. And Nvidia has made its Cosmos World Foundation Models (Cosmos WFM) available as an open-license platform on GitHub, granting broader access to robotics developers that previously lacked these resources or expertise. "The ChatGPT moment for general robotics is just around the corner," said Huang. Nvidia also introduced AI foundation models for LLM development. AI foundation models for RTX PCs are "offered as Nvidia NIM microservices" and use the GeForce RTX 50 Series GPUs. Additionally, Huang shared that top manufacturers are launching PCs that support NIM with the new graphics cards, adding "AI PCs are coming to a home near you." Another NIM microservice announcement introduced the Llama Nemotron family of LLMs. Built on Meta's open-source Llama models, the Llama Nemotron models are primed for agentic capabilities and "excel at instruction following, chat, function calling, coding and math, while being size-optimized to run on a broad range of NVIDIA accelerated computing resources," according to the announcement. Llama 3.1 Nemotron 70B is now available in Nvidia's API catalog. In keeping with the theme of empowering developers with access to powerful computing tools, Nvidia unveiled Project Digits. The device is a supercomputer about the size of a Mac mini that easily sits on a desk and plugs into a keyboard and monitor. With its GB10 Grace Blackwell Superchip, Digits can run up to 200-billion-parameter LLMs without the need for cloud infrastructure. And it's $3,000 a pop, which, in the grand scheme of things, is a pretty accessible price point for small businesses and solo developers. Project Digits is expected this coming May. Nvidia has also been working hard in the autonomous vehicle department, introducing the DRIVE Hyperion AV platform, powered by the AGX Thor system-on-a-chip (SoC). DRIVE Hyperion is an "end-to-end autonomous driving platform" that includes the SoC, sensors, safety systems, and a DriveOS operating system that car manufacturers can use to build their autonomous vehicles.
Nvidia also shared that Toyota joins the growing list of partners using its AV platform, which includes Mercedes-Benz, Jaguar Land Rover, and Volvo.
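The Llama Nemotron microservices mentioned above are exposed through Nvidia's API catalog with an OpenAI-compatible interface. The sketch below shows what a call might look like; the endpoint URL, model identifier and environment variable are assumptions for illustration, so check the catalog for the actual values.

```python
# Minimal sketch of calling a Llama Nemotron NIM microservice via an
# OpenAI-compatible endpoint. Base URL, model id and env var are assumed
# for illustration; consult Nvidia's API catalog for the real values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",   # assumed endpoint
    api_key=os.environ["NVIDIA_API_KEY"],             # assumed env var
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",   # assumed model id
    messages=[
        {"role": "system", "content": "You are a concise support agent."},
        {"role": "user", "content": "Summarize the return policy in two sentences."},
    ],
    temperature=0.2,
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Because the interface mirrors the OpenAI client, existing tooling that speaks that protocol can generally point at such an endpoint with only a base-URL and model-name change.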
[35]
Everything Announced at Nvidia's CES Event in 12 Minutes - Video
At CES 2025, Nvidia CEO Jensen Huang kicks off CES, the world's largest consumer electronics show, with a new RTX gaming chip, updates on its Grace Blackwell AI chips and its future plans to dig deeper into robotics and autonomous cars. Here it is: our brand new GeForce RTX 50 Series, Blackwell architecture. The GPU is just a beast: 92 billion transistors, 4,000 TOPS, 4 petaflops of AI, three times higher than the last generation, Ada, and we need all of it to generate those pixels that I showed you. 380 ray-tracing teraflops, so that for the pixels we have to compute, we compute the most beautiful image we possibly can, and of course 125 shader teraflops. There's actually a concurrent shader unit as well as an integer unit of equal performance, so dual shaders: one for floating point, one for integer. GDDR7 memory from Micron, 1.8 terabytes per second, twice the performance of our last generation, and we now have the ability to intermix AI workloads with computer graphics workloads. And one of the amazing things about this generation is that the programmable shader is also able to process neural networks. The shader is able to carry these neural networks, and as a result we invented neural texture compression and neural material shading. With the Blackwell family, the RTX 5070 offers 4090 performance at $549. Impossible without artificial intelligence, impossible without the AI Tensor Cores, impossible without the GDDR7 memory. So: 5070, 4090 performance, $549. And here's the whole family, starting from the 5070 all the way up to the 5090; the 5090, at $1,999, is twice the performance of a 4090. We're producing at very large scale, with availability starting in January. And, incredibly, we managed to put these gigantic-performance GPUs into a laptop. This is a 5070 laptop for $1,299, and this 5070 laptop has 4090 performance. Even the 5090 will fit into a thin laptop; that last laptop was 14.9 millimeters. You've also got a 5080, a 5070 Ti and a 5070.
But what we basically have here is 72 Blackwell GPUs, or 144 dies. This one chip here is 1.4 exaflops; the world's largest, fastest supercomputer only recently achieved more than an exaflop, and that is an entire room of a supercomputer. This is 1.4 exaflops of AI floating-point performance. It has 14 terabytes of memory, but here's the amazing thing: the memory bandwidth is 1.2 petabytes per second. That's basically the entire world's internet traffic right now; the entire world's internet traffic could be processed across these chips. And we have 130 trillion transistors in total, 2,592 CPU cores and a whole bunch of networking. These are the Blackwells, these are our ConnectX networking chips, these are the NVLinks, and we're trying to pretend about the NVLink spine, but that's not possible. And these are all of the HBM memories: 14 terabytes of HBM memory. This is what we're trying to do, and this is the miracle of the Blackwell system.
So we fine-tune them using our expertise and our capabilities, and we turn them into the Llama Nemotron suite of open models. There are small ones that interact with very fast response times; there are the Llama Nemotron Supers, basically the mainstream versions of the models; and there's the Ultra model, which could be used as a teacher model for a whole bunch of other models. It could be a reward-model evaluator, a judge for other models, deciding whether their answers are good or not and giving them feedback. It could be distilled in a lot of different ways: basically a teacher model, a knowledge-distillation model, very large and very capable. All of this is now available online.
And Cosmos, the world's first world foundation model, is trained on 20 million hours of video. The 20 million hours of video focus on physically dynamic things: dynamic nature, nature themes, humans walking, hands moving, manipulating things, fast camera movements. It's really about teaching the AI not to generate creative content but to understand the physical world, and with this physical AI there are many downstream things we can do. We could do synthetic data generation to train models. We could distill it and turn it into, effectively, the beginnings of a robotics model. You could have it generate multiple physically based, physically plausible scenarios of the future, basically do a Doctor Strange. Because this model understands the physical world (you saw a whole bunch of images generated by this model understanding the physical world), it can also do captioning: it could take videos and caption them incredibly well, and that captioning and the video could be used to train large language models, multimodal large language models. So you could use this foundation model to train robots as well as large language models. This is Nvidia Cosmos. The platform has an autoregressive model for real-time applications, a diffusion model for very high-quality image generation, an incredible tokenizer, basically learning the vocabulary of the real world, and a data pipeline, so that if you would like to take all of this and train it on your own data, we've accelerated everything end to end for you, because there's so much data involved. This is the world's first data processing pipeline that's CUDA-accelerated as well as AI-accelerated. All of this is part of the Cosmos platform, and today we're announcing that Cosmos is open-licensed and available on GitHub.
Today we're also announcing that our next-generation processor for the car, our next-generation computer for the car, is called Thor. I have one right here: this is Thor. This is a robotics computer. It takes sensors and a maddening amount of sensor information: umpteen cameras, high-resolution radars, lidars, all coming into this chip, and this chip has to process all of that sensor data, turn it into tokens, put it into a transformer and predict the next path. This AV computer is now in full production. Thor is 20 times the processing capability of our last generation, Orin, which is really the standard of autonomous vehicles today. This robotics processor also goes into a full robot, so it could be an AMR, it could be a humanoid robot, it could be the brain, it could be the manipulator; this processor is basically a universal robotics computer.
The ChatGPT moment for general robotics is just around the corner. In fact, all of the enabling technologies I've been talking about are going to make it possible, in the next several years, to see very rapid, surprising breakthroughs in general robotics. The reason general robotics is so important is that whereas robots with tracks and wheels require special environments to accommodate them, there are three robots in the world that we can make that require no green fields; brownfield adaptation is perfect. If we could possibly build these amazing robots, we could deploy them in exactly the world that we've built for ourselves. These three robots are, number one, agentic robots and agentic AI, because they're information workers, and so long as they can accommodate the computers we have in our offices, it's going to be great. Number two, self-driving cars, and the reason for that is we spent 100-plus years building roads and cities. And number three, humanoid robots. If we have the technology to solve these three, this will be the largest technology industry the world has ever seen.
This is Nvidia's latest AI supercomputer, and it's called Project Digits for now; if you have a good name for it, reach out to us. Here's the amazing thing: this is an AI supercomputer. It runs the entire Nvidia AI stack; all of Nvidia's software runs on this. DGX Cloud runs on this. It sits, well, somewhere, and it's wireless or, you know, connected to your computer; it's even a workstation if you like it to be, and you can reach it like a cloud supercomputer, and Nvidia's AI works on it. It's based on a super-secret chip we've been working on called GB10, the smallest Grace Blackwell that we make, and this is the chip that's inside. This top-secret chip is in production; the Grace CPU was built for Nvidia in collaboration with MediaTek, the world's leading SoC company, and they worked with us to build this CPU SoC and connect it chip-to-chip with NVLink to the Blackwell GPU. This little thing here is in full production, and we're expecting this computer to be available around the May timeframe.
[36]
LIVE | CES 2025: NVIDIA's 'World Foundation Model' aims at making images, text into tasks for robots
CEO Jensen Huang delivered the opening keynote on January 6, 2025, focusing on new developments in gaming, AI, robotics, and automotive technology. NVIDIA introduced the GeForce RTX 5090, anticipated to be the most powerful single GPU with significant performance improvements over the RTX 4090. Additionally, NVIDIA unveiled the "World Foundation Model," aimed at transforming images and text into actionable tasks for robots.
[37]
Watch NVIDIA CEO Jensen Huang deliver his CES 2025 keynote live here
GeForce RTX 5000? More Blackwell AI chips? All will be revealed tonight at 9:30PM ET. NVIDIA founder and CEO Jensen Huang had an absolutely spectacular 2024. Consider that the chip giant's stock price finished last year up 178 percent, and its market cap of more than $3.66 trillion -- with a T -- is currently second only to Apple. That's thanks to the fact that the ongoing AI revolution is powered largely by NVIDIA processors, and the company is raking in billions on its hardware even as its customers stay firmly in the red. But that's why his task at CES 2025 in Las Vegas is daunting. He needs to outline a 2025 game plan that can somehow one-up the past 12 months. You can watch Jensen Huang's CES keynote as it happens right here, and catch Engadget's real-time live blog commentary as well. Huang will be taking the stage at the Mandalay Bay on Monday, January 6 at 9:30PM ET. Ahead of Huang's keynote, the Engadget team is also covering the Samsung and Sony CES 2025 press conferences in our liveblog. Rival chip giants Intel, Qualcomm and AMD have already had their bite at the CES apple, and now it's NVIDIA's turn. For 2025, look for the inevitable sequels, with rumors suggesting a blazing fast RTX 5090 for starters. Of course, Wall Street will be more focused on the details Huang will undoubtedly share on the status of NVIDIA's AI hardware. We're likely to hear more news on the company's Blackwell AI chips, which should begin shipping in greater volume this year after first entering the market in late 2024.
At CES 2025, Nvidia CEO Jensen Huang introduced the concept of "Agentic AI," forecasting a multi-trillion dollar shift in work and industry. The company unveiled new AI technologies, GPUs, and partnerships, positioning Nvidia at the forefront of the AI revolution.
At the Consumer Electronics Show (CES) 2025, Nvidia CEO Jensen Huang unveiled the company's vision for "Agentic AI," heralding a new era in artificial intelligence. Huang described this as a "multi-trillion-dollar opportunity" that could revolutionize work across industries.
Agentic AI refers to intelligent agents capable of assisting with tasks across various sectors. Unlike generative AI, which creates content, these agents can work autonomously and take initiative. Huang predicted that "AI agents are the new digital workforce," suggesting that every company's IT department will essentially become an HR department for AI agents.
Nvidia introduced the Nemotron family of large language models, built on Meta's Llama model, to enable developers to create and deploy AI agents for specific business needs. Applications range from customer support to fraud detection and inventory management optimization.
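In practice, an "AI agent" of this kind is a model wrapped in a loop: it repeatedly chooses a tool, the runtime executes the call and feeds the result back, and the cycle continues until the model decides the task is done. The sketch below is schematic only; the decide() stub stands in for a real model call (for example, a Nemotron endpoint), and the customer-support tools are hypothetical placeholders.

```python
# Schematic agent loop: model picks an action, runtime executes it,
# observation is fed back, repeat until the model answers directly.
from dataclasses import dataclass, field

def lookup_order(order_id: str) -> str:
    # Hypothetical tool; a real one would hit an order-management API.
    return f"Order {order_id}: shipped, arriving Friday."

def flag_fraud(order_id: str) -> str:
    # Hypothetical tool; a real one would open a fraud-review ticket.
    return f"Order {order_id} flagged for manual fraud review."

TOOLS = {"lookup_order": lookup_order, "flag_fraud": flag_fraud}

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)

def decide(state: AgentState) -> dict:
    # Stand-in for an LLM call: a real agent would send the goal and the
    # history to the model and parse a structured tool call from its reply.
    if not state.history:
        return {"tool": "lookup_order", "args": {"order_id": "A-1001"}}
    return {"tool": None, "answer": f"Done: {state.history[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        action = decide(state)
        if action["tool"] is None:                 # model answers directly
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        state.history.append(result)               # feed observation back
    return "Stopped: step limit reached."

print(run_agent("Where is order A-1001?"))
```

The "digital workforce" framing amounts to running many such loops against business systems, with most of the engineering effort going into tool design, permissioning and evaluation rather than the loop itself.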
Nvidia showcased several new products and technologies: the GeForce RTX 50-series GPUs built on the Blackwell architecture, DLSS 4 with Multi Frame Generation, the Cosmos world foundation models for robotics, and Project Digits, a $3,000 personal AI supercomputer.
Major companies are already adopting Nvidia's new technologies: Toyota will build its next-generation vehicles on Nvidia's Drive platform, while Aurora and Continental plan to deploy driverless trucks at scale powered by Nvidia Drive.
Nvidia's stock closed at a record high ahead of the announcements, leaving the company second only to Apple in market value. The company's Blackwell chip family alone is expected to create a $100 billion market opportunity.
Huang emphasized the rapid adoption of AI agents, stating, "Starting next year, if a software engineer in your company is not assisted with an AI [agent] you are losing already fast."
While the potential of Agentic AI is immense, concerns remain about its impact on employment and the need for responsible development. Huang's vision of AI replacing roles in industries like nursing and audio engineering has raised ethical questions.
As Nvidia continues to dominate the AI chip market, competition is emerging. Companies like Amazon are developing alternatives to reduce dependence on Nvidia's products.
Nvidia's CES 2025 presentation solidified its position as a leader in the AI revolution. With its comprehensive approach to both hardware and software development, the company is poised to shape the future of AI across multiple industries. As the concept of Agentic AI evolves, its impact on the global economy and workforce remains a subject of both excitement and scrutiny.