Curated by THEOUTPOST
On Tue, 16 Jul, 12:02 AM UTC
2 Sources
[1]
Will AI "dream up" PC games in the future? AMD thinks so
This story is part of Jacob Roach's ReSpec series, covering the world of PC gaming and hardware.

Last week, I was in Los Angeles at AMD's Ryzen 9000 Tech Day, digging into the Zen 5 architecture and AMD's upcoming CPUs. But out of everything I heard about architecture, an offhand comment about how AI will "dream up" future PC games stood out to me the most.

The comment came from Sebastien Nussbaum, the computing and graphics chief architect for AMD. Nussbaum was laying out a vision of AI in the future, and among talk of AI assistants and features like Windows Recall, he described how AI could "dream up" the lighting in PC games in the future. The obvious next question: how?

Sure, we've seen some applications of AI in games, from Nvidia's DLSS to features like G-Assist and AI characters through ACE. But AI just imagining lighting in games? That sounds like a stretch. I sat down with Chris Hall, senior director of software development at AMD, to understand the winding road leading to an AI future of PC games. As it turns out, it sounds like we're a lot further down that road than I thought.

The vision

"I think you need to look at a technology like Sora," Hall started when I sat down to speak with him. "Sora wasn't even trained to think about 3D. They call it a latent capability... it somehow developed a model of what a 3D world looks like. Naturally, you look at that and say, 'surely that's what a game is going to be like in the future.'"

Hall is referring to OpenAI's video generator, Sora. It can create video from a text prompt, complete with details like proper occlusion of objects (where one object passes in front of another). It's not a video game, not even close. But when you look at what something like Sora can do, with its ability to understand and interpret a 3D world, it's not hard to let your imagination run wild.

We're not even close to something like Sora for games today, which Hall was upfront about. But he says that AMD -- and, I'm sure, the rest of the gaming industry -- is researching the technologies that could eventually lead to that point. It won't come easily, though.

"This is a huge change for the game industry," Hall said. "This requires a completely different mindset."

Hall spoke about upending the traditional graphics pipeline we know today, something that requires years of research and work to accomplish. According to him, however, that shift has already started.

"You can see some of these foundational pieces are already compatible, and there will be steps along the way," Hall told me. "We already have denoising in games today, and if you think about Stable Diffusion-type technologies, the diffusion process itself is literally denoising an image toward a final outcome. So it's almost like guided denoising."

There's a gradual shift underway in the gaming industry. There may be a future where, years down the line, AI works its way into every part of the graphics pipeline. That change, Hall said, won't come from the flashy generative AI features we've seen in dozens of tech demos. It will show up in the mundane aspects of game development that players will probably never think of.
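To see what Hall means by guided denoising, here's a minimal toy sketch in Python. It isn't a trained diffusion model -- the "denoiser" below is a stand-in that always predicts a known target image, where a real system would use a trained neural network -- but it shows the loop shape: start from pure noise and repeatedly blend in a clean estimate at a falling noise level.

```python
# Toy sketch of "guided denoising," assuming nothing about AMD's actual
# research: diffusion-style generation starts from pure noise and repeatedly
# mixes in a predicted clean image at a decreasing noise level. The
# "denoiser" here is a stand-in that always returns a known target; a real
# model would predict it with a trained neural network.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 grayscale target the "network" is imagined to predict.
target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0  # a bright square

def denoiser(x_noisy, noise_level):
    """Stand-in for a learned denoiser: predict the clean image."""
    return target

x = rng.standard_normal((32, 32))  # start from pure Gaussian noise
steps = 50
for i in reversed(range(steps)):
    noise_level = i / steps                 # anneals from ~1 down to 0
    x0_hat = denoiser(x, noise_level)       # predicted clean image
    fresh_noise = rng.standard_normal(x.shape)
    # Re-mix the clean estimate with noise at the new, lower noise level.
    x = np.sqrt(1 - noise_level) * x0_hat + np.sqrt(noise_level) * fresh_noise

print("mean abs error vs target:", float(np.abs(x - target).mean()))
```

Seen this way, the denoisers already shipping in games and the diffusion models Hall points to share the same core operation, which is the continuity he's describing.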
Loving the mundane

We've seen applications of generative AI in games already. There's Nvidia's Project G-Assist most recently, as well as AI-driven NPCs through Convai. They grab headlines, but Hall said the real innovation right now is happening in the mundane.

"AI has got all the headlines for the fancy things -- the LLMs, the Stable Diffusions, right? A lot of people today think that's all AI is. But if you look at Epic Games and the Unreal Engine, they were showing off their ML [machine learning] cloth simulation technology just a couple of months ago," Hall said. "Seems like a very mundane, uninteresting use case, but you're really saving a lot of compute by using a machine learning model."

Hall is referring to Unreal's Neural Network Engine (NNE), which moved into a beta stage in April with the release of Unreal Engine 5.4. It provides an interface for developers to run neural network models in a variety of places in games -- Epic Games lists tooling, animation, rendering, and physics as possible examples. Instead of generating a game with AI, the application here is to produce more impressive visuals more efficiently.

"That's a case where using machine learning to do something that requires a lot of compute traditionally frees up a lot of GPU cycles for what the game companies really care about, which is incredible visuals," Hall said.

We can already see that in action today in games like Alan Wake 2. The game supports Nvidia's Ray Reconstruction, an AI-powered denoiser. Denoising is already a shortcut for rendering demanding scenes in real time, and Ray Reconstruction uses AI to provide better results at real-time speeds. Hall suggested that similar applications of AI in game development are what will push a new level of visual quality.

"You're going to see a lot of those things. Like, 'oh, we can use it for this physics simulation, we can use it for that simulation,'" Hall said.

The problem right now is how to get the resources to run these models. NNE is a great framework, but it's only set up to work on the CPU or GPU. And Nvidia's DLSS features are excellent, but they require a recent, expensive Nvidia GPU with dedicated Tensor cores. At the moment, there are duct-tape solutions for getting these features working while companies like AMD, Nvidia, and Intel lay the hardware foundation to support them more efficiently.

And they are laying that foundation, make no mistake. We have Tensor cores in Nvidia GPUs, of course, and Intel has XMX cores in its graphics cards. AMD, meanwhile, has AI accelerators inside RDNA 3 GPUs like the RX 7800 XT, though they don't have a clear use at this point. There's a classic chicken-and-egg problem with AI acceleration in games right now, but it seems AMD, Nvidia, and Intel are all working to solve it.

"If I add this AI feature in, am I blocking a whole set of generations of hardware that can't support this feature? And how do I fall back on those? These are the things that seem to be making the incorporation of ML a slower process," Hall told me.

Still, the process is happening. Recently, for example, AMD introduced neural texture compression, and both Intel and Nvidia submitted similar research last year. "It's not flashy, it's not a central feature of the game, but it's a use of ML that's otherwise hard to do analytically, and the improvement for the user is the loading time and the amount of data you need to deliver to the user's PC," Hall said. "So, I think you'll see more of those."
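Neither AMD nor Hall shared code, but the broad idea behind neural texture compression can be sketched in a few lines: instead of storing every texel, fit a compact function that maps (u, v) coordinates to color and ship its coefficients. The real research fits small neural networks with specialized encodings; this toy swaps in a low-frequency Fourier basis fitted by least squares, with a procedural stand-in texture, so it stays short and dependency-free.

```python
# Conceptual sketch of the idea behind neural texture compression, not any
# vendor's published method: instead of storing every texel, fit a compact
# function that maps (u, v) coordinates to color and ship its coefficients.
# Real research fits small neural networks; this toy fits a low-frequency
# Fourier basis by least squares to keep the example self-contained.
import numpy as np

N = 64  # texture resolution (N x N)
u, v = np.meshgrid(np.linspace(0, 1, N, endpoint=False),
                   np.linspace(0, 1, N, endpoint=False))

# Procedural grayscale "texture" standing in for real asset data.
texture = 0.5 + 0.25 * np.sin(4 * np.pi * u) * np.cos(2 * np.pi * v)

def basis_1d(t, K):
    """Constant term plus the first K cosine/sine harmonics of t in [0, 1)."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * t))
        cols.append(np.sin(2 * np.pi * k * t))
    return np.stack(cols, axis=1)

Bu = basis_1d(u.ravel(), 4)            # (N*N, 9)
Bv = basis_1d(v.ravel(), 4)            # (N*N, 9)
# Tensor-product features: every u-harmonic times every v-harmonic.
phi = np.einsum('ni,nj->nij', Bu, Bv).reshape(N * N, -1)  # (N*N, 81)

# "Compression": solve for the coefficients by least squares.
coeffs, *_ = np.linalg.lstsq(phi, texture.ravel(), rcond=None)

# "Decompression": evaluate the fitted function at every texel.
recon = (phi @ coeffs).reshape(N, N)

print(f"stored: {coeffs.size} coefficients vs {texture.size} texels "
      f"({texture.size / coeffs.size:.0f}x smaller)")
print(f"mean abs error: {np.abs(recon - texture).mean():.6f}")
```

Here 81 coefficients stand in for 4,096 texels, and "decompression" is just evaluating the fitted function -- the loading-time and download-size win Hall describes. The real systems swap the hand-picked basis for a small trained network so they can handle arbitrary textures.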
Something that solves problems

When you hear things like AI dreaming up aspects of a game, it all starts to feel a little dystopian. If you keep up with tech news, it's easy to feel like companies are shoving AI into everything they possibly can -- and with a box that claims to let you smell video games through AI, it's hard not to get that picture.

Hall's view of AI in PC games is more grounded. It's not about shoving AI in places it doesn't belong. It's about solving problems.

"AAA and even AA game development is not a six-month process. It's planned out a long time in advance. You're always trying to bite off more than you can chew, so you're always under the gun on schedule. Adding a new, extra complex thing on top, it really needs to solve a problem that exists," Hall told me.

Game development is not only a long process but also an expensive one. It can cost hundreds of millions of dollars to produce a AAA game, and even with companies like Square Enix haphazardly putting AI in a game demo, no game developer is dropping that kind of money on a feature that could ruin a game. We've also seen companies like Ubisoft demo the capabilities of AI NPCs in games, but those haven't shown up in an actual game. And they likely won't for quite some time.

"I think the reality of [AI NPCs] is that... you know, games are very carefully scripted experiences, and when you have an essentially unbounded experience with this character in the game, you've got to put a lot of effort into making sure that doesn't break the scripted elements of your game," Hall said. "So NPCs are nice to have, but it's not going to change how many games sell. Whereas a new graphics feature where you've offloaded more from the GPU... well, that's interesting."

I have no delusions about the reality of game development. Large companies spending hundreds of millions of dollars will find a way to shortcut that process by reducing headcount and relying on AI to pick up the slack. I just don't expect it to work. There's a lot of potential for AI in PC games, but that potential comes from achieving the next major leap in visual quality, not from reducing the experiences we love to something a machine can understand.

"It will take some deep pockets on the part of game publishers and some brave souls to make that leap, but it's inevitable because this technology is inevitable," Hall said.

Pushing graphics forward

AI sets up an interesting dynamic when it comes to creation. It either allows companies to do the same work with fewer people, or to do more work with the same number of people. The hope is that AI can push PC gaming forward. That next big leap is what companies like AMD are looking toward, Hall said.

"We were all enjoying Moore's Law for a long time, but that's sort of tailed off. And now, every square millimeter of silicon is very expensive, and we can't afford to keep doubling," Hall told me. "We can, we can build those chips, we know how to build them, but they become more expensive. Whereas ML is kind of breaking that cost per effect by allowing us to use approximate computing to deliver a result that would otherwise take something perhaps four or five times more powerful."
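Hall's cost-per-effect point is easy to demonstrate in miniature. The sketch below uses an entirely invented workload: a brute-force average stands in for an expensive computation like a many-sample lighting integral, and a polynomial fitted to its output "offline" stands in for the learned approximation. It illustrates approximate computing generally, not AMD's methods.

```python
# Minimal illustration of trading exactness for cost, in the spirit of
# Hall's "approximate computing" remark. The expensive function is an
# invented stand-in (a brute-force average over many jittered samples);
# the cheap replacement is a polynomial fitted once to its output.
import time
import numpy as np

def expensive_shade(x, samples=2000):
    """Brute-force estimate: average many jittered evaluations."""
    rng = np.random.default_rng(2)
    jitter = rng.uniform(-0.05, 0.05, size=(samples, x.size))
    return np.mean(np.exp(-np.sin(x + jitter) ** 2), axis=0)

xs = np.linspace(0.0, 2.0 * np.pi, 512)

t0 = time.perf_counter()
exact = expensive_shade(xs)                      # the "ground truth" path
t_exact = time.perf_counter() - t0

# "Training," done once offline: fit a low-degree polynomial to the output.
cheap = np.polynomial.Polynomial.fit(xs, exact, deg=15)

t0 = time.perf_counter()
approx = cheap(xs)                               # the cheap runtime path
t_approx = time.perf_counter() - t0

print(f"exact:  {t_exact * 1e3:.2f} ms")
print(f"approx: {t_approx * 1e3:.2f} ms (~{t_exact / t_approx:.0f}x faster)")
print(f"max abs error: {np.abs(exact - approx).max():.5f}")
```

The fitted version gives up a sliver of accuracy for an order-of-magnitude cost reduction. Scale that trade across denoising, physics, and texture decompression, and you arrive at Hall's claim of results that would otherwise need hardware four or five times more powerful.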
I was caught up on the idea of AI dreaming up the lighting in my games. It felt wrong, like another case of AI shoehorned into an experience that's supposed to be crafted by a person. My conversation with Hall brought a different perspective. The idea of pushing the medium forward is exciting, and a tool like AI could give the industry a major leap without resorting to GPUs that cost thousands of dollars and consume thousands of watts of power.

Hall summed up the process nicely by looking back. "There will be a before and an after, and I suppose it will be similar to the early days of 3D. Remember when Nintendo shipped the [Super Nintendo] Star Fox cartridge with a little mini triangle rasterizer in it so that it could do 3D? They had to add some silicon to their existing console to deliver a new experience."

As someone who covers the tech industry every day, I find myself getting cynical about the tech. Like any powerful technology, AI comes with plenty of negative implications. But that doesn't sound like the work going on behind the scenes right now. It sounds like small, localized applications meant to provide a better experience for players and (hopefully) developers as well. And, over time, maybe AI can dream up something special.

It won't be free, though. "What gets us from here to that endpoint? Honestly, it's a lot of trial and error and a lot of hard work. It's truly a research project. It's just that we know now that it will be possible," Hall said.
[2]
Nvidia Doubles Down On AI And Taiwan At Computex 2024
At Computex 2024 last month, Nvidia made a big splash with the first keynote of the show, held offsite at the athletics stadium of National Taiwan University. Nvidia, like many other vendors at the show, doubled down on its position in AI, both in the datacenter and inside AI PCs. While much of what CEO Jensen Huang talked about during the opening keynote focused on datacenter and cloud AI, there was still a plethora of other announcements covering the company's PC gaming business and injecting more AI into gaming. These new AI PC use cases also seemed to focus heavily on a hybrid AI approach, leveraging both cloud and local AI compute to deliver a more advanced gaming experience.

During the keynote, Nvidia didn't introduce many new concepts that it hadn't already covered during its presentations at CES or GTC earlier this year. It did update some of its power targets for GB200 NVL72 racks, now claiming a more efficient 100 kilowatts per rack instead of the previously quoted 120 kilowatts. Huang also showed what an NVLink spine looks like and struggled to carry it on stage to demonstrate the size and weight of the interconnect.

Nvidia did give more visibility into its future roadmap, all the way out to 2027. The company has updated its cadence of GPU launches to an annual schedule, an acceleration from its old 18- to 24-month cadence. This means we should get Blackwell Ultra in 2025, the Rubin GPU and Vera CPU in 2026, and Rubin Ultra in 2027. Do keep in mind that Vera will likely be paired with Rubin, much like Grace is paired with Blackwell and Hopper. (Fun fact: just as the Grace and Hopper components were named for the computer scientist Admiral Grace Hopper, the Vera and Rubin components are named for the astronomer Dr. Vera Rubin, who did pioneering studies of galactic rotation.) So we can expect VR to be the future nomenclature for those platforms, likely in VR100 and VR200 configurations.

While Nvidia has been dominant in both enterprise and cloud AI environments, it has struggled to communicate its capabilities on the PC. This is especially true with the advent of the AI PC. Ironically, Nvidia was one of the first companies to bring AI capabilities to the PC via its GPUs with technologies such as DLSS, which has been pivotal in enabling real-time ray tracing. That aside, Nvidia has been making lots of product announcements this year to beef up its AI PC story, including the launch of ChatRTX.

To further improve its on-device capabilities, Nvidia took what was once an April Fools' joke and turned it into a real beta with Project G-Assist, a GeForce AI assistant. I got to experience this firsthand in the Nvidia suites at Computex and was really impressed with what it could do and how much it enhanced the gaming experience. Capabilities ranged from helping adjust graphics settings to walking the user through certain game intricacies and answering questions about objects in the in-game line of sight.

Nvidia also demonstrated the next generation of its ACE digital human platform, which is powered for on-device use by Nvidia NIMs that add more intelligence and interactivity to NPCs in open-world and RPG titles. The ACE demos were a great way for the company to show off its hybrid AI capabilities for improving performance and latency. At Computex, the company also announced that it is working with Microsoft to deliver Copilot+ PC specs in a new category of laptops using RTX 4070 GPUs paired with other vendors' SoCs.
My understanding is that most of those will be SoCs with dedicated NPUs, such as the AMD Ryzen AI 300 series or perhaps Intel's Lunar Lake, but it remains unclear when these configurations will debut. To further back up its AI PC story, Nvidia also noted that Windows Copilot Runtime will add GPU acceleration for local PC SLMs. Paired with Nvidia's RTX AI Toolkit, this should make access to Nvidia's GPUs much easier for third-party developers building Windows AI applications. AI PC darlings Adobe, Blackmagic Design, and Topaz are already on board to take advantage of the RTX AI Toolkit for their apps, and I'm excited to see how these apps will leverage both GPU and NPU optimizations to maximize performance.

Nvidia has been working hard this year to strengthen its position on the client side of the AI equation, especially the AI PC. Meanwhile, its position in AI for the cloud and the datacenter is dominant, and with an accelerated annual cadence, it's quite clear that the company will be difficult to catch. What I want to see from Nvidia in the future is a better end-to-end AI story that showcases its strengths as hybrid AI becomes a more prevalent model for AI consumption. Nvidia already released its RTX 40-series Super family of GPUs this year, but there are already rumors about the next-generation 5000 series, which will likely lean even further into AI capabilities such as frame generation and other rendering techniques. As touched on above, Nvidia's role in Copilot+ PCs could also evolve with time if current rumors are any indication; I'll be looking especially closely for products in that vein early next year.
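Nvidia hasn't detailed how its hybrid AI routing works, but the basic shape of such a policy is easy to sketch. Everything below -- the request fields, thresholds, and backend choices -- is invented for illustration: latency-critical or small jobs stay on the local GPU or NPU, while heavyweight jobs with latency headroom go to the cloud.

```python
# Hypothetical sketch of a hybrid AI dispatch policy of the kind Nvidia's
# demos imply. The fields, thresholds, and backend names are all invented;
# Nvidia has not published this as an API.
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    task: str            # e.g. "npc_dialogue", "settings_assist"
    est_tokens: int      # rough size of the job
    max_latency_ms: int  # how quickly gameplay needs the answer

LOCAL_TOKEN_BUDGET = 512      # assumed capacity of the on-device SLM
CLOUD_ROUND_TRIP_MS = 150     # assumed network overhead to the cloud

def route(req: InferenceRequest) -> str:
    """Pick a backend: prefer local for small or latency-critical work."""
    if req.max_latency_ms < CLOUD_ROUND_TRIP_MS:
        return "local"                       # cloud can't answer in time
    if req.est_tokens <= LOCAL_TOKEN_BUDGET:
        return "local"                       # small enough for the SLM
    return "cloud"                           # big job, latency headroom

# In-game examples: a quick HUD hint stays local, long NPC lore goes to cloud.
for req in [
    InferenceRequest("settings_assist", est_tokens=64, max_latency_ms=50),
    InferenceRequest("npc_dialogue", est_tokens=2048, max_latency_ms=800),
]:
    print(req.task, "->", route(req))
```

A production system would also weigh battery state, connectivity, model availability, and privacy, but the split this expresses -- small and fast locally, big and slow remotely -- is the hybrid model the ACE and G-Assist demos point toward.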
AI is transforming the gaming industry, from game creation to hardware advancements. This story explores how AI is being used to develop PC games and recaps Nvidia's latest AI-focused innovations.
The gaming industry is witnessing a significant shift as artificial intelligence (AI) takes a growing role in game development. AMD's architects predict that AI could eventually "dream up" elements of PC games such as lighting, potentially transforming the creative process for game designers and developers 1. In the nearer term, the same techniques promise to speed up the mundane, compute-heavy parts of game creation, such as cloth simulation, denoising, and texture compression, reducing the time and resources games require.
At the forefront of this AI revolution in gaming is Nvidia, a company known for its graphics processing units (GPUs). At Computex 2024, Nvidia doubled down on its commitment to AI and strengthened its ties with Taiwan, a crucial hub for semiconductor manufacturing 2. The company unveiled new AI-focused hardware and software solutions, further solidifying its position as a leader in the AI-driven gaming landscape.
The integration of AI in game development is not limited to future generative visions. Machine learning models are already being applied to compress textures, accelerate physics and cloth simulation, and drive non-player characters (NPCs) with more sophisticated behaviors. This level of AI involvement is expected to lead to more immersive and responsive gaming experiences, as games adapt in real time to player actions and preferences.
Nvidia's latest innovations showcase the increasing importance of specialized hardware in AI-driven gaming. The company's new GPUs are designed to handle the complex computations required for AI-powered game features, such as real-time ray tracing and AI-enhanced graphics upscaling. These advancements are set to push the boundaries of visual fidelity and performance in PC gaming.
While the potential of AI in game development is immense, it also raises questions about the role of human creativity in the process. Game designers and developers are now exploring ways to collaborate with AI tools, using them as aids to enhance their creative vision rather than replace it entirely. This hybrid approach aims to combine the efficiency of AI with the nuanced understanding and artistic touch that human creators bring to game development.
As AI continues to evolve, its impact on the gaming industry is expected to deepen. From procedurally generated content to AI-driven narrative experiences, the future of gaming looks increasingly intelligent and adaptive. The collaboration between human developers and AI tools is likely to result in more diverse, complex, and engaging games that cater to a wide range of player preferences and play styles.