On Wed, 8 Jan, 4:03 PM UTC
2 Sources
[1]
Nvidia CEO: PC games will never be entirely rendered by AI
A day after launching the most hotly anticipated product in the PC world, the Nvidia GeForce 50-series family of graphics cards, Nvidia chief executive Jensen Huang appeared on stage at CES to answer reporters' questions. A key one: in a world where AI is increasingly used to generate or interpolate frames, is the end result a world in which PC graphics are entirely AI-generated? No, Huang replied.
There's a reason we asked Huang the question. Nvidia says that while DLSS 3 could inject an AI-generated frame between every GPU-rendered frame, DLSS 4 can infer three full frames from a single traditionally rendered frame, as Brad Chacos noted in our earlier report on the GeForce 50-series reveal.
A day earlier, rival AMD was asked essentially the same question. "Can I tell you that in the future, every pixel is going to be ML [machine learning]-generated? Yes, absolutely. It will be in the future," replied Frank Azor, AMD's chief architect of gaming solutions.
Huang disagreed. "No," he replied to the question, asked by PCWorld's Adam Patrick Murray. "The reason for that is because, just remember, when ChatGPT first came out, we said, 'Oh, now let's just generate the book.' But nobody currently expects that.
"And the reason for that is because you need to give it credit," Huang continued. "You need to give it -- it's called condition. We now condition the chat or the prompts with context. Before you can answer a question, you have to understand the process. The context could be a PDF, the context could be a web search. The context could be you told it exactly what the context is, right?
"And so the same thing goes with video games. You have to give it context. And the context for video games has to not only be story-wise relevant, but it has to be world and spatially relevant. And so the way you condition, the way you give it context, is you give it some early pieces of geometry, or early pieces of textures, and it could generate, it could up-res the rest."
In ChatGPT, this kind of context is supplied by retrieval-augmented generation [RAG], the retrieved material that guides the textual output. "In the future, 3D graphics would be 3D grounded condition generation," Huang said.
In DLSS 4, Nvidia's GPU rasterization engine renders only one out of every four forward-looking frames, Huang said. "And so out of four frames, 33 million pixels, we only rendered two [million]. Isn't that a miracle?" The key, Huang said, is that they have to be rendered precisely: "precisely the right ones, and from that conditioning we can generate the others."
"The same thing that I just described is going to happen in video games in the future: it will happen to not just the pixels that we render, but the geometry that we render, the animation that we render, you know, and the hair we render in future video games."
Huang apologized if his explanation was poor, but concluded that there is still, and will always be, a role for artists and rendering in video games. "But it took that long for everybody to now realize that generative AI is really the future, but you need to condition, you need to ground with the author, the artists, [and the] intention."
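Huang's pixel arithmetic is easy to check if you assume a 4K output and a quarter-resolution (1080p) internal render for the one traditionally rendered frame. Those resolution assumptions are ours, not something Huang stated, but they reproduce his figures:

```python
# Back-of-the-envelope check on Huang's "2 million of 33 million pixels" claim.
# Assumption (ours): 4K output, with the one traditionally rendered frame drawn
# at 1080p internally (quarter resolution) before DLSS upscales it to 4K.

OUTPUT_W, OUTPUT_H = 3840, 2160   # 4K output resolution
RENDER_W, RENDER_H = 1920, 1080   # assumed internal render resolution
FRAMES = 4                        # 1 rendered frame + 3 AI-generated frames

total_pixels = OUTPUT_W * OUTPUT_H * FRAMES   # pixels displayed across 4 frames
rendered_pixels = RENDER_W * RENDER_H         # pixels the GPU actually rasterizes

print(f"displayed: {total_pixels:,}")         # 33,177,600  (~33 million)
print(f"rendered:  {rendered_pixels:,}")      # 2,073,600   (~2 million)
print(f"fraction rendered: {rendered_pixels / total_pixels:.1%}")  # ~6.3%
```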
[2]
Will there ever come a point with AI where there are no traditionally rendered frames in games? Perhaps surprisingly, Jen-Hsun says 'no'
Then goes on to speak rather poetically about inspirational pixels.
The new DLSS 4 Multi Frame Generation feature of the new RTX Blackwell cards has created a situation where one frame can be rendered using traditional GPU computation, while the subsequent three frames can be generated entirely by AI. That's a hell of an imbalance, so does one of the people responsible for making this AI voodoo a reality think we'll get to a point where there are no traditionally rendered frames? Jen-Hsun Huang, Nvidia's CEO, and one of the biggest proponents of AI in just about damned near everything, says: no.
During a Q&A session today at CES 2025, Jen-Hsun was asked whether we're likely to get purely AI-generated game frames as the entirety of a game's pipeline, and he was unequivocal in his assertion to the contrary, stating that it's vital for AI to be given grounding, to be given context, in order to build out its world. In other words, AI still needs something to build from.
In gaming terms, Huang suggests that it works in the same way as we give context to ChatGPT. "The context could be a PDF, it could be a web search... and the context in video games has to not only be relevant story-wise, but it has to be world and spatially relevant. And the way you condition, the way you give it context, is you give it early pieces of geometry, or early pieces of textures it could up-res from."
He then brings it back around to DLSS 4 and Multi Frame Generation, and the example of one rendered 4K frame and three further AI-generated 4K game frames. "Out of 33 million pixels," says Huang, "we render two [million]. Isn't that a miracle? We literally rendered two and we generated 31. The reason why that's a big deal is because those two million pixels have to be rendered precisely, and from that conditioning we can generate the other 31.
"Those two million pixels can be rendered beautifully using tons of computation, because the computing that we would have applied to 33 million pixels we now channel it directly at two. And so those two million pixels are incredibly beautiful and they inspire and form the other 31."
And that's kinda the most lovely way I've ever heard someone speak about upscaling and frame generation: as inspiration for AI-generated pixels. Aww.
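For readers who want the shape of that imbalance spelled out, here is a purely illustrative sketch of a one-rendered-to-three-generated frame cadence. Every name in it is a hypothetical stand-in; the real Multi Frame Generation pipeline lives in Nvidia's driver and dedicated hardware, not in game code like this:

```python
# Illustrative sketch of the 1-rendered : 3-generated frame cadence described
# for DLSS 4 Multi Frame Generation. All names are hypothetical placeholders.

def render_traditional_frame(frame_id: int) -> str:
    """Stand-in for the GPU rasterizing/ray tracing one full frame."""
    return f"rendered-{frame_id}"

def generate_frames(key_frame: str, count: int) -> list[str]:
    """Stand-in for the AI model inferring `count` frames, conditioned on the
    one precisely rendered key frame (plus motion data, in the real thing)."""
    return [f"generated-from-{key_frame}-{i}" for i in range(1, count + 1)]

def present(frame: str) -> None:
    """Stand-in for handing a finished frame to the display."""
    print("display:", frame)

# Each group of four displayed frames contains one rendered frame and three
# generated ones -- the imbalance the article describes.
for group in range(2):
    key = render_traditional_frame(group)
    for frame in [key, *generate_frames(key, count=3)]:
        present(frame)
```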
Nvidia's CEO Jensen Huang discusses the future of AI in game rendering, emphasizing the continued importance of traditional rendering techniques alongside AI-generated content.
In a recent CES Q&A session, Nvidia CEO Jensen Huang made a bold statement about the future of video game graphics, asserting that games will never be entirely rendered using AI [1]. This declaration comes in the wake of Nvidia's launch of the GeForce 50-series graphics cards, which showcase advanced AI capabilities in frame generation.
Nvidia's latest DLSS 4 technology demonstrates the growing influence of AI in graphics processing. The system can now generate three full frames for every one traditionally rendered frame, significantly reducing the workload on the GPU [1]. This advancement has led some to speculate about a future where game graphics are entirely AI-generated.
Interestingly, Huang's perspective contrasts with that of AMD's Frank Azor, who had previously suggested that in the future, every pixel could be machine learning-generated [1]. This divergence in opinion highlights the ongoing debate about the extent of AI's role in future gaming technologies.
Huang emphasizes the critical role of context and conditioning in AI-generated content. Drawing parallels with ChatGPT and other language models, he explains that AI needs a foundation to build upon:
"You need to give it credit," Huang stated. "It's called condition. We now condition the chat or the prompts with context. Before you can answer a question, you have to understand the process." 1
In the realm of video games, this context includes not only story elements but also spatial and world-relevant information. Huang suggests that early pieces of geometry or textures serve as the necessary grounding for AI to generate the rest 2.
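Huang's analogy maps onto how retrieval-augmented generation works in text systems: the model is conditioned on retrieved material rather than asked to invent an answer from nothing. A minimal sketch of that pattern follows, using hypothetical stand-in functions rather than any real model or retrieval API:

```python
# Minimal sketch of the RAG-style conditioning Huang draws an analogy to.
# Hypothetical stand-ins throughout; no real LLM or retrieval API is assumed.

def retrieve_context(query: str, documents: list[str]) -> list[str]:
    """Stand-in for retrieval: pick documents relevant to the query
    (a PDF, a web search result, etc., in Huang's examples)."""
    words = query.lower().split()
    return [doc for doc in documents if any(w in doc.lower() for w in words)]

def generate(prompt: str) -> str:
    """Stand-in for the language model; a real system would call an LLM here."""
    return f"<answer conditioned on: {prompt!r}>"

documents = [
    "DLSS 4 generates three frames from one rendered frame.",
    "Blackwell is Nvidia's new GPU architecture.",
]
query = "How many frames does DLSS 4 generate?"

# The model is *conditioned* on retrieved context, just as a game would
# condition frame generation on early pieces of geometry or textures.
context = retrieve_context(query, documents)
print(generate("\n".join(context) + "\n\nQuestion: " + query))
```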
Highlighting the capabilities of DLSS 4, Huang describes a scenario where, out of four frames (33 million pixels), only two million pixels are traditionally rendered. He refers to this as a "miracle," emphasizing the precision required in rendering these key pixels [2].
"Those two million pixels can be rendered beautifully using tons of computation because the computing that we would have applied to 33 million pixels we now channel it directly at two. And so those two million pixels are incredibly beautiful and they inspire and form the other 31," Huang explained [2].
While acknowledging the growing role of AI, Huang maintains that there will always be a place for artists and traditional rendering in video game development. He suggests that future games will see AI applied not just to pixel rendering, but also to geometry, animation, and other aspects of game design [1].
This balanced approach indicates a future where AI and traditional rendering techniques coexist, each playing crucial roles in creating immersive gaming experiences. As the technology continues to evolve, the gaming industry will likely see further integration of AI, but always grounded in the vision and intention of human creators.
AI is transforming the gaming industry, from game creation to hardware advancements. This story explores how AI is being used to develop PC games and Nvidia's latest AI-focused innovations.
2 Sources
PlayStation co-CEO Hermen Hulst discusses the potential impact of AI on video game development, emphasizing the importance of balancing AI innovation with human creativity.
7 Sources
NVIDIA introduces the RTX Kit, a suite of neural rendering technologies set to revolutionize gaming graphics. The kit includes AI-powered shaders, texture compression, and advanced rendering techniques, promising significant improvements in visual quality and performance.
2 Sources
Nvidia unveils AI-powered NPCs at CES 2025, sparking debate about the future of gaming and the role of human creativity in game development.
3 Sources
Nvidia introduces its new RTX 50 series graphics cards, featuring the Blackwell architecture and advanced AI capabilities, promising significant performance improvements for gaming and content creation.
3 Sources