110 Sources
[1]
Nvidia CEO tries to explain why DLSS 5 isn't just "AI slop"
Last week, Nvidia's public reveal of DLSS 5 -- and its "generative AI" enhanced glow-ups of gaming scenes -- drew widespread condemnation from the gaming community. In a podcast published Monday, though, Nvidia CEO Jensen Huang tried to differentiate the technology's optional, artist-guided graphical enhancements from the "AI slop" he says he's not a fan of.

As part of a nearly two-hour-long interview with the Lex Fridman Podcast, Huang was asked to explain the "drama" around DLSS 5 and "the gamers online [that] were concerned that it makes games look like AI slop." Huang responded that he "could see where they're coming from, because I don't love AI slop myself... all of the AI-generated content increasingly looks similar and they're all beautiful, so... I'm empathetic towards what they're thinking." At the same time, Huang said DLSS 5 is decidedly separate from that kind of "slop," because it "is 3D conditioned, 3D guided." The artists behind a game are still the ones creating the in-game structural geometry and textures that form the "ground truth structure" DLSS 5 works from, Huang said. "And so every single frame, it enhances but it doesn't change anything," he said.

For the most part, though, gamers haven't been worried about DLSS 5 creating trippy new content from the ground up like some generative AI world models. Instead, the worry is that DLSS 5's visual "enhancements" could end up smoothing out many disparate games toward a single, flattened, homogenized photo-realism standard. That's a misunderstanding of how DLSS 5 works, Huang said. It's not a technology where a game ships in one state and "then we're gonna post-process it," he said. Instead, DLSS 5 "is integrated with the artist, and so it's about giving the artist the tool of AI, the tool of generative AI." Because DLSS 5 is "open," Huang said, artists can train the model for the specific kind of look they want.
In the future, Huang said, artists will also be able to prompt DLSS 5 with examples or a description of a desired look -- "I want it to be a toon shader," for instance. And if visual artists want to use DLSS 5's models "to generate the opposite of photoreal, yeah, it'll do that too," he said.

The interview follows similar comments Huang gave in an interview with Tom's Hardware last week, when he said "it's not post-processing at the frame level, it's generative control at the geometry level." But any "confusion" on this point from gamers is entirely understandable, since previous versions of DLSS were explicitly sold as relatively turnkey post-processing to improve resolutions and/or frame rates by generating new frames that look like the ones rendered by the game itself. If Nvidia wanted to introduce a new artist-facing tool for using generative AI to create customizable shader-style effects, it could have done so without overturning the existing and well-known meaning and branding of DLSS.

Elsewhere in the new podcast, Huang threw in an aside that if artists don't like the look of DLSS 5's enhancements, "they could decide not to use it, you know?" Whether or not individual artists want to use DLSS 5, though, Nvidia's announced partnerships with major publishers including Bethesda, Capcom, NetEase, NCSoft, Tencent, Ubisoft, and Warner Bros. Games suggest that the companies behind many big-budget games will be pushing the technology's use in their projects, at least for the time being. And while gamers will also be able to disable DLSS 5's "enhancements" if they don't like the way they look, that's not exactly the affirmative defense of the technology that generative AI boosters may have been hoping for from Nvidia.

We still have months to go until Nvidia's planned debut of DLSS 5 in any actual games. We expect Huang and others at Nvidia will be spending a good deal of that time trying to explain and justify the new technology to a skeptical gaming public.
[2]
Gamers react with overwhelming disgust to DLSS 5's generative AI glow-ups
Since deep-learning super-sampling's (DLSS) launch on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday's tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by "generative AI." The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.

While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 -- which it plans to launch in the autumn -- "a real-time neural rendering model" that can "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." Nvidia CEO Jensen Huang said explicitly that the technology melds "generative AI" with "handcrafted rendering" for "a dramatic leap in visual realism while preserving the control artists need for creative expression." Unlike existing generative video models, which Nvidia notes are "difficult to precisely control and often lack predictability," DLSS 5 uses a game's internal color and motion vectors "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame." That underlying game data helps the system "understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast," the company says.

"When you absolutely, positively, don't want any art direction..."

You can see how DLSS 5 reinterprets that frame data for yourself in both Nvidia's announcement video and in a detailed Digital Foundry breakdown (which notes that the demo currently makes use of two RTX 5090s, with one completely dedicated to DLSS 5).
But while Digital Foundry described the "transformational lighting" effects as "astonishing" numerous times in its write-up, the reaction from the rest of the gaming world has been overwhelmingly negative so far. Many of the reactions have focused on how DLSS 5 turns in-game faces into overly detailed, uncanny valley versions of the original models. Reactions have compared the effect to air-brushed pornography, "yassified, looks-maxed freaks," or those uncanny, unavoidable Evony ads. Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look.

Some game developers have leapt on the "artistic intent" angle, too. Thomas Was Alone developer Mike Bithell added that the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience." And Gunfire Games Senior Concept Artist Jeff Talbot said, "in every shot the art direction was taken away for the senseless addition of 'details.' Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter." New Blood Interactive founder and CEO Dave Oshry added that DLSS 5's "AI dogshit is actually depressing" and lamented that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."

Damage control

By way of damage control, Nvidia took to the comments on its YouTube reveal trailer (which are filled with thousands of negative takes) to stress that DLSS 5 "is not a filter" and that "game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic." That includes the ability to tweak intensity and color grading or turn the masking off entirely in "places where the effect shouldn't be applied."
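The controls Nvidia describes -- an overall intensity, a color-grading tweak, and masks that switch the effect off in certain places -- can be pictured as a small settings object. This is an illustrative sketch only: `Dlss5Settings`, its field names, and the region strings are all hypothetical stand-ins, not anything from Nvidia's actual SDK.

```python
# Hypothetical model of the developer-facing controls Nvidia describes:
# a global intensity, a color-grading toggle, and region masks that
# disable the effect entirely. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class Dlss5Settings:
    intensity: float = 1.0       # 0.0 disables the enhancement everywhere
    color_grading: bool = True   # whether the effect may re-grade colors
    masked_regions: set = field(default_factory=set)  # e.g. {"ui"}

    def effect_strength(self, region: str) -> float:
        """Masked regions get no enhancement; everything else gets
        the developer-chosen intensity."""
        return 0.0 if region in self.masked_regions else self.intensity


# A developer dials the effect to half strength and masks it off the UI.
cfg = Dlss5Settings(intensity=0.5, masked_regions={"ui"})
assert cfg.effect_strength("ui") == 0.0
assert cfg.effect_strength("world") == 0.5
```

The point of the sketch is the shape of the claim: the knob sits with the developer, per region, rather than being a single on/off switch applied uniformly after the fact.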
DLSS 5 OFF // DLSS 5 ON [image or embed] -- Tuba Zef (@tubazef.com) March 16, 2026 at 3:59 PM

Bethesda, one of the many major publishers Nvidia named as an early DLSS 5 partner, piped in on social media to say that these videos are "a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists' control, and totally optional for players."

From a public image perspective, though, the damage may already be done. DLSS 5 has quickly become a meme format in its own right, with Internet commenters using "DLSS 5 On" as visual shorthand for "overly cleaned up" or "mangled beyond recognition." We're not sure how you come back from that kind of instant infamy, but Nvidia will have until the fall to find out.
[3]
Gamers Hate Nvidia's DLSS 5. Developers Aren't Crazy About It Either
Nvidia's new AI upscaling gaming technology struck gamers as uncanny and off-putting. Developers don't seem to like it either, but it could be "the default" in a few years. Nvidia announced a new version of its DLSS AI upscaling technology for its graphics cards earlier this week at its GPU Technology Conference (GTC), which it calls the Super Bowl of AI. But unlike previous versions of DLSS that used AI to improve frame rates in video games, DLSS 5 has a much more ambitious calling: using generative AI to make character faces in games look more realistic and detailed. The demonstration received sharp blowback on social media, with many finding the effect off-putting, reacting with outright disgust, and calling it yet another example of AI slop.

DLSS, or deep-learning super-sampling, is a feature Nvidia introduced on its graphics cards in 2018. The primary use has been to improve frame rates in video games by rendering games at a lower resolution, then using AI to upscale the quality. More recent versions of DLSS insert AI-generated frames in between actual rendered frames. These techniques use less computing power than rendering the full frames, allowing for better gaming performance without taxing your PC's hardware while maintaining visual fidelity. The feature can be turned on or off.

"From a technical standpoint, it's quite an achievement," Kevin Bates, CEO and creator of the open source retro gaming handheld Arduboy, wrote in a message to WIRED. "I would have expected a cloud-based rendering service to provide it. The fact they expect to distill it down to what can run on a single [graphics] card later this year is insane." But DLSS 5 has crossed a generative-AI Rubicon. Instead of just being a tool Nvidia provides developers, it manifests as actual visual changes made without their consent. While you can still turn it on or off in your video games, the technology has some developers -- not just gamers -- worried.
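The render-low-then-upscale pipeline described above can be pictured in a few lines. This is a hedged, toy sketch only: `render_low_res` and `upscale` are hypothetical stand-ins, and a nearest-neighbor resize takes the place of DLSS's trained neural network, which runs on the GPU's tensor cores.

```python
# Toy sketch of the DLSS-style pipeline: render fewer pixels, then
# reconstruct a full-resolution image. A nearest-neighbor resize stands
# in for the learned model; every name here is illustrative.

def render_low_res(width, height):
    """Stand-in renderer: the expensive step, cost scales with width*height."""
    return [[(x + y) % 256 for x in range(width)] for y in range(height)]

def upscale(frame, factor):
    """Nearest-neighbor upscale; DLSS would infer the missing detail instead."""
    return [
        [frame[y // factor][x // factor] for x in range(len(frame[0]) * factor)]
        for y in range(len(frame) * factor)
    ]

# Render a small frame and display it at twice the resolution -- the same
# idea, in miniature, as rendering at 960x540 and presenting at 1920x1080.
low = render_low_res(8, 6)
high = upscale(low, 2)
assert len(high) == 12 and len(high[0]) == 16
```

The performance win comes from the first step: here only 48 pixels are "rendered" while 192 are displayed, and the reconstruction step is far cheaper per pixel than the renderer it replaces.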
Nvidia showed off a demo of the tech on games like Capcom's Resident Evil Requiem, Ubisoft's Assassin's Creed, and Bethesda's Starfield. The company says it's meant to improve the graphics and generate photorealistic details and lighting. The demo seemed to largely improve the lighting, which detractors compared to the glow of a ring light just out of frame. Faces became far more detailed, even introducing new facial features. It was also criticized on social media for oversexualizing characters, with people calling the look "yassified" or "porn faces" and comparing the effect to Instagram or Snapchat's glamour filters, which smooth out imperfections in a person's image. Gamers did not approve. The Verge called it motion smoothing, but worse.

The tech also has other issues, like introducing unexpected artifacts in real time. You can see some of those problems in the official demo video itself. In a scene in a FIFA game where a soccer ball is being kicked into a net, the ball has weird artifacts on it with DLSS 5 on, making it look like a piece of the net is in the foreground of the ball before it has even gone in the goal. (Pause the video at 59 seconds.) Some characters' faces, like the female protagonist in Resident Evil Requiem, have slight but noticeably different features: larger eyes, fuller lips, and a completely different nose.

"It devalues an artist's creativity and intent on a basic level," says James Brady, a video game artist and designer who has worked on games like Call of Duty: Modern Warfare 3. "All this takes away from the artist's original design intent on the character and its shape language, with what pretty much functions on a surface level as a 'Snapchat filter.'"
[4]
Nvidia Teases DLSS 5 and Gamers Aren't Impressed
Nvidia opened its GTC conference with a keynote by CEO Jensen Huang, revealing the company's latest tech. Among the raft of the company's AI developments, gamers were treated to the imminent version of its AI-powered upscaling and optimization technology, DLSS (Deep Learning Super Sampling), touted as the "biggest breakthrough in computer graphics." Nvidia published a video illustrating how DLSS 5 can enhance graphics in Resident Evil Requiem, Starfield and other games, showing before-and-after takes.

But gamers weren't thrilled. In fact, the response to DLSS 5 resembles more of a collective backlash, replete with memes, ridicule and outrage. Gamers were quick to point out that DLSS 5 transformed the original graphics into something vastly different. Some called the visuals "AI slop" because they look like "yassified" AI-generated filters. Many worry that DLSS 5 could deviate from a creator's specific artistic vision. Critics also fear that if this technology becomes the industry standard, video game graphics might start to look the same, losing their unique visual identity. "Everything about this is a betrayal of these games' artistry," said YouTuber The Sphere Hunter in a post on X Monday. "Painting over handcrafted, intentional 3D art with shiny, wrinkly, sunken-in, porous, puckered, fraudulent, filtered nonsense is deeply disrespectful. If you want this, just watch gen-AI videos all day." Countless memes mocking the tech's exaggerated features flooded the internet. Others on social media parodied the effects DLSS 5 could produce in other games.

In a Q&A on Tuesday, Huang addressed the backlash from gamers, calling them "completely wrong." Huang underlined that DLSS 5 "enhances and adds generative capability, but it doesn't change the artistic control" and that "it's in the direct control of the game developer."
The team at Digital Foundry, which specializes in game technology and hardware reviews, called DLSS 5 "disruptive and transformative" and was generally positive about it, though it saw some hiccups. "[The images] looked a little bit uncanny, I would say, but definitely the overall portrayal of those characters is much more sophisticated," said Oliver Mackenzie, video producer and writer for Digital Foundry. Bethesda's official X account replied to comments from members of Digital Foundry about Starfield and The Elder Scrolls IV: Oblivion Remastered, both published by Bethesda. "This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists' control, and totally optional for players," the publisher said. DLSS 5 is set to be released sometime in the fall.

Nvidia first released its DLSS tech back in 2018 with its RTX 2080 card: The RTX architecture introduced Tensor cores, which are essential for accelerating the calculations used by the DLSS AI. The deep learning technology was designed to upscale images and video from low resolution in real time to achieve higher frame rates. Gamers weren't impressed at first, but later versions of the technology performed better in games that supported it. DLSS 4, released last year and tweaked to 4.5 as of January, made significant improvements: better detail rendering, fewer motion artifacts, higher frame rates, and more realistic lighting via path tracing (which incorporates interactions with ray-traced lighting).

DLSS 5 works a bit differently than previous versions of the technology. According to Nvidia, DLSS 5 shifts from processing simple pixels to understanding 3D elements. By deconstructing characters into specific components -- such as skin, hair and clothing -- the AI can render them more consistently.
This results in faster performance and much more realistic details, especially for textures and lighting. Game developers control how DLSS 5 enhances images and to what degree, ensuring it matches the game's aesthetic. The demo video showcased some positive enhancements, but other shots looked like sweeping changes to the characters and the environment. On Monday, Nvidia released a list of games slated to support DLSS 5. The company has yet to provide a list of GPUs that will support the new technology; in an FAQ, it says it will release a list of supported cards closer to release.
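The per-component idea described above -- split the frame by semantic category and enhance each region with its own developer-set strength -- can be sketched in miniature. Everything in this snippet is a hypothetical stand-in: the component names, the strength table, and the "enhancement" itself (a simple brightness lift in place of a learned model).

```python
# Illustrative sketch of semantic, per-component enhancement: each pixel
# carries a semantic label (skin, hair, cloth...), and the developer sets
# a per-component blend strength. A fixed brightness lift stands in for
# the generative model; all names and numbers here are made up.

ENHANCE_STRENGTH = {"skin": 0.8, "hair": 0.6, "cloth": 0.4}

def enhance_pixel(value, component, strength=ENHANCE_STRENGTH):
    """Blend the original value with an 'enhanced' one, weighted by the
    strength assigned to this pixel's semantic component."""
    s = strength.get(component, 0.0)   # unlabeled components are untouched
    enhanced = min(255, value + 40)    # stand-in for the learned enhancement
    return round((1 - s) * value + s * enhanced)

# A three-pixel 'frame': skin and hair get enhanced, sky is left alone.
frame = [(120, "skin"), (90, "hair"), (200, "sky")]
out = [enhance_pixel(v, c) for v, c in frame]
```

The labels are what make the output controllable: because the model knows which pixels are skin versus sky, a developer can dial each category independently or zero it out, which is the control Nvidia is claiming for DLSS 5.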
[5]
Nvidia's DLSS 5 is like motion smoothing for video games, but worse
Yesterday Nvidia revealed its latest upscaling tech called DLSS 5, which it described as "the company's most significant breakthrough in computer graphics since the debut of real-time ray tracing in 2018." Sounds good, until you actually see it. According to Nvidia, the tech "infuses pixels with photoreal lighting and materials," but all anyone seemed to notice was that it turned recognizable faces into something resembling AI slop. Resident Evil Requiem protagonist Grace got a makeover that would make her look at home in a Tilly Norwood video. The Hogwarts Legacy kids looked like they'd been wrung through an Instagram filter. Even Liverpool captain Virgil van Dijk, a very real and famous person, had his features warped and became just some other dude with DLSS 5. This "significant breakthrough" imbues everything with a particular look that's become synonymous with AI-generated art. It's sort of like motion smoothing, if motion smoothing went a step farther and changed people's faces -- and it's making everything look the same. It's important to note that the next time you play Requiem on a PC, Grace won't suddenly look like she was ripped out of a Grok Imagine demo. DLSS 5 doesn't launch until the fall, it'll require some beefy hardware to operate, and it is an optional feature. But it is a technology that is being pushed by one of the most valuable companies in the world, which has support from major video game developers. And they all seem content with associating their games with a very particular aesthetic. In a statement on Nvidia's announcement blog, Bethesda boss Todd Howard said that "with DLSS 5 the artistic style and detail shine through without being held back by the traditional limits of real-time rendering," while noting that the feature will be available in Starfield. 
Jun Takeuchi, executive producer on many of Capcom's biggest blockbusters, including Requiem, said that "DLSS 5 represents another important step in pushing visual fidelity forward, helping players become even more immersed in the world of Resident Evil." It's a little strange to hear that some of the most influential names in games have decided that it's cool for Nvidia to replace their carefully crafted characters with generic AI-powered versions. In a follow-up tweet, Bethesda noted that what we're seeing is a "very early look," and that the studio's "art teams will be further adjusting the lighting and final effect to look the way we think works best for each game." So maybe the version of DLSS 5 that's available in the fall will look very different. But what we are seeing now points to a bleak future. AI has infiltrated nearly every aspect of our lives, and one of the most frustrating ways has been on an aesthetic level. AI-generated faces are an amalgamation of countless images, which are then used to spit out a sort of homogenized ideal. It's typically easy to identify thanks to a handful of telltale signs: unnaturally smooth skin and uniform features, perpetually cheerful eyes, a smiling mouth with full lips, perfectly styled hair that looks synthetic, small noses, and HDR-style lighting that highlights every contour. On their own, these can be typical facial features, but when every AI face has them all or most of them, we start veering into the Uncanny Valley. That's why so many people reacted strongly to the faces in Nvidia's announcement: They don't just look bad, they look the same as everything else. That same aesthetic is prevalent everywhere from Instagram feeds to YouTube thumbnails, and it's been inching its way from social networks to more traditional forms of entertainment and culture. I've yet to see a good AI-generated film, and yet they keep coming, and you can identify them from a single screen. 
Nvidia's new tech is the most visible example of that aesthetic infiltrating games. There are a number of reasons why seeing AI mangle an artist's work is troublesome for games in particular. The industry has been ravaged by layoffs and studio closures following some very expensive misplaced bets and a post-pandemic slowdown, so the potential for replacing human work with slop doesn't sit well. It's also a medium where a subset of the audience has some very backward ideas about what a normal human woman looks like, so making existing characters somehow both more generic and more cartoonish through an AI tool is extremely problematic. Since the announcement, there have been a number of indie developers who have come out to lambast DLSS 5 through memes and more explicitly negative statements. And while a large percentage of developers seem to be against the idea of using generative AI in their games, with some smaller studios going so far as to use "AI free" as a marketing label, it's also true that many of those in decision-making positions at larger developers don't feel the same way. That's why we have the likes of Howard and Takeuchi espousing this technology while at the same time showing a demo that people can't stop making fun of. Grace's face rendered through DLSS 5 is an early vision of what things could look like if the adoption becomes more widespread. And if that happens, being a good friend might mean turning it off when you visit, just like motion smoothing.
[6]
Nvidia CEO says he's 'empathic' to DLSS 5 concerns -- Jensen Huang doubles down on defense while decrying 'AI slop'
More of the same talking points, but it seems Nvidia has seen the backlash. Nvidia CEO Jensen Huang is softening his tone in the face of backlash, following comments the executive made at GTC 2026 where he said gamers were "completely wrong" about their criticisms of DLSS 5, as first reported by Tom's Hardware. During a recent appearance on the Lex Fridman podcast, Huang seemed more sympathetic to the vocal crowd that has framed DLSS 5 as "AI slop." "I think their perspective makes sense," Huang said. "And I could see where they're coming from because I don't love AI slop myself. You know, all of the AI-generated content increasingly looks similar, and they're all beautiful... so I'm empathic toward what they're thinking. That's just not what DLSS 5 is trying to do."

Although Huang is striking a more conciliatory tone, much of his response is similar to what we heard at GTC. "The artist determines the geometry, we are completely truthful to the geometry... so every single frame, it enhances, but it doesn't change anything." There was some confusion about how DLSS 5 worked when it was first announced, and although its inner workings still aren't clear on a technical level, Huang has said that it isn't a general-purpose generative AI model. He describes it as "content-controlled generative AI." On the other end of the spectrum, Huang also said that it isn't a post-processing filter. The technical details of DLSS 5 live somewhere between those two, and we likely won't know them until later this year, when the feature is set to release.

"The question about enhancing, DLSS 5... in the future, you could even prompt it. You know, I want it to be a toon shader. I want it to look like this, kind of. You could even give it an example and it would generate in the style of that, all consistent with the artistry, the style, the intent of the artist," Huang continued. "All of that is done for the artist so they can create something that is more beautiful but still in the style that they want."
Although the talking points about DLSS 5 remain unchanged, it seems that Huang has at least heard the criticism. "I think that they got the impression that the games are going to come out the way the games are... and then we're going to post-process it. That's not what DLSS is intended to do." Huang also made assertions that DLSS is "integrated" with the artist, and suggested that it would put the power of generative AI in the hands of artists working in game development, although we've already seen generative AI show up in shipping releases, from Clair Obscur: Expedition 33 to the recently launched Crimson Desert. Up to this point, DLSS hasn't been a tool developers have much interaction with. It's not post-processing, either, but it comes late in the rendering chain and is largely governed by Nvidia's models and various DLSS presets.

Although DLSS 5 looks like it's doing a lot, Huang said that it's just another tool, not an essential feature. "The gamers might also appreciate that, in the last couple of years, we introduced skin shaders to game developers, and many of those games have skin shaders that include sub-surface scattering that makes skin look more skin-like... [DLSS 5] is just one more tool. They can decide what to use," Huang said, ending the conversation about DLSS 5. Immediately after, without missing a beat, he said 1993's Doom was the most influential video game ever made.
[7]
Nvidia's CEO On DLSS 5 Backlash: The Haters Are Wrong
Some gamers may hate Nvidia's DLSS 5 and how it uses AI to add ultrarealistic effects to PC games. But the company's CEO, Jensen Huang, says the critics are "wrong." "We created the technology, we didn't create the art," Huang said, describing DLSS 5 as merely a tool game developers can choose to use. In a Q&A with journalists at the company's GTC event, Huang was asked about the initial harsh reception to DLSS 5, which uses a new AI "neural rendering" model to add photorealism to game characters, objects and environments. The company has described it as the biggest leap in gaming graphics in years. However, some consumers have mockingly compared DLSS 5 to AI-powered face filters on social media apps. Others, including IGN, have argued the technology represents a "slap in the face" to the art of video game design.

In the Q&A, Huang was specifically asked about how DLSS 5 allegedly creates "worse, homogenous" imagery or imposes Nvidia's view on how games should look. In response, Nvidia's CEO said: "First of all, they're completely wrong." Huang then alluded to how DLSS 5 was designed to understand 3D characters, environments and their motion to add a new level of realism, as opposed to ham-fisted AI face filters. "As I explain very carefully, DLSS 5 infuses controllability of geometry and textures and everything about the game with generative AI. Now, you can still fine-tune the generative AI so that you could make it your artistic style. And so all of that is up to you. We created the technology, we didn't create the art," he said. "DLSS 5, because it's geometry-controlled, it's conditioned by the ground truth of the game, it enhances and adds generative capability to it, but it doesn't change artistic control," he also said. "It's not post-processing at the frame level, it's generative control at the geometry level."
"This is very different from generative AI," he later added. "It's content controlled generative AI. That's why we call it neural rendering." Nvidia also emphasized similar points during our hands-on with DLSS 5. Although the technology functions as a proprietary model, the company says game developers will be able to fine-tune the neural rendering effect to their liking. Still, some gamers, including PCMag staff members, say that DLSS 5 seems to go overboard with the AI-added effects, creating hyper-real faces that give uncanny valley vibes. For now, Nvidia noted in a FAQ: "DLSS 5 at GTC is an early preview and the model is still being optimized. We will share these details closer to release in fall 2026." During his Q&A, Huang also mentioned ray-tracing, which Nvidia introduced to RTX 2000 GPUs back in 2018. He noted the enhanced lighting and shadow effects initially faced criticism as well, but the technology experienced improvements and became mainstream in gaming over time. "Everybody poo-pooed it. Everybody said ray-tracing fubar. If we didn't have RTX today, doing full scene path-tracing, computer graphics wouldn't be what it is today," he added.
[8]
Bethesda says it's "further adjusting" the look of Starfield's DLSS 5 visuals following backlash
* Bethesda says it will tweak DLSS 5 lighting and effects in Starfield after fan backlash.
* DLSS 5 alters visuals beyond just upscaling, adding an AI sheen that many feel looks like AI slop.
* Nvidia says that developers will have full control over DLSS 5 and how it changes their games' visuals.

Bethesda says it will work on "further adjusting the lighting and final effect" of Nvidia's controversial DLSS 5 AI-enhanced visuals in Starfield, following backlash from fans who have heavily criticized the AI-slop-like look of characters' faces. Instead of just upscaling a title to a higher resolution like previous versions of the AI tech, DLSS 5 fundamentally changes a title's visuals, adding additional character detail and changing the environment's lighting and color. This shifts the game's visuals well beyond what its original developer envisioned and, in most cases, adds an unnecessary AI sheen to titles that, at least so far, makes them look worse. Nvidia's CEO Jensen Huang has already responded to the backlash, saying that gamers are "wrong" about DLSS 5 and that game developers will have the final say on how their games look with the feature enabled. Whether that statement proves entirely accurate remains to be seen.

The studio says the look of DLSS 5 in Starfield is under its "artistic control." Hopefully, the final aesthetic is significantly less intense. Bethesda head Todd Howard initially called DLSS 5's impact "amazing" in an Nvidia marketing video focused on the AI feature. Now, in a recent post on X, the studio has pulled back on that initial enthusiasm, saying, "This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game." This means that, at least in Starfield, DLSS 5's AI sheen will likely be dialed down to some extent.
If Huang's statements about affording developers ample control over DLSS 5 are accurate, hopefully, the feature doesn't look as extreme as it currently does in every title we've seen so far. Nvidia recently confirmed that DLSS 4.5 with 6x Multi Frame Generation launches on March 31st for RTX 50-series GPUs. Starfield's upcoming Free Lanes Update adds new weapons, quests, points of interest, and a new vehicle to the space exploration title. It also allows players to travel through space rather than just fast-travel, a feature fans have requested since Starfield's release. A new paid expansion called Terran Armada is also coming to the game.
[9]
Jensen Huang says gamers are "completely wrong" about DLSS 5 -- Nvidia CEO responds to DLSS 5 backlash
At a press Q&A with Tom's Hardware at GTC 2026, Nvidia CEO Jensen Huang downplayed criticism of DLSS 5, the company's new use of AI and neural rendering to infer how certain features of games would look if they were more photorealistic. Since the debut of the feature, some critics have vocally complained on social media that the technology is making games look worse or homogeneous, or only showing Nvidia's view of the world. Much of the criticism has focused on the updated appearances of Resident Evil Requiem's Grace Ashcroft and Leon Kennedy. "Well, first of all, they're completely wrong," Huang said in a Q&A session in response to a question from Tom's Hardware editor-in-chief Paul Alcorn about the criticism. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," Huang continued. He added that developers can still "fine-tune the generative AI" to make it match their style, adding that DLSS 5 adds generative capability to the existing geometry of the game, but that it "doesn't change the artistic control." "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," he said. Huang also said that developers can try the tool and see how they want to use it, suggesting that it's up to a developer to try to make a "toon shader" or see if the game should be "made of glass." "All of that is in the control -- direct control -- of the game developer," he said. "This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering." We'll see if the vocal gamers who say they dislike what they see change their minds as we see more. DLSS 5 is set to launch in the fall, and there will likely be far more demos of this technology that are more fully baked before then.
[10]
We Tried Nvidia's DLSS 5: Is It Just an AI Image Filter, or the Future of PC Gaming?
DLSS 5 on and off for the game The Elder Scrolls IV: Oblivion Remastered. (Credit: PCMag) Remember the huge graphical leaps we'd get moving from one console generation to the next? Those days are over...or so we thought. Nvidia's newly announced -- and already controversial -- DLSS 5 may be the graphics jump that gamers have been waiting for. At Nvidia's GTC event, I was able to try a demo of DLSS 5, and the technology wowed me several times, making me wonder: Am I staring at the future of gaming graphics? DLSS 5: Promise and Controversy DLSS is best known for using AI models to increase a PC game's frame rate for smoother gameplay. But on Monday, Nvidia introduced DLSS 5, which uses a "neural rendering" model to add photorealistic effects. The company has been quietly developing the technology for over three years, and the result can make game characters feel startlingly alive by injecting even more shadows, textures, and definition over faces, clothes, and environments, creating a new sense of depth. But despite the improvements, Nvidia's DLSS 5 announcement has already received some backlash over concerns that the GPU maker is merely adding an Instagram-like image filter over game-character faces. Another criticism is that DLSS 5 is acting like an AI slop generator and allegedly forcing AI imagery on top of carefully crafted characters created by game developers. Those worries were on my mind as Nvidia gave me a closer look at DLSS 5 on Monday. But as I saw the technology in action, it also became clear to me that DLSS 5 could take computer graphics to a whole new level. Eyes On With the Next DLSS One thing is clear: DLSS 5 is no simple face filter. Video-game rocks and stones suddenly looked like rocks from real life. The same was true of trees, water, a medieval castle, the interior of Hogwarts School, and even an espresso machine: DLSS 5 added a new level of photorealism that traditional game rendering had struggled to achieve.
Another "wow" moment came when DLSS 5 was activated during a demo of The Elder Scrolls IV: Oblivion Remastered. Characters originally modeled two decades ago instantly began to look more like real people; their odd "potato" faces had vanished, replaced by fully fleshed-out faces with photorealistic hair, skin, eyes, and clothes. Sure, I was no longer seeing the ugly-but-charming facial models from before. But in return, DLSS 5 unlocked a new level of immersion. In Assassin's Creed Shadows, DLSS 5 turned the game's lush forests into something that seemed organic. The effects added even more variety to the light and texture of the dense foliage and rocks, making the depicted landscape indistinguishable from real-world photography. Crucially, the experience didn't feel fake or forced. Nor did it act like a conventional AI image filter, which can alter a person's face, but in a clumsy, heavy-handed way that masks over the original. Nvidia points to how DLSS 5's neural rendering is designed to understand 3D characters and objects, including colors, hair, fabric, skin, and movement, in addition to the surrounding environment. In other words, the technology is supposed to preserve game models before enhancing them. The Power Question: What Will It Take to Run DLSS 5? That all said, DLSS 5 remains a work in progress. In fact, Nvidia demoed the technology using not one but two GeForce RTX 5090 graphics cards -- each starting at $1,999, though actual pricing has since risen far higher due to the ongoing memory shortage. The goal is to optimize DLSS 5 so it can run on a single GPU. But in the demo I saw at GTC, one RTX 5090 card was used to render the game, while the other added neural rendering effects. That suggests DLSS 5 will need major tweaks to make it practical for the company's graphics cards. Nvidia will need to move quickly on its optimizations, since it plans to launch the technology this fall. We wouldn't be surprised if the initial launch is limited in scope.
Other big unknowns include whether DLSS 5 will introduce a major performance hit -- a potential irony, considering that DLSS was developed to boost frame rates on sometimes underpowered hardware. How DLSS performs over an entire game is another major question. I was only able to briefly try out the feature in both Hogwarts Legacy and Oblivion, and my hands-on session was limited to walking around a single scene, rather than battling enemies or throwing magic spells. Understandably, some gamers may be skeptical or even alarmed, given the ethical issues and legal battles surrounding generative AI. At the same time, the PC market is reeling from an AI-driven memory shortage that risks undercutting DLSS 5 by inflating the cost of admission to buy Nvidia GPUs. But putting all that aside, I have to say DLSS 5 displayed the most realistic gaming graphics I've ever seen -- and I'm looking forward to experiencing more. Once you see DLSS 5 in action, it's hard to deny the potential it holds. In the meantime, Nvidia is already responding to some of the backlash, explaining that game developers will have full artistic control over DLSS 5 and can fine-tune the model to their liking. Some major developers, including Bethesda, Capcom, and Ubisoft, are already on board and preparing to support the technology in their own games.
[11]
Nvidia's DLSS 5 is the (glossy) subject of memes and backlash from gamers
Upgraded graphics in video games sound like they would be popular amongst players and enthusiasts, but Nvidia is finding that the opposite appears to be true with its latest tech. The company announced at a conference Monday that the new version of its artificial intelligence technology designed to boost performance will enable game developers to deliver "photoreal computer graphics previously only achieved in Hollywood visual effects." DLSS (Deep Learning Super Sampling) is Nvidia's image enhancement technology. First released in 2018, the technology was initially used to upscale resolution, but now it can generate entirely new frames. It has been integrated in over 750 games, according to the company. DLSS 5 will arrive this fall, but Nvidia presented a sample of what games will look like with the new technology earlier this week. So why and how are gamers mad about what the company calls its "most significant breakthrough in computer graphics" in recent years? Because the examples it used in a presentation look -- as those on the internet would say -- "yassified," or heavily edited in an attempt to look more attractive, often to the point of comedy. A brief clip of the character Grace Ashcroft from Resident Evil Requiem shows a before and after shot of her with DLSS 5 both off and on, and the difference is significant. While some aspects of the image vastly improve its appearance, like enhanced details and texture in the background, the character's face looks significantly different. With DLSS 5, she has more plump lips, the bags under her eyes don't appear as dark and it looks like she's wearing makeup. As the sample clips continue, more characters from Hogwarts Legacy, Starfield and EA Sports FC get similar treatment. 
"DLSS 5 is the GPT moment for graphics -- blending handcrafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," Nvidia founder and CEO Jensen Huang said in a release Monday. Many gamers responded with criticism, with one commenter on YouTube writing, "The obsession with fidelity over art direction is reaching terminal levels." Some felt the technology undermines artistic intent from the game designers, changing lighting choices and facial features instead of simply enhancing the image. Some also said the clips with DLSS 5 on had a general uncanny feeling, featuring hallmarks of AI-generated imagery. Others responded with memes. One person posted side-by-side images on X of the famous Great Depression-era photograph "Migrant Mother" with a heavily edited version where the once despondent woman is flashing a bright smile and sporting heavy makeup. The text reads "Nvidia presents DLSS 5." DLSS 5 has prompted a new meme template, and other posts have followed a similar format: two similar images appear, one clearly edited, often with accompanying text like "DLSS 5 off vs. DLSS 5 on." One post uses the popular and often-memed image of actor Kevin James and juxtaposes it next to a version of the photo in which his face looks entirely different. Others take images that were clearly designed to be in an animated, cartoon-like style and put them next to an unsettlingly realistic version of that image. In a comment pinned to its YouTube video showcasing what the technology can do, Nvidia said "game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic." Huang also responded to the criticism during a press Q&A on Tuesday, saying that critics are "completely wrong."
"The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," he said in response to a question about the criticism from Tom's Hardware. Developers can still "fine-tune the generative AI" to make it match their style, he said, adding that DLSS 5 "doesn't change the artistic control." Nvidia said DLSS 5 will come to games including Assassin's Creed Shadows, Delta Force, Justice, Phantom Blade Zero, Sea of Remnants and several other titles when it arrives in the fall.
[12]
Gamers are right to be disgusted by NVIDIA's DLSS 5
You can sum up the gamer response to NVIDIA's DLSS 5 announcement with the ever-relevant Fallout 4 meme: "Everyone disliked that." Across social media and Reddit last night, I couldn't find anyone who's genuinely positive about the potential for DLSS 5, which uses AI to add "photorealistic" lighting and materials to in-game models and environments. Instead, it's mostly complaints about the feature being another avenue for AI slop. And you know what? I agree. It's not unusual to see gamers being reflexively angry about new technology on the internet, especially when it's being pitched by NVIDIA as the "biggest breakthrough in computer graphics" since its RTX 20-series GPUs arrived in 2018 with real-time ray tracing. There was plenty of suspicion around DLSS's original AI upscaling model, as well as the "fake" frames generated by later iterations. But the few demos we've seen of DLSS 5 basically look like "yassified" AI filters for popular games. Leon and Grace from Resident Evil: Requiem have more distinct facial and hair detail, but they look a bit too slick. There are more wrinkles on an old woman in Hogwarts Legacy. And the face, hair and clothing from a Starfield character gain an uncanny sheen. None of the demos have the immediate impact of the Star Wars real-time ray tracing short ILMxLab produced with NVIDIA seven years ago. That demonstration showed us glorious reflections and lighting effects we'd never seen before in real-time. The DLSS 5 demos, on the other hand, don't look much different from the AI filters that make you look more presentable for Zoom calls. There's no genuine excitement for DLSS 5, just NVIDIA telling us that it's groundbreaking. There's also plenty of concern about DLSS 5 straying from an artist's original intent, as well as a potential homogenization of game visuals if every developer starts using the feature. 
NVIDIA claims developers will have "detailed controls for intensity, color grading and masking," which will help DLSS 5 stay in line with a game's aesthetic. But we don't have any direct developer experience with the feature yet -- some artists may want far more control than NVIDIA wants to give. The difference between DLSS 5 and earlier versions of NVIDIA's upscaling is like the difference between generative AI and more traditional machine learning models. NVIDIA relied on the latter to make low-resolution textures and models appear sharper, and later to insert generated frames to smooth out gameplay and raise your fps count. As Wirecutter and former Polygon editor Arthur Gies points out, you could argue those features were in service of delivering what developers originally intended. But DLSS 5's neural model applies its concept of "photorealism" on top of what games are rendering -- it's like watching a Pixar movie that let OpenAI's Sora do a final visual pass. Part of the negative response towards DLSS 5 may stem from a widespread anti-gen AI sentiment, but that doesn't devalue the criticisms either. Similar to AI generated text, images and video, there's a dehumanizing aspect about DLSS 5. It can erase the work of human artists (despite how much control NVIDIA claims they have), and it also feels like a calculated attempt to appeal to gamers who just want shinier graphics. NVIDIA showed off how generative AI could be used to create dialog and voices for NPCs last year at CES, but that was also widely disliked (and I called it a genuine nightmare). Of course, I can't fully judge DLSS 5 until I see it in action beyond a short demo. But I think the visceral disgust is an important indicator that many gamers aren't onboard with the AI-powered future NVIDIA is trying to sell us. And perhaps the idea of chasing "photorealism" may be a bit of a fool's errand.
It may be appropriate for some games, but as Nintendo and indie PC devs have shown, you can also make some of the best games of all time without striving for realism. Tears of the Kingdom could use a better framerate and higher resolution textures, but it certainly doesn't need DLSS 5.
[13]
Nvidia CEO Jensen Huang says gamers calling DLSS 5 AI slop are "completely wrong"
Serving tech enthusiasts for over 25 years. TechSpot means tech analysis and advice you can trust. A hot potato: Do you think DLSS 5 makes games resemble AI slop? That it's little more than an AI filter inserted into titles that neither wanted nor needed it? If you are one of these many people, Nvidia boss Jensen Huang wants you to know that you're "completely wrong." Few technologies have faced as much criticism upon their reveals as DLSS 5. The vast majority of gamers are aghast at the way it gives characters a typical AI-generated uncanny valley appearance - especially Resident Evil Requiem's Grace Ashcroft and Leon Kennedy - as opposed to the "photorealistic" look Nvidia claims. Tom's Hardware editor-in-chief Paul Alcorn asked Huang about the criticism in a recent Q&A session. "Well, first of all, they're completely wrong," Huang said. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI." Huang then repeated Nvidia's previous disclaimer: that developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic, and that they can "fine-tune the generative AI" to match the intended style. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," he said. Huang added that it's up to developers to use DLSS 5 however they like. "This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering," he concluded. We recently published Ryan Shrout's post on DLSS 5. The analyst saw the technology running and was very impressed, emphasizing that it's not just a face filter but a tool that amplifies good rendering. Also read: I saw DLSS 5 running across multiple games. It's not a face filter. Despite Huang's explanations, everyone has their own opinion on DLSS 5.
An interesting Reddit post argues its biggest problem is not that it looks "too AI" or betrays artistic intent, but that its tone mapping is overly aggressive, creating an ugly, overcooked HDR effect that distorts lighting, color, and mood. User Veedrac argues that the relighting underneath is often genuinely strong, but that DLSS 5's heavy-handed HDR-style processing muddies the result. By blending DLSS 5 with elements of the original image - such as restoring some of the original saturation, lightness, and darker tones - the post says it is possible to keep the improved lighting while making scenes look more natural and faithful. DLSS 5 - Fixing it in post by u/Veedrac in hardware After comparing several images, Veedrac concludes that many of DLSS 5's gains are real, but are being overshadowed by bad tone mapping that Nvidia should dial back. This isn't the first time Huang has clapped back at critics of something his company is heavily invested in. In January, the CEO said the relentless negativity around AI is hurting society and has "done a lot of damage."
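The blend described above can be sketched in a few lines. This is a hypothetical per-pixel illustration of the idea, not Veedrac's actual method: keep the hue of the relit DLSS frame, but pull lightness and saturation partway back toward the original frame. The function name and blend weights are assumptions for illustration; RGB components are floats in [0, 1].

```python
import colorsys

def blend_tone(original_rgb, dlss_rgb, sat_keep=0.5, light_keep=0.4):
    """Keep the DLSS frame's hue, restore some original saturation/lightness.

    sat_keep / light_keep are the fractions of the ORIGINAL frame's
    saturation and lightness to retain (illustrative defaults).
    """
    _, orig_l, orig_s = colorsys.rgb_to_hls(*original_rgb)
    dlss_h, dlss_l, dlss_s = colorsys.rgb_to_hls(*dlss_rgb)
    # Weighted averages pull the "overcooked" HDR-style values back
    # toward the source image while keeping the new relighting's hue.
    l = light_keep * orig_l + (1 - light_keep) * dlss_l
    s = sat_keep * orig_s + (1 - sat_keep) * dlss_s
    return colorsys.hls_to_rgb(dlss_h, l, s)
```

In practice this would run over every pixel of a screenshot pair; the point is simply that the post's fix is a mix of the two images in a lightness/saturation space, not a rejection of the relighting itself.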
[14]
Nvidia faces backlash over 'breakthrough' AI graphics feature
A new feature from chip-maker Nvidia that promises cinematic-quality graphics using AI has prompted a backlash online, despite the company claiming it would "reinvent" what is possible in video games. Nvidia said the DLSS 5 tool, which will be rolled out this autumn, would allow games to have "photoreal computer graphics previously only achieved in Hollywood visual effects". In images shared with the media, the tech was shown radically changing the appearance of characters and environments in games such as Resident Evil Requiem and Hogwarts Legacy. But some industry professionals said its use of AI went too far, making graphics feel airbrushed and hollow. "Clearly this is a massive glow-up for environments," said video game critic Alex Donaldson on Bluesky. "The character stuff is uncanny & weird tho, & it feels like artistic expression risks being squeezed out." Jeff Talbot, a concept artist at Gunfire Games, posted: "This is NOT the direction games should be going in. Each DLSS 5 shot looked worse and had less character than the original." Nvidia has become a household name because of the advanced microchips it makes for AI data centres. But the firm was originally focused on gaming and is still a driving force of innovation in the industry. Unveiling DLSS 5 at its main annual conference in Silicon Valley, Nvidia said the tech was the company's most "significant breakthrough" in computer graphics since it introduced real-time ray tracing in 2018. Ray tracing is a rendering technique that transforms the look of light, shadows and reflections in games. The tech giant said DLSS 5 would use AI to generate "photoreal" graphics of things like hair, fabric and skin, along with more realistic environmental lighting conditions. "We are reinventing computer graphics once again," said Nvidia boss Jensen Huang. 
"DLSS 5 is the [Chat]GPT moment for graphics - blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression." Nvidia said the tech was supported by major publishers and game developers including Bethesda, CAPCOM and Warner Bros. Games. It comes amid growing anger among pockets of the gaming community about the increased use of AI-generated content in titles, which has resulted in some studios scrapping games or promising to limit their use of the technology. Running With Scissors, the publisher behind the Postal shooter franchise, pulled a forthcoming game after critics said it had used AI-generated graphics. The role-playing game Clair Obscur: Expedition 33 won Game of the Year at the Indie Game Awards, but was then disqualified after it emerged its developer had experimented with AI-generated images but ultimately not used them. Some, however, have defended AI content, arguing it is pushing the industry forwards. Charlie Guillemot, joint chief executive of Vantage Studios, which makes Assassin's Creed Shadows, said DLSS 5 would make the game feel more immersive. "The way it renders lighting, materials and characters changes what we can promise to players. On Assassin's Creed Shadows, it's letting us build the kind of worlds we've always wanted to."
[15]
Nvidia's AI tech for improving game graphics still has some growing up to do
DLSS 5 doesn't always look bad, and developers can control intensity and color grading to ensure the results match their vision for their games Nvidia's DLSS is a clutch of machine learning-powered image rendering technologies that come in handy for boosting the frame rate in your games and improving lighting and image quality. They use the processing power of graphics cards to make this happen on your computer. The latest version, DLSS 5, is described as a neural rendering model that "infuses pixels with photoreal lighting and materials," and Nvidia says it's the company's biggest advancement in computer graphics since it first cracked ray tracing (a technique for realistically simulating light in 3D scenes) back in 2018. "DLSS 5 is the GPT moment for graphics - blending handcrafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," said Jensen Huang, Nvidia's founder and CEO. You can see a demo of this vision in the brief video below: That's a lot to take in, in just over a minute. Previous versions of DLSS made great advances in improving gaming experiences for players by leveraging machine learning to eke out more performance from the hardware you already have. Most recently, we got multi frame generation in DLSS 4.5, which essentially conjured up to five additional frames for every one frame your GPU could render in fractions of a second, making for massive gains in smoothness and responsiveness. This is the sort of breakthrough that people would typically embrace with open arms, particularly when RAM and GPU prices are through the roof to the point of becoming inaccessible for most gamers looking to upgrade their rigs. With DLSS 5, Nvidia's doing something different. It uses an AI model to tack on lighting and materials to enhance frames, while maintaining consistency even at 4K resolution so you don't have weird artifacts hovering around on screen. 
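The multi-frame-generation arithmetic above is simple but worth spelling out: with up to five generated frames inserted per rendered frame, the presented frame rate is the rendered rate times six. A trivial illustrative helper (the function name and numbers are ours, not Nvidia's):

```python
def presented_fps(rendered_fps, generated_per_rendered=5):
    """Frames shown per second when AI frames are inserted between
    rendered ones: each rendered frame yields itself plus N generated."""
    return rendered_fps * (1 + generated_per_rendered)

# e.g. a GPU rendering 30 fps natively would present 180 fps with
# five generated frames per rendered frame.
```

Note that this multiplies smoothness, not responsiveness: input is still sampled only on rendered frames.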
Nvidia says it understands that game tech should help precisely represent the developer's artistic intent - but the implementation seems to be veering away from that drastically enough to draw plenty of negative reactions (check out Kotaku's coverage, and head to the comments in Digital Foundry's analysis). You can see the AI slop effect in the overtly sharpened facial features in the clips of Starfield and Resident Evil Requiem above, and even in Hogwarts Legacy below. There are some bits that don't look all that awful, like lamp posts and cauldrons close to the foreground - but what you see are imagined details from a bunch of algorithms superimposed onto painstakingly created elements to look 'better.' What seems to be happening is that we're seeing a lot of unnatural details that are detracting from the experience. It also appears to add wholly different treatments to certain scenes: grim environments can end up seeming a lot brighter and less foreboding to walk through. All that said, it's worth noting that developers will have control over intensity, color grading and masking applied in their titles with DLSS 5 switched on, which means they'll have a say in how their games end up looking with AI-generated enhancements. The company also notes it'll only arrive this (Northern Hemisphere) fall, which gives it - and the studios behind DLSS 5-supporting titles like Starfield, Assassin's Creed Shadows, and Where Winds Meet - time to gather and respond to players' feedback before rolling the tech out widely. Hopefully they're all listening.
[16]
Does Nvidia's Slop-ified DLSS 5 Game Lighting Make Any Sense?
Nvidia finally reveals what's actually going on with its upscaler, and it's much less impressive than it first seemed. Nvidia's newfangled AI-based "neural rendering" technology, DLSS 5, dramatically modifies (aka "slop-ifies") game visuals with the help of AI. Nvidia promised this technology would revolutionize in-game lighting at its most fundamental level. After hearing from lighting experts, developers, and Nvidia itself, it turns out that DLSS 5 isn't nearly as impressive as all that. Both gamers and game developers were none too happy about DLSS 5 "slop-ifying" faces in games like Resident Evil Requiem, Starfield, and Hogwarts Legacy. For its part, Nvidia was adamant this technology was not modifying characters at the "geometry level." On Thursday, YouTuber Daniel Owen went back and forth with Nvidia's own "GeForce Evangelist" Jacob Freeman about what's going on inside DLSS 5. To summarize, it's a sophisticated, ultra-fast AI image generator. Freeman confirmed DLSS 5 is only analyzing a "2D frame plus motion vectors as input" and feeding that information into a generative AI model. Essentially, the AI is only analyzing the surface information of each rendered frame and using motion vectors to determine what the next frame will look like. These motion vectors are data points that describe how far in-game objects have moved from one frame to the next. Nvidia's AI uses these vectors to estimate where those objects will be and make motion look smoother. The AI doesn't have access to intrinsic scene information, such as PBR (physically based rendering) data. PBR helps inform a game engine of what materials are in a scene, such as wood, metal, etc. Instead, with DLSS 5, "materials are inferred from the rendered frame." The AI merely guesses what each object should look like, then papers over the frame with its own interpretation. Changing a scene's textures and lighting may also disrupt how players perceive the environment.
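To make the "2D frame plus motion vectors" idea concrete, here is a minimal sketch of frame reprojection: each pixel carries a (dx, dy) vector saying where it moves next, and warping the last frame forward by those vectors gives a rough prediction of the next frame. The list-of-lists frame format and nearest-pixel warp are illustrative assumptions; real pipelines operate on GPU textures with filtering and disocclusion handling.

```python
def reproject(frame, motion, w, h):
    """Warp the previous frame forward: each pixel moves by its (dx, dy)
    motion vector. Pixels that fall off-screen are dropped; uncovered
    pixels stay None (the 'holes' an AI model must then fill in)."""
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion[y][x]
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                out[ny][nx] = frame[y][x]  # pixel lands at its new position
    return out
```

The sketch also shows why this input is "surface only": the warp knows where pixels go, but nothing about what material or geometry produced them.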
In a follow-up video posted by Digital Foundry, the AI reinterprets two ceramic-seeming cups on a table and decides they are completely made of metal. Gizmodo reached out to Nvidia to confirm our interpretations of Freeman's comments. We'll update this story if we hear back. It's likely inaccurate to say the AI is interpreting a "screenshot" of each scene, as Owen put it in his video. However, it remains unclear at what part of the rendering stage the AI is taking its information from. "This is a very early preview of the tech," Freeman responded when asked whether the AI was capable of reinterpreting a scene. In a Q&A hosted at GTC 2026, Nvidia CEO Jensen Huang was adamant that the technology doesn't result in homogenized graphics in games. He then went on to describe the technology in vague terms that mean little to the wider gaming public. "DLSS 5 fuses the controllability of the geometry and textures and everything about the game with generative AI," Huang is recorded saying at the Q&A. "It's not post-processing; it's not post-processing at the frame level; it's generative control at the geometry level." The Nvidia CEO has a tendency to fall into technobabble when detailing his company's products. He initially described DLSS 5 and its "neural rendering" technology as "combined 3D graphics -- structured data -- with generative AI -- probabilistic computing." Huang's comments seem to fly directly in the face of Freeman's description. DLSS 5 is not remapping new textures and lighting onto 3D objects. It's spitting out AI-generated 2D images at such a rapid rate that it can keep up with a game's frame rate. As with all generative techniques in games, such as multi-frame gen, this will likely increase latency and lead to odd artifacts within scenes, especially with objects in motion. DLSS 5 lighting may look more "realistic" in the sense that it reflects accurately off of objects in a scene.
The issue is that the AI will not understand the artistic direction behind each moment in a game. The one thing missing from this debate is whether the lighting makes any sense outside of programming and gaming circles. Gizmodo reached out to Jonathan Harris, a local independent photographer and filmmaker based in New York, to get his take on DLSS 5's realism. "In some cases, the lighting does look better, but at the same time, you lose the creative edge," Harris said. "It seems like it's optimizing light." Gizmodo showed the indie filmmaker two screenshots from environmental lighting in one of Nvidia's demo scenes. He pointed out that the one with DLSS 5 enabled looked as if it were trying to resemble an overcast sky. The AI even added clouds that weren't there in the original frame. Left, one of Nvidia's demo scenes with DLSS 5 on; right, the same scene with DLSS 5 off. © Nvidia "Things do seem to look sharper, which can be nice, but in a game like Resident Evil, which is supposed to have a little bit of fog and haze, the lighting overall is just a bit brighter... and that may work against the tone of the game," Harris said. Nvidia said in its FAQ that game devs will have the ability to change color grading and mask off parts of scenes. This means creators could keep the AI from modifying the light and texture on specific characters or objects in a frame. One Redditor pointed out that the issue with Nvidia's original DLSS 5 images may be more due to the AI's awful tone mapping. By toning down the overbaked colors in the original screenshots, the characters seem to fit much better with the original games. Other than color grading, developers may not have control over whether the AI can add elements to a scene that weren't there before.
In a thread on X, longtime game dev and ex-Ubisoft programmer Muhammad Moniem said "it is an AI trained post-processing filter." While he stopped short of calling it "slop," he also said, "This is not an Instagram filter by any mean [sic]! This is a TikTok filter." Let's get one thing straight. There's a reason why many of these DLSS 5-modified characters look horrendous. They all bear too much resemblance to the AI-generated Instagram and TikTok videos that have flooded users' feeds. Whether you actually wanted a "photorealistic" game or not, these AI-infused frames miss the mark by miles. While we've seen hands-on from some outlets like Digital Foundry and PC Guide, most of the footage was cut up and spliced. Digital Foundry's original video of the tech showcased a few static environments and character models. The only extended video of DLSS 5 we've seen comes from the YouTube channel HotHardware. The problem with this footage is that Nvidia seems to restrict people from seeing the lighting effects when the characters are in motion. The Nvidia staff will turn off DLSS 5, move to another end of the room in games like Hogwarts Legacy and Starfield, then turn the upscaler back on. Nvidia's staff said that the model is getting information "from what's on screen and what's moving on screen." What's still unclear is how fast it will be able to do this when the player's camera is moving as well. Nvidia showed DLSS 5 to an extremely limited press pool, and it's done a very poor job explaining how the technology even works. The game and DLSS 5 model were running on two Nvidia GeForce RTX 5090 GPUs. One graphics card was dedicated to running the AI model exclusively. Nvidia told multiple outlets it was already testing DLSS 5 running on a single GPU. Despite all these lingering questions we have about whether it will look good or even run on any but the highest-end GPUs, Nvidia still intends to launch DLSS 5 this fall. The wider developer community seems much less on board.
"A lot of these companies take the time to high-res 3D scan these people's faces," said Mike York, an animator who has worked on games like Red Dead Redemption 2 and GTA V, in a YouTube video. "They want to get those original textures. They want to get that little scar underneath the eye... [DLSS 5 is] squashing over somebody's hard work." Our friends at Kotaku quoted a whole assortment of game developers who do not like DLSS 5 or what it represents for the industry. Nvidia's CEO showed this technology off at the company's own AI-centric GTC conference, even though it had the opportunity to bring it to developers at GDC (Game Developers Conference) 2026, which took place just a few weeks prior. Nvidia was so focused on AI that it ignored everything the gaming community actually wants from today's games. It was never about achieving "photorealism." It was about creating an experience that matters, that impacts the players, and that adds something to the world. You know: art.
[17]
Nvidia's CEO says gamers are "wrong" about DLSS 5's AI slop backlash
Patrick O'Rourke is XDA's News Editor and Entertainment Segment Lead. Previously, he was Pocket-lint's Editor-in-Chief, the Editor-in-Chief of Canadian tech publication MobileSyrup, and earlier in his career, he worked as the technology editor at the Financial Post and Postmedia. He's based in Toronto. Over the past 15 years, he's written thousands of articles. Patrick has also interviewed dozens of tech industry executives and covered GDC, E3, Gamescom, WWDC, Apple keynotes, Samsung Unpacked events, and more. Patrick has a BA in journalism from Toronto Metropolitan University.

Summary
* DLSS 5 uses generative AI to change geometry, textures, and lighting -- not just upscale visuals.
* Nvidia insists developers keep artistic control and can fine-tune DLSS 5's generative effects.
* Trailer comparisons show an otherworldly AI sheen; many fear DLSS 5 will bring 'AI slop' to games.

At GTC 2026, Nvidia announced DLSS 5, a more advanced version of its AI upscaling technology, and the response has been resoundingly negative. Rather than just improving a game's resolution and frame rate with the help of AI, DLSS 5 actually changes a title's graphics, adding detail to characters' hair and faces, and even to a game's environment and lighting. The resulting effect... isn't great, to put it lightly, adding an unnecessary AI sheen to visuals that many are calling AI slop. Check out the video below, and you'll see what I mean. However, during a recent Q&A event at GTC, Nvidia CEO Jensen Huang explained that this impression is misguided. Taking a question from Tom's Hardware's Paul Alcorn, Huang said, "Well, first of all, they're completely wrong," and reiterated that game developers will have control over how DLSS 5 is implemented in their titles. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," said Huang.
He added that developers will be able to "fine-tune the generative AI" to match their style, and said that it "doesn't change the artistic control." "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," said Huang.

The AI slopification of gaming is coming, whether we like it or not
Hopefully, most developers opt for a more low-key version of DLSS 5's AI generation features

While this is likely accurate, the aesthetic changes shown in the DLSS 5 trailer Nvidia recently released don't really support that perspective or paint the upcoming AI upscaling technology in a positive light. For example, the Resident Evil Requiem transformation focused on Grace Ashcroft looks positively otherworldly with DLSS 5 on, and the same can be said for most of the comparisons, particularly the one featuring characters and environments in Starfield. It will all come down to what the technology looks like when it launches, and the level of control developers will afford Nvidia over the look of their titles with DLSS 5 on, if Huang's statements about the upcoming AI upscaling feature prove accurate. Either way, I can't help but feel like the wave of AI slop that's progressively infected the internet over the past few months is now coming for the gaming space. Nvidia recently confirmed that DLSS 4.5 with 6x Multi Frame Generation launches on March 31st for RTX 50-series GPUs.
[18]
Calling Nvidia's DLSS 5 "AI slop" completely misses the point of modern rendering
Another day, another DLSS version sows discord among the gaming masses. It takes me right back to the release of the first public version of DLSS, and all the surrounding negativity. Yet today, all but a vocal minority of gamers would agree that DLSS has been the killer feature of the generation. DLSS uses machine learning algorithms to intelligently upscale a rendered image so that it appears to be a higher resolution. This solved a major issue created by the shift from CRT monitors (which can display any arbitrary resolution without scaling artifacts) to flat panel displays, which have a physical pixel grid. If you can't render your image at that "native" resolution, you need to apply a scaling solution, and before DLSS they all looked rather terrible. As DLSS matured, it reached the point of providing superior image quality to "native" rendering combined with the popular Temporal Antialiasing (TAA) technique. With DLSS 4.5, NVIDIA polished the final rough edges of the technology, all but fixing issues with image stability and disocclusion artifacts. It was such a big leap that people wondered why it was just a point-five release, but it turns out the answer is way more radical than we could imagine.

What is DLSS 5? Imagining a better (game) world

DLSS 5 includes a new technology that uses AI to rework the lighting in a game so that it takes a major leap towards photorealism. It does not change the geometry or texture work, but "imagines" what the game would look like if rendered with extremely high-quality light. You can see the examples in action below in NVIDIA's official release video. The end result is simply beyond anything that current hardware can achieve using traditional rendering methods. It pushes these real-time games into the realm of pre-rendered CG. In this preliminary analysis video by Digital Foundry, you can see that this is a real implementation in a real game.
The results are consistent, and the game runs as normal; it just looks different. Just like real-time ray-tracing technology, this is using neural networks to make something possible that simply cannot be done by brute force at the moment. Now, it's worth noting that this demo is using not one, but two RTX 5090 cards: one to run the game, and one to apply the DLSS pass to the graphics. But obviously the intent is for this technology to run on a single card when it's released to the public.

Why are gamers complaining? New thing is scary

The reaction to DLSS 5 online has been interesting, to say the least. The most vocal reactions have been negative. The term "AI slop" was thrown around quite liberally, as you might expect. However, the key theme here was about "artistic intent" -- the idea that DLSS 5 changes the art of the game so that it no longer looks the way the game developers intended it to look. I can't argue that it doesn't make the games look different, but the implication that NVIDIA just put out a bunch of demos using games by major studios without their consent or approval is laughable. Indeed, not only did studios like Bethesda and Capcom give their blessing, they worked on the DLSS 5 implementation in their games themselves! NVIDIA was also quick to point out in a pinned comment under the announcement video that DLSS 5 wasn't a "filter" or some magic on/off switch: "Important to note with this technology advance - game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic. The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied. It's not a filter - DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content."
It essentially builds on what we've seen with ray reconstruction and the original denoising process for RTX. It provides another option for the lighting in the game. Another complaint that I saw was the one leveled at generative AI products like Midjourney, where the idea is that the training was based on stolen work. Except that NVIDIA trains DLSS on extremely high-quality real-time rendering of game footage it generates itself. It's NVIDIA's supercomputer resources that make DLSS what it is, not the GPUs we buy, which have comparatively simple tensor cores to run the resulting algorithms.

The results matter more than the methods
It's all fake, my dudes

Setting all of this scrambling to prejudgment aside, my biggest frustration is with the strange purism that some people in the gaming world express when it comes to how AI is being used. The "fake frames" and "fake pixels" camps railed against DLSS in its initial form. They only want raw, native pixels like nature intended, apparently. This entire framing misses the point that the "real" game rendering technologies they are defending consist of a mountain of kludges and shortcuts game developers have invented over the years to get as close to their intent as possible. What does it matter what sort of math, or what type of processor, is responsible for the image on screen? A computer geek like me might be interested in the nuts and bolts, sure.
But as a gamer, I care about the results above all. Does it look good? Does it play well? These are important questions. And I do care about whether I'm getting the intended experience. After all, I'm a guy who buys CRT monitors and TVs so that I can see what games meant for those displays are supposed to look like. I think using existing games to showcase DLSS 5 may have been a mistake on NVIDIA's side. It might have been better to show off a game designed from the ground up with DLSS 5 as part of its rendering solution. Either way, this is a smart solution to achieve a multi-generational leap in visual fidelity. The only question is whether the AMD-powered gaming consoles most people play on will once again feel outdated at launch, just as they did when NVIDIA blindsided AMD by launching RTX and DLSS around the same time as the PlayStation 5 and Xbox Series consoles.
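As a concrete footnote to the upscaling explanation earlier in this piece: the value proposition of DLSS-style upscaling rests on simple pixel arithmetic, since per-pixel shading cost scales roughly with pixel count. The short sketch below (plain Python, no Nvidia API involved) illustrates this; the 1440p internal resolution for a 4K target is an illustrative assumption, not a documented DLSS figure.

```python
# Rough arithmetic only: rendering internally at 1440p and upscaling to a
# 4K target skips more than half of the native per-pixel shading work.
# The internal resolution chosen here is an assumption for illustration.

def pixels(width: int, height: int) -> int:
    return width * height

native_4k = pixels(3840, 2160)   # target display resolution
internal = pixels(2560, 1440)    # assumed internal render resolution

ratio = internal / native_4k
print(f"Internal render shades {ratio:.0%} of the native pixel count;")
print(f"the remaining ~{1 - ratio:.0%} is reconstructed by the upscaler.")
```

With these numbers the internal render shades about 44% of the native pixel count, which is why a learned upscaler can deliver "native-looking" 4K at a fraction of the shading cost.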
[19]
We got a first look at Nvidia's DLSS 5 and the future of neural rendering at GTC -- the results can be impressive, but there's work to do
New AI model can dramatically improve the appearance of games, but it's early days for the tech. Neural rendering -- the use of AI models to create pixels -- is already a familiar concept in real-time graphics. When you're using DLSS upscaling or frame generation, for example, many of the pixels you see are already generated rather than natively shaded, and those extra pixels and frames come at a surprisingly low computational cost via the matrix math acceleration of Nvidia's Tensor Cores. Given that almost magical cost-to-benefit ratio, research is unsurprisingly well under way to expand generative AI techniques beyond relatively transparent applications like upscaling and frame generation to replace portions of the traditional graphics pipeline as we know it today -- and possibly even in its entirety. Nvidia's DLSS 5 reveal at GTC is a startling indication of just how close we are to that future. The RT cores that debuted in Turing nearly eight years ago have brought much more lifelike lighting effects into the realm of real-time graphics, but natively shading even a subset of the pixels in a frame to a Hollywood polish remains well outside the realm of today's graphics hardware. Even the 575W, ~750mm² RTX 5090 isn't up to the task, and further scaling-up of GPU die sizes and power envelopes in pursuit of that lifelike ideal would only make it less accessible. And given AI's crowding-out of leading-edge fab capacity, next-gen gaming GPU silicon seems less and less likely to arrive any time soon. DLSS 5 is the most prominent example so far of how neural rendering offers an alternate way forward compared to brute-force increases in compute resources. Its AI model is trained to infer how certain complex features of game scenes like characters, skin, hair, and environmental lighting "should" look in the real world given certain inputs from the game engine (including, but not limited to, the color buffer and motion vectors) in combination with an input frame. 
The DLSS 5 model then uses this input data and its semantic understanding of parts of a scene to bring its appearance closer to how it might look in reality while still respecting the artistic intent embedded in environments and character models. Because it's deeply tied to the underlying game engine and assets, DLSS 5's output is consistent and predictable in ways that the prompt-driven and iterative workflow of generative AI imagery and video distinctly aren't. We had an opportunity to preview DLSS 5 in five games at GTC, and for modern games with assets built to match, DLSS 5 unquestionably improved the image quality and fidelity of the small group of titles we saw. (For a group of high-resolution side-by-side comparisons that you can really pixel-peep, check out Nvidia's launch article.) Even in games that already feature real-time ray-traced effects, like Hogwarts Legacy, flipping DLSS 5 on and off creates even more convincing lighting effects for environments and characters alike. Hogwarts students standing in front of massive sunlit windows are rendered with convincing rim lighting around the edges of their hair and clothing that's absent with DLSS 5 off. Improved ambient occlusion better darkens every fold of students' robes and every nook, cranny, and corner of Hogwarts itself. Even everyday objects like couches look better situated in scenes thanks to more accurate shadows underneath. I've also spent lots of time recently looking at Assassin's Creed Shadows as I've begun a new round of benchmarking for our GPU Hierarchy, and it's a game where RT plays a huge part in creating a rich and convincing-looking world. Even with Shadows' already impressive RT implementation, DLSS 5 makes the light and shadow playing across the game's forested vistas appear even closer to life, and it straightforwardly corrects minor rendering errors like a character's robe not properly shadowing their leg in a crouch.
Those improvements carry over to games like Starfield that never implemented ray tracing to begin with. Flipping on DLSS 5 adds considerably greater sophistication to the appearance of environments and objects (and characters' faces, but more on that in a second). That experience also holds in The Elder Scrolls IV: Oblivion Remastered, where reflections on water become more convincing, and environmental features like the spaces under wooden docks and the arches and filigrees of bridges and buildings all look better. DLSS 5 is also meant to better replicate how light interacts with human hair and skin, and nowhere are its enhancements more dramatic -- for better or for worse -- than with human faces. Nvidia says that we're not used to seeing faces of this fidelity in real-time rendering, and sometimes, the effect is breathtaking. The visages of the characters in Nvidia's Zorah demo went from looking like "a good video game" to "incredibly lifelike." And the flatly lit and (frankly) dead-eyed characters of Starfield practically come alive with DLSS 5 enabled, transforming into something resembling actual humans rather than aliens wearing human skin suits. But in Oblivion Remastered, which uses character models that still exhibit some of the awkwardness of the 2006 original, the results are more mixed. It's incredible that DLSS 5 can simply infer that flowing hair should create shadows that the game's native lighting model completely fails to cast, but when that same character's facial features are comically exaggerated, rendering their skin and hair with cinematic detail and precision can be more off-putting than immersive. The uncanny valley becomes the uncanny Grand Canyon. And that's where the advent of DLSS 5 and its reception moves into the realm of the philosophical rather than the purely technical.
Real-time graphics as a field has relentlessly pursued more photorealistic rendering ever since the advent of the first GPUs, and working in the wide gap between real life and the capabilities of our tech to reproduce it has required considerable artistic skill, taste, and judgment to partially bridge those limitations. If DLSS 5 is going to drastically narrow that gap, carelessly applying it has the potential to produce results that aren't consistent with a game's creative direction, and the inflamed community response to the results of some of Nvidia's demos so far suggests that the company and its game dev partners will need to tread carefully to avoid those pitfalls. Assuming a developer includes DLSS 5 in a title using the existing Streamline SDK, Nvidia says that the model offers controls for color grading, intensity, and masking to fine-tune its overall effect on a game's appearance. Of course, DLSS 5 will be toggleable just like upscaling and frame gen, so if you're not a fan of its implementation in a particular game, you can just leave it off entirely. And although the company acknowledges that the model could certainly be shoehorned into games by enterprising modders, the results thereafter are purely those folks' responsibility, not devs' or Nvidia's. The final open question for DLSS 5 regards its hardware requirements. The demos we saw were all running on a PC featuring dual RTX 5090s, one to run the game itself and one dedicated to accelerating the model. That's a massive amount of compute, but the company said it hasn't begun performance optimizations on the model yet, so we'll have to withhold judgment on its hardware requirements until later. Nvidia also didn't offer any indication of which of its RTX GPU architectures would be compatible with DLSS 5, either. All told, this remains an early look, but even at this stage, we're excited and cautiously optimistic for the changes that expanded uses of neural rendering hold for gaming graphics.
The fact that DLSS 5 is an AI model means that it can be continually fine-tuned and improved, just as DLSS upscaling has progressed in its capabilities over time. Given that fact, Nvidia will doubtless continue to work internally and with game studios to refine DLSS 5's outputs and requirements as the tech continues to be developed ahead of its launch this fall. The company claims over a dozen games will support DLSS 5 at launch so far, and given the widespread adoption of DLSS tech generally, that number is sure to grow by leaps and bounds. From what we've seen so far, we can't wait to get our hands on it and give it a spin in a wider range of titles. Follow Tom's Hardware on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.
[20]
Hey, Nvidia: DLSS 5 can't make great art with imperfect science
Despite CEO Jensen Huang's defense of the technology as "generative control," the controversy highlights the ongoing tension between technical innovation and preserving authentic artistic expression in gaming. You ever have debates about art? I do. They're messy, particularly the ones about video games. And now Nvidia's thrown more chaos into the mix with DLSS 5. Announced at GTC 2026, this new dynamic lighting technology is described by Nvidia as "real-time neural rendering" that adds "photoreal lighting" to pixels -- on a level allowing game developers to rival Hollywood in creating realistic graphics. The public disagrees, with my colleagues among the vocal opposition. Gamers accuse DLSS 5 of "yassifying" game art, a.k.a. applying beauty filters liberally to characters and environments alike. (As @thedragonbrandy.bsky.social commented on Bluesky, "We went from raytracing to sloptracing.") For his part, fellow PCWorld editor Mark Hachman sees DLSS 5 as "the sprinkling of AI content on top of games, devaluing them in the process." Nvidia CEO Jensen Huang fired back, saying the criticism was "completely wrong." His rebuttal focused on the technical, emphasizing this new tech applies "generative control" to a game's geometry, rather than serving as post-processing. Who's right in this argument? I can't make that call, though I have my sympathies. Both sides have valid points. As they do in virtually every fight about art. Here's my theory for why: No matter the medium, art can be broken into two main pieces -- "science" (that is, technical execution) and emotion. You can't have only one or the other. You need science to provoke a reaction in your audience, to capture them with skills only a master artist can execute. The more varied and refined your technique, the more layered and powerful the response you'll draw out. But you also need emotion to hook an audience, to reach them on a deeper human level. 
The stronger you can tap these emotional buttons, the more profound the connections you make. Science helps tell the story effectively. Emotion makes people invest in that story. But weaken one and the other goes down with it. That's the heart of this clash, in my eyes. Nvidia portrays itself as aiding game development; DLSS 5 "doesn't change the artistic control" of game studios, according to Huang. Yet that view ignores what gamers see: DLSS 5 isn't working as intended. When science in art is done well, audiences don't notice. Most enjoy the experience and leave the nitpicking of technical details to enthusiasts. Had Nvidia succeeded on this front, DLSS 5 would be viewed more positively, a beneficial perk. (Maybe even one justifying the cost of an expensive graphics card to support the feature.) Instead, gamers are dissecting DLSS 5's execution to the point of questioning its necessity. Some of this may be due to how little is known about DLSS 5's implementation, and how much true control game artists have over its effects. (For example, did someone at Capcom punch up Grace's looks with more makeup and hair lowlights, despite their absence in the original Resident Evil: Requiem scene? Or is that truly DLSS 5's influence?) Still, even with that lack of info, proponents of DLSS 5 would be wise to listen, rather than call critics ignorant idiots. Non-experts in a field don't have the knowledge or the language to concretely explain what they notice. Their descriptions usually come out as feelings or get attributed to incorrect root causes. Despite that, their feedback is still valuable. A distracted audience is not an invested one -- an observation I've made many times when evaluating art myself. And here, the distraction is the very thing DLSS 5 claims to solve. Gamers don't believe what they're seeing is accurate. I personally believe in DLSS 5's goal to elevate the science of making games.
But I also agree with fans: that won't happen if the tool isn't better calibrated -- or perhaps better implemented -- by the time it launches.

In this episode of The Full Nerd, Adam Patrick Murray, Brad Chacos, Alaina Yee, and Will Smith spar over the announcement of DLSS 5 (and its reception), Microsoft's Project Helix, and Intel Arrow Lake Refresh details. We go a full 90+ minutes on Nvidia's new lighting tech alone -- and we weren't the only chatty ones during the stream. I pulled out some of my favorite comments:

@brandontrost8888: dlss5 looks photorealistic. it's definitely not AI slop. Truly game changing
@DerxWiedergaenger: people are angry because it looks like garbage. It entirely removes shadows, overwrites lighting entirely. Ultimately, it just looks like someone turned on vivid mode on their TV.
@puretrack06: Thomas the tank engine mod for all

A lot of debate happened with little consensus, but we managed to agree Nvidia should have called this anything but DLSS 5. (Where's the super sampling?) Also, we agree our viewers are pretty great, because they made the image below while we were talking. Missed our live show? Subscribe now to The Full Nerd Network YouTube channel, and activate notifications. We also answer viewer questions in real time! Don't miss out on our other shows too -- you can catch episodes of Dual Boot Diaries, The Full Nerd: Extra Edition, and Expedition: Handheld through our channel! And if you need more hardware talk during the rest of the week, come join our Discord community -- it's full of cool, laid-back nerds.

This week's broad nerd news

DLSS 5 may have enthusiasts in a tizzy, but plenty else hit the news stream to catch attention -- or inspire shock. For me, I'm surprised about the Apple MacBook Neo's repairability. Far less so when it comes to AI creating big headaches... or software getting exploited.

* Up is down, down is up: Apple got high marks on YouTube for the MacBook Neo's repairability.
Truly, we're living in an alternative universe right now.
* A surprise to no one: OpenAI is allegedly pushing forward with its adult mode for ChatGPT, against the opinion of its advisors. I'll let Ars Technica commenter s73v3r speak for me: "Why are we allowing a product that is so dangerous on the market? We outlawed Lawn Darts for less."
* Browser extension danger: My colleague Michael Crider discovered a Chrome extension had been stealing his data for years.
* Better living through science: Researchers believe they may be able to pinpoint six different types of depression through functional MRI (fMRI) imaging. Practical application of the knowledge may be slower to follow, but perhaps eventually people will have to suffer less to find the right treatment.
* Brown town: Noctua fans, your time has come. No need to be subtle with your love for chocolate brown -- you can now stuff your whole PC into a case with that aesthetic.
* Is Minority Report our future? Jailed for almost six months due to faulty AI facial recognition today. Perhaps jailed for thinking of jaywalking tomorrow.
* Vintage vibes, modern performance: I loved this project that PCWorld contributor Jared Newman undertook -- creating your own personal radio station to pair with an old radio. (Warning, this article commits mild violence by reminding us 1976 tech is now vintage.)
* Reboot regularly? If DarkSword-style exploits become more common, I may stop focusing on persistent uptime with my devices.
* That's not the Newegg I know: Major congratulations to this shopper, who scored an accidental 91 percent discount on a full system build due to a pricing error. And got away with it.
* The right kind of shade: Intel has partnered with Microsoft to launch its Intel Graphics Shader Distribution Service, which stores precompiled shaders in the cloud to speed up load time in games -- up to 37x.
* Ancient band-aids: Pretty cool to realize what alternatives exist to today's commodities -- and likely how they even came about as solutions.

As much as I've talked about DLSS 5 this week, I actually could debate it even further. I'm very interested to see what we learn about this tech in the future -- and how public conversations help shape its direction. Catch you all next week! Alaina

This newsletter is dedicated to the memory of Gordon Mah Ung, founder and host of The Full Nerd, and executive editor of hardware at PCWorld.
[21]
I saw DLSS 5 running across multiple games. It's not a face filter.
Serving tech enthusiasts for over 25 years. TechSpot means tech analysis and advice you can trust.

Sounding off: I went hands-on with Nvidia's DLSS 5 across multiple games at GTC, and the "it's just a face filter" take isn't the right one. The improvements to shadows, water, foliage, clothing, and even a coffee maker in Starfield are just as impressive as the character enhancements. Here are my full impressions, including some details on the dual-GPU demo setup and the developer control story that I think matters quite a bit.

Ryan Shrout is a longtime technology analyst and industry veteran who has spent over two decades covering PC hardware, graphics, and semiconductors. He previously led technical marketing at Intel and was the founding editor of PC Perspective. He is currently President and GM at Signal65. You can follow him on X @ryanshrout.

Nvidia dropped DLSS 5 at GTC 2026 this week, and the internet already has opinions. I was in the room and I went hands-on. Not watching a sizzle reel, not scrubbing through a carefully curated 30-second trailer, but sitting in front of multiple games with DLSS 5 toggling on and off in real time. Hogwarts Legacy. Starfield. Assassin's Creed Shadows. Oblivion Remastered. The Zorah tech demo. The visual improvements are significant. Not incremental. Significant. But if you've been scrolling social media, you'd think Nvidia just shipped an Instagram beauty filter for video games. And I get why that's the first reaction. But it misses the true picture by a wide margin.

Why Faces Get All the Attention

We've had photorealistic environments in games for a while now. Water reflections, volumetric lighting, incredibly detailed cityscapes and forests. The hardware and the rendering techniques have gotten us to a place where environments can look stunning under the right conditions. But faces have been the holdout.
Getting a human face to look truly photorealistic in real time has been one of the most expensive problems in computer graphics from a compute standpoint. Subsurface scattering on skin, the way light interacts with individual strands of hair, the micro-expressions that make a character feel alive rather than like a wax figure. All of that requires an enormous amount of rendering horsepower. I've probably seen ten different "floating head" tech demos over the course of my career. That's not an exaggeration. They're always a single head with no hair, no body, no environment, because rendering a photorealistic face at that level of quality is so expensive that it can only be done in isolation. You never see it inside an actual game, because the performance budget won't allow it. (Note: these are photos taken of a screen, so expect some glare/lighting impact. Here's a high-res demo of a similar scene from Nvidia's website.) DLSS 5 closes that gap in a pretty dramatic way. And because that's the area where the delta between "before" and "after" is most visible, that's what everyone is reacting to. The Nvidia team put it well during my demo. It's a psychological effect. You've seen environments rendered really well before. When you suddenly see a character rendered at that same photorealistic level, your brain flags it immediately. It stands out. Fair enough. But focusing only on the faces is wrong.

It's Happening Everywhere, Not Just on Character Models

What I saw in the demos was a comprehensive improvement across the entire scene. And the moment that really drove this home wasn't a face. It was a coffee maker. In Starfield, there's a countertop scene with a coffee machine, some paper towels, a cup, napkin holders. Standard environmental clutter. With DLSS 5 off, everything looks flat. The coffee maker fades into the background. Toggle it on, and suddenly the objects have shape.
The lighting wraps around them naturally. The spatial relationships between the items and the surfaces they're sitting on become clear. It goes from "assets placed in a scene" to "objects that actually belong in a room."

The same thing played out across every title. In Oblivion Remastered, the water went from good video game water to something that could pass for real, with the kind of light interaction and shimmer you'd expect from an offline render. In Assassin's Creed Shadows, the trees and distant foliage gained dramatically better depth and separation in how light moved from the canopy down through the branches. In the Zorah tech demo, which is a 300 GB courtyard scene built by 20 full-time artists, the subsurface scattering on foliage was just as impressive as anything happening on character faces. Leaves picked up that translucent glow from backlighting that is incredibly difficult and expensive to model and render through traditional means.

The AI model powering DLSS 5 is a single unified model. Same model for every game. It's not trained per-title, per-face, or per-object type. It takes the raw color buffer and motion vectors as input, analyzes the scene semantics from that single frame, and enhances the lighting and material response while staying anchored to the original 3D content. It recognizes the difference between skin and metal and water and stone and foliage, and it processes each of those materials differently based on how light should interact with them. That's not a filter. That's a fundamentally different approach to how the final image gets assembled. And it's deterministic and consistent from frame to frame, which is a hard requirement for games.
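As a rough mental model only (this is not Nvidia's implementation; the material labels, gain values, and function names below are invented for illustration), the per-material idea described above can be sketched as a lookup that applies a different light response to each semantically labeled region of a frame:

```python
# Toy sketch of per-material enhancement. In this illustration, each pixel
# carries a material label (the real model would infer this from scene
# semantics; here it is supplied directly) and receives a material-specific
# lighting gain. Purely hypothetical, not DLSS code.

# Hypothetical per-material light-response multipliers.
MATERIAL_GAIN = {"skin": 1.2, "metal": 1.5, "water": 1.4, "stone": 1.1, "foliage": 1.3}

def enhance(frame, labels):
    """frame: rows of RGB tuples (0-255); labels: same shape, material names."""
    out = []
    for row, label_row in zip(frame, labels):
        out.append([
            # Scale each channel by the material's gain, clamped to 8-bit range.
            tuple(min(255, int(c * MATERIAL_GAIN.get(m, 1.0))) for c in px)
            for px, m in zip(row, label_row)
        ])
    return out

frame = [[(100, 100, 100), (200, 10, 10)]]
labels = [["water", "metal"]]
print(enhance(frame, labels))  # [[(140, 140, 140), (255, 15, 15)]]
```

Note that a pure function of the current frame like this is deterministic: identical inputs always yield identical outputs, which is the frame-to-frame consistency property the article says is a hard requirement for games.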
The Developer Angle Matters More Than People Realize

One of the things I came away most encouraged by is the developer control story. This is critical. If DLSS 5 were a black box that slapped a one-size-fits-all enhancement over every game, the artistic intent concerns would be completely valid. But that's not what this is.

During the demo, the DLSS research team talked through the level of granularity available. Developers don't just get an on/off switch. They get intensity controls that can be dialed anywhere, not just full strength. They get spatial masking, so they can set the water enhancement to 100%, wood to 30%, characters to 120%, all independently within the same scene. They get color grading controls for blending, contrast, saturation, and gamma. All of this runs through the existing SDK, which means studios already using DLSS and Reflex have a familiar pipeline to work with.

The developer support list tells you something. Bethesda, CAPCOM, Ubisoft, Tencent, Warner Bros. Games, and others have already signed on. But what struck me more than the names was what the Nvidia team shared about the reactions inside those studios. When developers previewed the technology, their technical artists were apparently co-advocating for it internally, because it gets them closer to what they actually intended their characters and environments to look like when they were designing them in their authoring tools. Then those assets get dropped into a real-time game engine with a finite performance budget, and compromises happen. DLSS 5 lets them claw back some of what gets lost in that translation. I think that's the right framing. DLSS 5 isn't Nvidia applying its stylistic choices on top of someone else's game.
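To make the shape of those controls concrete, here is a hypothetical configuration schema modeling the knobs described above (per-material intensity masks plus global grading). Every class and field name is invented for illustration; this is not the actual DLSS SDK interface.

```python
# Hypothetical schema for the per-scene controls described in the demo:
# per-material intensity masks plus global color grading. Names invented
# for illustration; not Nvidia's actual SDK.
from dataclasses import dataclass, field

@dataclass
class GradingControls:
    blend: float = 1.0       # 0.0 = original frame, 1.0 = fully enhanced
    contrast: float = 1.0
    saturation: float = 1.0
    gamma: float = 1.0

@dataclass
class Dlss5SceneConfig:
    # Intensity per masked material region; values above 1.0 push past
    # the default effect strength.
    intensity: dict = field(default_factory=dict)
    grading: GradingControls = field(default_factory=GradingControls)

    def strength_for(self, material: str) -> float:
        # Materials without an explicit mask fall back to full strength.
        return self.intensity.get(material, 1.0)

# The example mix from the demo: water 100%, wood 30%, characters 120%.
cfg = Dlss5SceneConfig(intensity={"water": 1.0, "wood": 0.3, "characters": 1.2})
print(cfg.strength_for("wood"), cfg.strength_for("stone"))  # 0.3 1.0
```

The point of the sketch is that each region's strength is independent, so a studio could dial characters up while leaving wood nearly untouched within the same scene.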
It's providing a tool that helps developers close the gap between what they can render in 16 milliseconds and what they actually want the player to see. That's a meaningful distinction, and it's a big reason why the developer response has been positive.

The Hardware Story Is Interesting Too

The demos I saw were running on a pair of RTX 5090 GPUs. One was handling the game rendering, the other was dedicated entirely to running the DLSS 5 AI model. Nvidia was upfront that there's still significant optimization work to do, and the plan is to ship DLSS 5 running on a single GPU when it launches later this year.

But I think the dual-GPU setup itself is worth mentioning. For years, multi-GPU gaming has been effectively dead. SLI is gone. CrossFire is gone. The idea that you'd run two graphics cards for a better gaming experience felt like a relic of the mid-2000s. And yet here we are, with a legitimate use case where a second GPU running an AI workload alongside a primary rendering GPU produces a dramatically better visual result.

Is that where this ends up for enthusiasts? Probably not at launch. But the concept of dedicating GPU compute specifically to AI-driven visual enhancement, separate from the rendering pipeline, is an interesting architectural idea. It wouldn't surprise me if that becomes a real conversation again as neural rendering matures.

Where This Goes From Here

DLSS 5 is targeting a fall 2026 launch, which means we've got several months of optimization and refinement ahead. Developers are just getting their hands on it now, and they'll need time to work with the controls and dial in the right settings for their specific titles.
First-wave games include Starfield, Assassin's Creed Shadows, Resident Evil Requiem, Hogwarts Legacy, Phantom Blade Zero, The Elder Scrolls IV: Oblivion Remastered, Delta Force, and more. It's also worth noting that this works across rendering approaches. Rasterized games, ray-traced titles, and path-traced experiences all benefit. And the higher the fidelity of the input, the better the output. DLSS 5 isn't replacing good rendering. It's amplifying it.

The early social media reaction is predictable. New technology that changes how games look will always generate strong opinions, especially when AI is involved. But the knee-jerk "it's just a face filter" take doesn't hold up once you've actually seen the full scope of what DLSS 5 is doing across an entire scene, across multiple games, in real time. Go look at a coffee maker. Go look at stone textures. Go look at the way light passes through a leaf. That's where the real story is.

What do you think: is neural rendering the next big unlock for game visuals? I'd love to hear from people who have spent time with these games.

This opinion piece was originally published on X and is reproduced here with permission.
[22]
'They're completely wrong': Nvidia CEO Jensen Huang responds to DLSS 5 criticism
During the Nvidia GTC 2026 keynote, CEO Jensen Huang debuted DLSS 5, calling it the "fusion of 3D graphics and artificial intelligence." The announcement was met with immediate backlash across social media. For the unaware, DLSS is a "deep learning super sampling" feature on Nvidia graphics cards that is used to upscale the resolution of video games. We went eyes-on with version 4.5 recently, and my colleague Jason England said it was "the final piece of the puzzle that brings the vision of smooth, responsive AI fueled gameplay to life." DLSS 5 takes those engines and uses AI and neural rendering to infer how games would look in more photorealistic environments, or as one colleague put it, "They added more light."

Huang faced questions over the criticism of DLSS 5 during a Q&A at GTC 2026. Our friends at Tom's Hardware were able to ask about the backlash. "Well, first of all, they're completely wrong," Huang said. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," Huang added. To be fair to Huang, he was clear about this during the keynote and said nearly the same thing. "DLSS 5 is the GPT moment for graphics -- blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," Huang said in his introduction. Between the introduction and the Q&A, Huang had been adamant that DLSS 5 and its generative capabilities "doesn't change the artistic control." "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," he said.

The video does no favors

Part of the problem is that the demonstration video shows DLSS 5 with its max sliders turned on.
And as critiques pointed out, just about every "upgrade" from DLSS off to DLSS 5 turned on looks like someone slapped a terrible Instagram filter on top of the game. In some cases, as with Hogwarts Legacy, everything received more of a cartoon-y look. The Resident Evil: Requiem differences seemed to turn Leon and Grace into looksmaxxing influencers.

I would argue the best use of DLSS 5 is actually shown off in EA's FC26. That game is going for a more photorealistic look with the players. The lighting and skin tone improvements look particularly impressive on the Netherlands' Virgil van Dijk. Again, Tom's Guide's Jason England was able to take a look at more images that show off DLSS 5 not just on AI faces but also in environments. "DLSS 5 is absolutely a breakthrough," he said. However, he noted that he has questions, and that just running DLSS 5 on its own can take away what may have been intentional shading or lighting. Huang says it's a tool that developers can choose to use. We'll know more this fall when DLSS 5 is set to launch. And I'm certain more demos will be provided between now and release.
[23]
Nvidia Wants to Slop-ify Your PC Games With Its New AI Upscaler
You'll love DLSS 5 if you can't get enough of ugly, uncanny, and overly textured AI slop. Tired of DLSS 4.5? Nvidia said DLSS 5 is on the way, and it's bringing real-time slop-ification to PC gaming. The best I can say is that if you love uncanny and overly manicured AI images of people, you'll love what Nvidia hopes to do to your favorite game.

Nvidia announced at GTC 2026 that DLSS 5 brings forth a new "real-time rendering model" that is supposed to make in-game lighting effects look even more realistic bouncing off characters' skin. Actually, it does much more than that. Nvidia said that AI takes each game's color and motion vector and "infuses" the scene with new materials. Like many generative AI video apps, the AI is supposed to be able to keep the images consistent from frame to frame. And just like an AI video, the characters in-game look like homunculi drafted from the bargain bin of internet supermodel stock images.

Resident Evil Requiem running with DLSS 5 (left) and without DLSS 5 (right). © Nvidia

A lifetime of being betrayed by CGI-rendered and otherwise touched-up game trailers has taught me to wait until I see a game's graphics firsthand before having any knee-jerk reactions. However, since first witnessing the next update to Nvidia's DLSS and its impact on games like Resident Evil Requiem, I can't peel the rictus grimace off my face. Nvidia's blog post showcasing DLSS 5's effects on games is full of character models that appear like horrible AI-generated mockups of what we've seen in-game. While Resident Evil Requiem's characters may bear a slightly plastic, Barbie-doll appearance without the refined shadows, the DLSS 5 characters look like pure slop. A wizened old witch in Hogwarts Legacy appears with far more facial wrinkles than without DLSS 5. Bethesda's bland-faced characters in Starfield suddenly appear with pronounced eyebrows and cheekbones that stretch perilously close to the uncanny valley.
DLSS 5 will support resolutions up to 4K and should become available with existing titles like Assassin's Creed Shadows and The Elder Scrolls IV: Oblivion Remastered, and many more besides. Nvidia quotes Bethesda studio head Todd Howard saying "it was amazing" how it brought Starfield "to life." That sense of "realism" will inevitably impact some games with more stylized characters. Digital Foundry showcased video of DLSS 5 running on the Oblivion remake. Its odd-looking character models on the remaster were updated to resemble the Xbox 360-era title's alien-looking townsfolk. With DLSS 5, those same models appear more like a craggy skin texture adhered to a beaten pineapple. In Resident Evil Requiem, Grace Ashcroft, one of the game's dual protagonists, appears more like AI's demented imagination of a supermodel stapled over the FBI analyst's face. The game running with DLSS 5 displays how the Resident Evil series' golden boy Leon Kennedy would look if you put the prompt "grizzled horror series protagonist with a boy band haircut" into ChatGPT.

Nvidia's next DLSS update won't be around until fall, when it's fine-tuned its model. Then again, the entire point of this update is to showcase what happens when you combine real-time graphics rendering with AI. Nvidia CEO Jensen Huang said in the announcement post that DLSS 5 blends "hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression."

The existing DLSS 4.5 squeezes out competition like AMD's FSR Redstone and Intel's XeSS as one of the best upscalers available right now. It manages to preserve details like individual tree leaves and grass better than the competition. In 2026, DLSS 4.5 is one of the main reasons to get a GPU like Nvidia's RTX 5070 Ti or RTX 5080 compared to a cheaper and more widely available AMD Radeon RX 9070 XT. Good luck finding a 4K-ready Nvidia GPU for any price below $1,000.
Nvidia has been refocusing its entire business to promote AI, and that seems to be infiltrating its gaming business more and more. The next big update to DLSS will offer more AI-generated frames with upgrades to 6x frame generation that can match a monitor's refresh rate. PC gamers already have a hard time coming to terms with frame interpolation technology, designating them as "fake frames." The gaming crowd won't likely accept slop-ified characters anytime soon.
[24]
'I could see where they're coming from, I don't love AI slop myself' -- Nvidia CEO tries to defend DLSS 5 again, shortly after telling gamers 'they're completely wrong'
* Nvidia CEO Jensen Huang has defended DLSS 5 yet again
* Huang says he can understand where gamers are coming from, and doesn't 'love AI slop' himself
* However, Huang clarifies that DLSS 5 'doesn't change anything', rather it enhances every frame in games

Nvidia has been on the receiving end of a major backlash from gamers since its DLSS 5 reveal, and its CEO has only added further fuel to the fire with additional comments regarding the controversy around its generative AI 'misunderstanding'. On Lex Fridman's podcast, Nvidia's CEO Jensen Huang addressed the bad feeling surrounding DLSS 5 once again, acknowledging the flak that's been fired at the "content-controlled generative AI" tool. If you recall, Huang's initial response to the backlash was to tell gamers "they're completely wrong" regarding how DLSS 5 works, but that tone has now mellowed - albeit a similar vibe persists.

Huang clarified: "I think their [gamers'] perspective makes sense, and I could see where they're coming from, because I don't love AI slop myself. I'm empathetic towards what they're thinking. That's just not what DLSS 5 is trying to do. It's conditioned by the textures and the artistry of the artist. It enhances every single frame, but doesn't change anything."

Worries remain

The problem is, however, that Huang's statements may fall on deaf ears, as numerous examples show DLSS 5 changing the appearance of character models considerably. Notably Grace Ashcroft in Resident Evil Requiem (as shown above), who almost looks like a completely different character when DLSS 5 is enabled. Nvidia has until later in 2026 to refine DLSS 5 and ensure it's working optimally for developers, but it seems to have glossed over the fact that many gamers aren't convinced about how much 'better' DLSS 5 looks. The main concern stems from generative AI's presence in games to begin with - in whatever form - and how it changes art styles or specific character details in the imagery shared so far.
DLSS 5 raises many more concerns besides, and one of those is the potential for game developers to rely on generative AI to 'enhance' visuals or characters to be more lifelike, rather than carrying out handcrafted work (as seen in multiple highly detailed modern games). It doesn't seem like Team Green will backtrack on DLSS 5, especially since it has months to weather the storm of the backlash from gamers. Of course, DLSS 5 will be optional for developers to use - and for gamers to enable - but the fear is that this is about the only positive that gamers can take away from this controversy right now.
[25]
NVIDIA CEO defends DLSS 5 as gamers label it an 'AI slop filter'
NVIDIA revealed NVIDIA DLSS 5 on Monday, a new rendering model that uses AI to add "photoreal lighting and materials" to video game graphics. The internet immediately hated it, criticising DLSS 5 for erasing games' intentional artistic styles by basically adding an "AI slop filter." Now CEO Jensen Huang has responded to DLSS 5's critics, stating that they are "completely wrong."

DLSS 5 was unveiled during the NVIDIA GTC keynote, accompanied by a video showcasing the AI tool in action. Displaying clips from games such as Resident Evil Requiem, Hogwarts Legacy, and Starfield, the video compared graphics in their original state to the same footage with DLSS 5 turned on, the AI giving them a more photorealistic look. "DLSS 5 is the GPT moment for graphics -- blending handcrafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," Huang said in a press release.

Unfortunately, gamers aren't sharing Huang's enthusiasm for NVIDIA's new real-time neural rendering model. Many noted that the photorealistic style DLSS 5 applied to the game footage in the demonstration video modified the games' original style, and completely changed characters' appearances to the point where some considered them virtually unrecognisable. NVIDIA's X post announcing DLSS 5 was swiftly inundated with replies criticising the tool, with derision also filling the comments of the demonstration video on YouTube. "DLSS 5 makes this look like an AI generated dating profile picture used to scam an old person in another country," said X user @GamersNexus, sharing a screenshot of Resident Evil Requiem character Grace with DLSS 5 applied. "Just looks like every other AI generated image of a 'person.' No character or soul to it. Art loses what makes it impressive when it all looks like generated slop." "Giving games an AI filter is an insult," wrote @kalaelizabeth. "Those aren't even the same characters what the hell."
"Like what's the point?" said @thethiny. "Artists spend hours perfecting a model for you to come and replace it with AI Faces? I seriously hate this so much." "This looks horrifically bad, nobody wants an AI slop filter on top of their games," wrote @SynthPotato. The post has over 107,000 likes at time of writing. NVIDIA's has 66,000.

NVIDIA has now attempted to reassure players and clarify exactly what DLSS 5 does, claiming that it won't override games' art direction. Responding to the backlash during a GTC Q&A, Huang insisted that DLSS 5's generative AI doesn't remove artistic control from game developers, but instead allows them to use it as a tool. Developers will be able to direct and "fine-tune" the AI so that it adheres to their artistic style. "[A]s I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," said Huang, as reported by Tom's Hardware. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level... This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering." The official NVIDIA GeForce YouTube account has responded to criticism in the DLSS 5 announcement video's comments section as well. "Important to note with this technology advance -- game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic," NVIDIA wrote in a pinned comment. "The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied.
It's not a filter -- DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content." Posting on X, NVIDIA's global PR director Ben Berraondo stated that the Resident Evil Requiem DLSS 5 demonstration was worked on by developer Capcom. Starfield developer Bethesda Game Studios also wrote on X that its "art teams will be further adjusting [DLSS 5's] lighting and final effect to look the way we think works best for each game. This will all be under our artists' control, and totally optional for players." "With DLSS 5 the artistic style and detail shine through without being held back by the traditional limits of real-time rendering," Bethesda Game Studios' executive producer Todd Howard said in a statement on Monday. "We're excited to work with this new technology and look to bring DLSS 5 to Starfield and future Bethesda titles."

Even so, gamers remain unappeased, turning their ire onto the executives promoting NVIDIA's AI tool. Though developers may be able to prompt or direct DLSS 5, using AI to alter an image is not the same as manually creating it themselves. "Todd realizing he doesn't have to update the prehistoric creation engine because now he can simply fake good graphics with a snapchat filter," wrote @sean_gause. "If you want to upgrade graphics, hire talented artists to do it instead of using technology that exploited YOUR OWN STUDIO'S COPYRIGHTED WORK -- which they originally did without your consent, regardless of you choosing to sell your soul now -- that will only wipe out all creative direction in exchange for mid AI slop that no one wants to look at," wrote @homemadehooplah. It isn't clear what datasets NVIDIA used to train DLSS 5, or how it may have obtained them. However, training AI models is a fraught area, with multiple companies having been accused of using stolen or misappropriated data.
NVIDIA's side-by-side comparisons showing off its DLSS 5 model have quickly transformed into a meme. Social media users are sharing images of well-known characters, comparing them with similar yet very different versions that ostensibly have DLSS 5 applied. These DLSS 5 versions are often grotesquely detailed, or have been heavily altered to have generic, highly airbrushed features reminiscent of aesthetics in pornography. Several independent game developers have gotten in on the joke as well.
[26]
Nvidia DLSS 5 might be the future of graphics, and I still want a giant "Off" button
Neural rendering is cool and all, but "yassifying" game characters... less so.

For years, photo-realism was seen as the ultimate goal for next-gen games. Ray-tracing was a solid step forward. And then came super-resolution and super-sampling upgrades. Yet, when Nvidia showcased its next great advancement for video game visuals, the fifth-gen Deep Learning Super Sampling, it stirred a furor. Interestingly, DLSS 5 is not just another version of DLSS with a few cleaner edges and a better performance story. Nvidia is pitching it as a real-time neural rendering model that can add more photoreal lighting and material detail to a game frame, which is a much bigger shift than plain upscaling. That's a bold technical swing, and a risky aesthetic one. It sounds impressive, and to be fair, part of it genuinely is. If DLSS 5 works as intended, it could help games look richer without developers brute-forcing every lighting effect the traditional way.

Announced at GTC, DLSS 5 is set to release in the fall of 2026 as Nvidia's biggest graphics leap since real-time ray tracing. But the first reaction wasn't applause, it was memes about "AI faces", "AI slop", and "yassified" characters. While Nvidia insists we're all wrong, it still raises the question: do we actually need this?

What does DLSS 5 even do, and is it actually useful?

Nvidia says DLSS 5 takes each frame rendered by the game, plus motion data, to generate more photoreal lighting and materials in real time. On paper, it should better handle things like skin, hair, and fabric. The company is also positioning it as part of a broader neural rendering future, rather than a one-off gimmick. For photoreal games chasing more realistic lighting, this is a compelling pitch. This isn't meant to be a blind, one-click beauty filter either. Developers are supposed to get full control over intensity, color grading, and masking.
DLSS 5 also integrates through Nvidia Streamline, meaning studios can decide exactly where the effect applies (and where it doesn't). There is a fair pro-DLSS 5 argument here. Traditional rendering is expensive, especially when developers want cinematic lighting without sacrificing frame rates. A tool that can bridge some of that gap could absolutely benefit players, particularly in big-budget, realistic single-player games.

If it's so advanced, why does it keep getting called an AI filter?

It didn't help that on the sidelines of GTC, Nvidia chief Jensen Huang said gamers are getting it completely wrong with DLSS 5. But if that's the case, why is the criticism so nearly unanimous? Because the criticism is not just people yelling "AI bad" on autopilot. A big reason the "AI filter" label stuck is that some of the public explanations make DLSS 5 sound closer to smart image reinterpretation than something deeply aware of a game's full 3D scene. According to Nvidia's Jacob Freeman, the system takes the rendered frame and motion vectors as inputs, while keeping the underlying geometry unchanged. That is exactly why critics are uneasy. If DLSS 5 is mainly working from a 2D frame plus motion information, then it is still guessing. And this guesswork is how you end up with that uncanny, over-baked look people immediately noticed in early demos. Once a GPU feature starts changing facial tone, lighting mood, or the overall feel of a scene, people stop seeing it as a harmless enhancement and start seeing it as aesthetic interference.

Death of artistic intent?

This is the biggest question hanging over DLSS 5. Nvidia CEO Jensen Huang has defended the tech aggressively, emphasizing that developers get full control of intensity, grading, and masking. That all sounds reassuring in theory, but my eyes say otherwise. In the demo, DLSS 5 noticeably shifts color grading and contrast in ways that make you question whether developers actually opted into those changes.
Resident Evil Requiem has one of the most jarring showcases of this tech, with Grace getting what looks like subtle makeup applied to her eyes and lips. Other examples, like Starfield, also reinforce this oddly generic look, one that adds "detail" without necessarily adding to the immersion. Going by various videos and posts online, both gamers and some developers were put off by the beauty-filter effect on character faces. And while Nvidia claims developers will have full control, some were blindsided by the announcement altogether, including people working at major studios like Capcom. One developer at Ubisoft even said, "We found out at the same time as the public." When the key selling point becomes "look how much the AI changed this," it is hard to blame people for asking whether the original art direction is being preserved or overwritten.

Are gamers overreacting or spotting a real problem early?

The community response has been messy, but it is not baseless. Reddit threads are full of people calling DLSS 5 "AI slop," with valid complaints about the tech wiping out moody lighting, homogenizing visual style, and making games look plasticky or uncanny. These blunt reactions also point to a real fear: that a single AI model could give two very different games the same glossy, Nvidia-approved look.

My take is simple: DLSS 5 is not automatically doomed, and it is not fair to dismiss the tech as worthless. But Nvidia is asking players to trust an AI layer with something more important than frame rate, which is a game's visual identity. That is a much harder sell. Until DLSS 5 proves that it can enhance games without making them feel AI-treated, the criticism is not just valid, it is necessary.
[27]
Nvidia's AI Yassification Feature Gives "Starfield" Character Grotesque "Giga-Nostril"
But wait: if you actually look at it extra carefully, you'll discover that -- yeah, it still sucks. Outraged gamers unwilling to let the controversy die down have latched onto a hilarious visual screw-up in Nvidia's presentation of its DLSS 5 feature that looks like an AI nose job gone wrong. In a screenshot of Nvidia's tool being applied to a character from the game "Starfield," the AI seemingly turns a facial shadow into a monstrous "giga-nostril." Is this uncharitable cherry picking? Ask Nvidia, since it decided to display this exact same screenshot in its official announcement.

"That's a nostril big enough to inhale the required amount of copium to believe that DLSS 5 will be useful," jeered one gamer in response to a Reddit post highlighting the visual carnage. "The artist couldn't express how huge this nostril is, DLSS helped," joked another, while many observed that the eye colors appeared to be slightly mismatched as well.

Nvidia announced DLSS 5 on Monday and immediately caused a volcanic eruption of outrage. Whereas previous iterations of DLSS tech focused on upscaling lower resolution graphics to boost framerates without sacrificing as much image quality, the new version uses a generative AI model to plaster often uncanny and hyperreal details onto the original imagery. Character faces look like they were Facetuned to conform to bland beauty standards. Resident Evil's Grace Ashcroft received an AI makeover in the form of hollower cheeks and poutier lips. The thrust of most of the criticism, beyond the feature looking like AI slop, was that it undermined artistic intent and effaced a game's original aesthetic. Nvidia CEO Jensen Huang struck back against this framing, calling gamers "completely wrong" -- the customer isn't always right, it seems -- and insisting that "direct control" was still in developers' hands.
Huang unloaded a bunch of jargon about how DLSS 5 "fuses controllability of the geometry and textures and everything about the game with generative AI." It's "not post-processing at the frame level," but "generative control at the geometry level." He even insisted it's not generative AI, but "content-control generative AI." The long and short of it is that Huang is trying to make it seem like DLSS 5 doesn't act like a glorified AI filter. And in its announcement, Nvidia describes DLSS 5 as taking a "game's color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content." But PC gaming YouTuber Daniel Owen released a video about an email exchange he had with Nvidia's Jacob Freeman in which Freeman admits that it essentially acts as a filter. "DLSS 5 takes a 2D frame plus motion vectors as input," Freeman says during the exchange. In other words, the feature isn't seeing 3D lighting and geometry from the game itself, and instead is working on what amounts to real-time, flat screenshots.
[28]
Resident Evil devs were 'in the dark' about DLSS 5 reveal
Nvidia's DLSS 5 controversy continues to simmer after the chipmaker showed off examples of the AI-powered technology on Monday. While Nvidia reps claimed that DLSS 5 represents "the most significant breakthrough in computer graphics since the debut of real-time ray tracing in 2018," observers were quick to point out that the technology was significantly altering the artistic intent of developers, and not for the better, as evidenced by the bizarrely yassified face of Resident Evil Requiem's Grace Ashcroft that's been making the rounds on social media. According to Insider Gaming, developers at Capcom were largely unaware the studio had entered into this partnership with Nvidia. Nvidia plans to launch DLSS 5 this fall, and the tech will support games from developers like Bethesda, Capcom, NetEase, Tencent, Warner Bros., and Ubisoft. Anonymous sources at Capcom told Insider Gaming the announcement was especially shocking given that the company has been very "anti-AI" when it comes to Resident Evil Requiem and other unannounced titles. Staff at Ubisoft were equally taken aback, with one saying, "We found out at the same time as the public." Meanwhile, Ubisoft's Charlie Guillemot praised the tech in an Nvidia press release. "Immersion is about making the world feel real. DLSS 5 is a real step towards that goal," Guillemot said. "The way it renders lighting, materials and characters changes what we can promise to players. On Assassin's Creed Shadows, it's letting us build the kind of worlds we've always wanted to." It's not uncommon for executives to keep rank-and-file staff in the dark about certain business decisions, but the Nvidia partnership may suggest a potential culture shift for Capcom. For one thing, the studio invested significant time and resources to ensure Grace and Leon felt like real human beings in Requiem. The actor who portrayed Grace, Angela Sant'Albano, recently told Polygon that the role required hours of grueling physical performance. 
She wasn't alone in a recording booth, but in a motion-capture studio with her scene partners. "It brought humanity to the game in a way that makes it feel that much more real, so the story can shine, which was wonderful for an actor. I think that's kind of the dream," Sant'Albano said. Elsewhere, Leon S. Kennedy actor Nick Apostolides told Polygon that his deep love for the series is a major reason his portrayal continues to resonate with audiences: "The OG fans also respect what I've done because they know I respect the source material, and I respect the performances that came in the past," he said. Even if studios don't intend to use DLSS 5 to alter human performances, what we've seen so far suggests that the technology certainly runs the risk of distracting and detracting from those performances. In any event, we certainly haven't heard the last of this controversy.
[29]
Game devs say Nvidia's DLSS 5 reveal blindsided them
Despite being planned for fall 2026 release, DLSS 5 already raises concerns about artistic control and whether developers want this AI-enhanced visual processing in their games. Nvidia DLSS 5 is coming later this year, adding generative "AI" features to the performance-enhancing tech. Gamers are calling the tool an "Instagram yaas filter" and "AI slop," among other, less kind terms. The way that it adds detail to faces and seems to hijack -- or replace? -- the game's natural lighting was so striking that Nvidia's CEO had to issue a rare response to the blowback. But in its brief demonstrations, Nvidia has positioned this as a developer tool to enhance visuals, something that's optional and within the artistic control of game creators. So what did the game creators themselves say before the GTC demo? Not much, it would seem, since at least some of them had no idea Nvidia would be using their game visuals to show off the technology. "We found out at the same time as the public," said an Ubisoft developer, speaking to Insider Gaming. Developers at Capcom expressed similar shock. Capcom's Resident Evil: Requiem was the showpiece example in Nvidia's one-minute demo reel, and Ubisoft's Assassin's Creed Shadows was mentioned as a launch title for when DLSS 5 is set to arrive in fall 2026. It should be pointed out that DLSS 5 is in its very early stages at the moment. It was barely a blip in Jensen Huang's GTC presentation, which was mostly about the company's more industrial "AI" hardware. Since we're months and months away from actual implementation, it's safe to assume that most of the people working on the mentioned games were not aware of the announcement beforehand, and those who were told were presumably sworn to silence by embargoes and non-disclosure agreements. Insider Gaming also doesn't mention specific names, so we don't know if they were talking to high-level producers or the most junior employees. 
The crew on PCWorld's The Full Nerd podcast discussed the announcement at length on Tuesday, just one day after the reveal. The fallout from the DLSS 5 announcement is hard to ignore. Gamers, already primed to be wary of Nvidia after half a year of skyrocketing hardware prices due to the "AI" bubble, and inundated with generative "AI" content from every angle (including new game releases big and small), aren't feeling particularly well-disposed. And that's just the preamble. The actual implementation of DLSS 5, which appears to be more of a frame-by-frame AI filter than a performance upgrade like super-sampling or frame generation, has artistic and aesthetic issues as well. PCWorld's Mark Hachman calls it "simply the sprinkling of AI content on top of games, devaluing them in the process." And, unlike previous DLSS implementations, it seems the additive features of DLSS 5 come with a resource cost rather than a boost. Nvidia's GTC demonstrations had the games running on two top-of-the-line RTX 5090 GPUs, a setup we haven't seen since the glory days of SLI. One card was running the game, another was applying the generative tech on top of it. Nvidia says it will run off a single GPU when it's ready to hit end user PCs. But the message seems clear that this will require a lot of extra power... at a time when PC hardware has never been more expensive and scarce. A situation that's largely to the benefit, and to some degree the creation, of Nvidia itself.
[30]
Nvidia's DLSS 5 Launch Sparks Meme Frenzy as Gamers Balk at AI 'Neural Rendering' - Decrypt
Viral "DLSS OFF vs ON" memes captured concerns that the tech changes artistic intent rather than simply improving performance. Jensen Huang called it the "GPT moment for graphics." The internet called it a "yassification filter" with a $1,500 GPU requirement. At GTC 2026 this week, NVIDIA unveiled DLSS 5 -- its most technically ambitious graphics feature to date, and almost certainly its most memed. Unlike previous DLSS versions, which focused on upscaling or frame generation, DLSS 5 goes full neural rendering. It takes a game's color buffer and motion vectors and then reinterprets them. Skin gets subsurface scattering. Fabric gets that cinematic sheen. Hair, lighting, shadows, all dialed up toward what NVIDIA describes as Hollywood-level photorealism, generated in real time. Think less "upscaling" and more "a second AI artist repainting your game every frame." Early demos ran on dual RTX 5090s. One GPU for the game, one for the neural model. But NVIDIA says single-GPU support is coming ahead of a Fall 2026 rollout. Big titles like Assassin's Creed Shadows, Starfield, Resident Evil Requiem, and Oblivion Remastered are already lined up. Developers can tweak intensity, masking, and colour grading to preserve their intended look. That last part turned out to be doing a lot of heavy lifting. The tech press loved it. Everyone else, not so much. Hands-on previews praised the lighting and detail as "astonishing," especially on faces and environments. Developers echoed the hype, with Starfield director Todd Howard saying it "brought [the game] to life." But the internet saw something else entirely. YouTube comments, Reddit threads, and gaming forums lit up with terms like "AI slop," "uncanny valley," and "Instagram filter gone wrong." Resident Evil Requiem's Grace Ashcroft became the flashpoint, with side-by-side comparisons showing a version players described as plastic, airbrushed, and weirdly over-enhanced. Then came the memes. 
The format hit instantly: "DLSS 5 OFF vs ON." OFF was the original art. ON was... something else. Kratos with full makeup. Patrick Star turned into a hyper-real nightmare. Even Jensen Huang got the treatment. It spread fast enough that even major creators and devs joined in. And that's the thing -- gamers have been fine with DLSS for years. Upscaling, frame gen, all of it. Because it was invisible. It helped performance without changing the art. DLSS 5 breaks that contract. This isn't just enhancing an image. It's making decisions about how that image should look. When the AI hits a character's face, it's not asking what the artist intended. It's applying its own idea of realism. That shift, from tool to taste, is what people are reacting to. Because at that point, it's not just about better graphics. It's about whose graphics they are.
[31]
Deciphering DLSS 5: PC gaming breakthrough or Nvidia's AI slop era?
For the first time in a long while, Nvidia GTC 2026 gave us some huge PC gaming news. CEO Jensen Huang introduced DLSS 5: the "fusion of 3D graphics and artificial intelligence" in his own words, and the response has been...mixed to say the least. From outlets calling it the most impressive tech they've seen in a long time to people throwing out AI slop critiques and comparing it to those face filters you get with certain smartphone cameras, opinions are all over the place. And honestly, I didn't know how to feel at first -- like that damn dress meme in 2015, my mind changed every time I looked at the comparison videos! So I did what I always do. I followed my late Grandad's advice to "sleep on it, because you'll know how you feel in the morning." And in short, DLSS 5 is absolutely a breakthrough, but game developers have until the fall to find the sweet spot for it in their games. Let me explain. What is DLSS 5? Let's explain it with pizza It feels literally like yesterday that I was talking about DLSS 4.5, but we're already marching onto the next iteration -- and it's a big one we've known was coming for a while now. "DLSS 5 is the GPT moment for graphics -- blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," Huang commented. It's a fusion of the "predictive" model that's fueled DLSS for a while now with the "probabilistic" elements of generative AI to bring photorealism to games with cinematic lighting, enhanced material depth, real-time neural rendering and temporal consistency. And all of its capabilities are controllable by the game developers, as they can tune the intensity, color and masking to find an enhancement balance that is right for them. To break this down, I'm feeling hungry and you know what that means... Back to the pizzeria I go! Think about previous versions of DLSS as a magic magnifying glass. 
I love that 8-inch pizza, but I want more of it, so DLSS takes that pizza and stretches it to look like a 16-inch XL -- making it bigger without the crust getting too thin or the cheese burning more easily (this is resolution scaling). Then every now and again, the chef slides an extra few slices into the box between every real slice you actually order, so it feels like you're eating more at a faster pace (frame generation). Now, with the fifth generation, there's an AI master chef that doesn't just stretch the pizza, it re-imagines it. With settings turned up to max, the chef looks at that cheap pepperoni slice and says "I know what you were trying to do here, but I can do better," and swaps it with artisanal, hand-cured salami and fresh buffalo mozzarella -- even though you didn't order those things. Basically, DLSS 5 has stopped trying to "show you the game better," and is now resorting to "showing you a better version of the game." Bridging the uncanny valley We can't say any of us didn't see this coming. Jensen himself talked about it at CES 2026 in a behind-closed-doors Q&A session and shot his shot at a growing "fusion between rendering and generative AI." "In the future, it is very likely that we'll do more and more computation on fewer and fewer pixels. By doing so, the pixels that we compute are insanely beautiful, and then we use AI to infer what must be around it." Huang said. He talked about the "utterly shocking and incredible" results he saw in the labs that looked like "basically a photograph interacting with you at 500 frames per second." And now we have our first glimpse at it. In some of the game videos it's a significant improvement, but in others you can start to spot some creative challenges that will surely be worked out as we close in on an official launch. Let's start with the faces (yep, we've got to talk about Grace's face in Resident Evil: Requiem). 
There was a recent brain scan study that showed our brains process "hyper-realistic AI faces" differently than real faces, with a sudden spike in activity around 600ms after seeing an image that triggers an internal feeling of uneasy mismatch. That's what I believe is happening here, and a lot more of it comes down to the cinematic lighting than I initially thought. Based on my time pixel peeping videos and image comparisons, I'd say about 60% of Grace's face tweaks here can be explained by lighting and material depth. The remaining 40% is neural rendering -- there are definitely fuller lips and sharper jawlines (not that Leon Kennedy needs it, being the smokeshow he is). In some places, this photorealism really shines -- games like EA Sports FC and Starfield truly benefit from this upgrade. But I can appreciate the view around creative interpretation vs AI creating a jarring effect. There are some other things I noticed, too. Surfaces and textures have been given a serious upgrade, but with DLSS' reinterpretation of lighting, some of it feels less stylized or moody. Take this scene from Nvidia's Zorah demo, for example. Before, there was a warmer hue and intentional shadowing to add depth, but that is re-interpreted with the cinematic lighting in a way that I feel loses the vibe a little. All in all, DLSS 5 is a diamond in the rough. I can see what the intention is, but this is all completely in the hands of devs to use however they wish. Maybe Capcom rolls back on the face tech, or Warner Bros. alters the cinematic lighting. It will take time to find the right balance. I've got some questions But of course, this is one PC gamer who's tested all the best GPUs and the tech that enables them -- pixel peeping videos and screenshots. I think it's something we all have to interact with to get a fuller understanding of DLSS 5.
And if Nvidia's reading this (hi btw), before I (hopefully) get some hands (and eyes) on time with it, I do have some questions to get a better understanding of what's going on under the hood:
* What is that real-time neural rendering and what has it been trained on?
* How big is the DLSS 5 model now, and what is the goal for it by fall? Current demos ran on two RTX 5090s -- so optimization/compression is key.
* What are the controls for more stylized games? Titles that don't necessarily benefit from photorealism with cel-shaded or more artistic graphics.
DLSS 5 outlook The timing of these announcements always makes me chuckle. I know it may be unintentional, but it just feels like once another GPU company announces an update to its upscaling/frame generation tech, Nvidia just says "hold my beer" and takes another giant step ahead of the pack. Intel XeSS 3 with multi-frame generation? How about a little DLSS 4.5. AMD FSR Diamond? Well, here's DLSS 5. And honestly, it is a breakthrough -- but I can totally understand the mixed response. In some titles, it is a generational leap forward. In others, it can feel like an AI veneer. But ultimately, the tech is here and devs can turn it up or down however they want. Nvidia isn't adding slop to games, that team is marching forward to photorealism with another feather in the cap of developers to bring their visions to life. Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds.
[32]
I really thought Nvidia's DLSS 5 was going to be smarter than this
The latest details suggest a real disconnect between the ground truth of a game and the DLSS 5 layer. We've been trying to get answers out of Nvidia about DLSS 5 for a couple days now, and we're still waiting to hear if we'll get them. But techtuber Daniel Owen has got some. Detailed in a new video, Jacob Freeman, GeForce evangelist, has provided some rather enlightening details about what DLSS 5 is actually doing. And it seems right now to be far less smart than I expected. Given the demos over at the GPU Technology Conference (GTC) this week were being run using a pair of RTX 5090 graphics cards -- one to render the games normally and another $4,000 GPU to run the DLSS 5 compute path -- it seemed like maybe there was something beyond just the AI filter it appeared to be at first blush when Jen-Hsun dropped it at the beginning of his GTC keynote this week. But no, answers direct from Nvidia itself suggest that all the current early preview iteration of DLSS 5 is using as an input is a static, 2D image. As Freeman says: "DLSS 5 takes a 2D frame plus motion vectors as input." So, unless Freeman is grossly oversimplifying things here, it really is essentially taking a screenshot of a game and applying an AI filter to it. Sure, it's impressive that Nvidia has delivered the compute pathways to allow this to be done quickly enough that it can effectively be used in real-time during a scene, and that it seems to be able to maintain consistency between those frames, too, but the actual technical elements of the DLSS 5 'enhancements' don't really sound that in-depth. The DLSS 5 model is only ever aware of the motion vectors attached to a static image (where objects in the scene have come from and where they're going) and a single 2D image. It has no understanding, beyond the flat surface of that frame, of the 3D geometry or depth of a scene, or of the specifics of any lighting found outside of the image in front of it.
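Freeman's description amounts to a very small per-frame data contract. As a rough sketch of what "screen space only" means in practice (the type and field names here are hypothetical illustrations, not Nvidia's actual API), the model's entire view of the world each frame would look something like this:

```python
from dataclasses import dataclass
from typing import List, Tuple

Pixel = Tuple[float, float, float]       # (r, g, b) rendered color
MotionVector = Tuple[float, float]       # (dx, dy) screen-space motion per pixel

@dataclass
class ScreenSpaceFrame:
    """Hypothetical sketch of the per-frame input Freeman describes.

    Everything the model sees is flat, screen-space data. Notably
    absent from this contract: a depth buffer, 3D scene geometry,
    PBR material properties, and light-source positions -- all of
    which would therefore have to be inferred from the image alone.
    """
    colors: List[List[Pixel]]                 # H x W grid of colors
    motion_vectors: List[List[MotionVector]]  # H x W grid of motion

# A 1x2 toy frame: the model receives colors and motion, nothing else.
frame = ScreenSpaceFrame(
    colors=[[(0.2, 0.2, 0.2), (0.9, 0.8, 0.7)]],
    motion_vectors=[[(0.0, 0.0), (1.5, -0.5)]],
)
```

The shape of this input is the whole argument in miniature: nothing in it carries the "ground truth" of the game world, so any lighting or material judgment has to be a guess made from the pixels.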
Freeman notes the DLSS 5 model has been trained like this, and is designed to be able to infer information about "complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast -- all by analysing a single frame." So, it all just comes down to what it can infer from a 2D image, and it is not able to be given any "ground truth" about what is actually feeding into that scene. It's apparently completely limited to screen space, and the model has zero awareness of anything that sits outside of the single image it's working on. A best guess is okay in some cases, sure, but we're talking about going down a probabilistic path for things like environmental lighting when, if you're rocking path tracing, you have very definite areas and sources of lighting. And definite lighting is an area where developers can have very definite ideas about how they want their game to look in the final reckoning. DLSS 5 isn't going to help there if it's just taking a punt at what it thinks it should look like. Owen also asks specifically about concerns around the underlying geometry and textures appearing to be materially changed by DLSS 5, as well as about Nvidia's assertions that the feature can "enhance PBR [physically based rendering] properties on materials (roughness, more realism), with more realistic interaction of light." He notes the changed hairline of a model in Starfield and the entirely problematic issue of what will forever be known as 'yassified Grace'. While Freeman notes, as Nvidia has explicitly stated before, that the underlying geometry isn't changing, that doesn't automatically mean that you're still going to see it. What seems to be happening is that the DLSS 5 model simply paints something it prefers over the top of the underlying geometry, so the point almost becomes moot. On the PBR side of things, again, things feel far more simplified than I expected.
Apparently no part of DLSS 5 hooks into the game engine, so the model has none of the hooks that could tell it what to expect from a surface -- what material it is, whether it's wet, how rough it is, etc. The only way it can "enhance PBR properties" is by 'looking' at them and taking an educated guess as to what they are. It doesn't actually have any access to what the developers have put into their world, just inference. "Materials are inferred from the rendered frame," says Freeman, noting again that there are no other inputs. The other worrying detail of the Nvidia responses concerns just what dials and levers developers have to retain artistic control over a scene. And it seems that's kinda all they are. I naïvely assumed, from my experiences with gen AI, that there would be some kind of prompt mechanism, where the developer might be able to tune the DLSS 5 model, to adjust the level of 'heat' or to rein in its wilder creative impulses, or maybe ask for certain things to be added or adjusted in a scene. But no, it seems you get a kind of slider so you can choose the intensity of the effect, using alpha blending to weight a scene more towards the original render or AI output, colour grading control, and the ability to mask off objects or parts of a scene to keep them out of DLSS 5's reach. If, as happens to Grace, a character is given what seems like a full face of makeup in a scene where that doesn't really make sense, the developers seemingly have the option to either dial the output down so you can't really see it, or turn it off entirely. They apparently just can't ask for another pass without the lip gloss. Then you have to circle back to the fears folk have expressed about potential homogeneity arising from a single model deciding what our game characters look like.
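The controls described above -- an intensity slider implemented as alpha blending, plus per-object masking -- reduce to a small amount of arithmetic. A minimal sketch of that kind of blend (the function and parameter names are my own illustration, not Nvidia's API):

```python
def blend_pixel(original, ai_output, intensity, mask_value=1.0):
    """Weight one RGB pixel between the original render and the AI output.

    intensity: 0.0 keeps the original frame, 1.0 shows the full AI output.
    mask_value: per-pixel mask; 0.0 protects a region (say, a character's
    face) from the model entirely, 1.0 leaves it fully exposed.
    """
    alpha = max(0.0, min(1.0, intensity)) * mask_value
    return tuple((1.0 - alpha) * o + alpha * a
                 for o, a in zip(original, ai_output))

# Dialing intensity down blends back toward the original art; masking a
# pixel (mask_value=0.0) keeps it untouched no matter the intensity.
dark = (0.1, 0.1, 0.1)
bright = (0.9, 0.9, 0.9)
halfway = blend_pixel(dark, bright, 0.5)   # midpoint of the two renders
```

If this is roughly the shape of the control surface, the article's complaint follows directly: a developer can attenuate or exclude the AI output, but there is no channel through which to ask the model for a *different* output.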
Sure, you're not changing the underlying geometry, but if the same DLSS 5 model is painting over your characters they surely run the risk of starting to look a lot like each other. What if there was another sad lady with cheek bones you could cut yourself on rocking a blond bob, wouldn't they look an awful lot like yassified Grace? The masking is an odd one, however. As Nick on our team points out, to be able to selectively mask objects there must be some sort of understanding of depth for DLSS 5 to be able to consistently not yassify someone or something. The more we hear about Nvidia's DLSS 5 feature, the worse it seems to get. Which is honestly counter to what I was expecting. I was hoping we'd have some insight from developers who have used it to highlight just what controls they have over the model, and how they go about retaining the artistic expression which has been at the heart of many peoples' consternation about the technology. But I am not here right now to question the ethics of its implementation -- that's a whole other topic for ire -- nor to deny the fact that I do, in some circumstances, think it looks pretty good. I like what I saw of Assassin's Creed Shadows' environments, and I'd absolutely play FC 26 with it enabled. I just thought it was doing something a little smarter behind the scenes with all that compute it's using. And maybe it still is. Maybe Jacob Freeman isn't explaining it correctly, or hasn't the clearance to actually go into detail about what DLSS 5 is technically doing beyond that raw 2D frame/motion vector input. During the GTC keynote reveal Jen-Hsun said: "We fused controllable 3D graphics, the ground truth of virtual worlds, the structured data of virtual worlds, of generated worlds. We combined 3D graphics with generative AI, probabilistic computing. "One of them is completely predictive, the other one, probabilistic yet highly realistic. The content is beautiful as well as controllable. 
This concept of fusing structured information and generative AI will repeat itself in one industry after another. Structured data is the foundation of trustworthy AI." But right now it feels very much like there is a huge disconnect between the ground truth of a given game world -- Jen-Hsun's vaunted structured data -- and the DLSS 5 frosting being layered on top -- that unstructured AI-generated data. The promised fusion feels rather more layered than I expected given the introduction. I thought the two things were being brought together in some holy union in an effort for each to benefit the other, but it's seeming like far less of a mixing than I'd hoped. However it shakes out, one thing is clear: the reveal of DLSS 5 has been a true omnishambles of an announcement. From the almost context-free drop at GTC, to yassified Grace becoming the ersatz poster child of the technology, to Jen-Hsun telling everyone they're just plain wrong, to the huge misstep in actually referring to this as DLSS 5 at all. It's all been a painful exercise in mismanagement and mismessaging.
[33]
Nvidia responds to widespread criticism of DLSS 5 by telling us we're all "completely wrong"
Nvidia has responded to the widespread criticism of its new tech known as DLSS 5, which it says will "[bridge] the divide between rendering and reality", but from what we have seen so far mostly appears to layer a gaudy AI filter over a game's original work. The newly-announced AI-powered tech includes an upscaling filter, with Nvidia stating it will allow developers to "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects". In the case of Resident Evil Requiem's Grace Ashcroft, the DLSS 5 filter appeared to alter her original appearance by enlarging her lips and giving her more makeup. Needless to say, this was all met with more than a handful of raised eyebrows, with terms like "AI slop" being bandied about. However, when asked for a comment on the criticism surrounding DLSS 5 from both professionals and fans, Nvidia CEO Jensen Huang said: "Well, first of all, they're completely wrong." Instead, the exec stressed that control over how DLSS 5 is implemented in a game lies with its developers. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," Huang furthered during a Q&A at GTC 2026, which was attended by Tom's Hardware. The CEO added those using the tech will be able to "fine-tune the generative AI" to make it match a game's visual style, claiming that DLSS 5 adds generative capability to existing game geometry, but "doesn't change the artistic control" developers have. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," Huang said, before closing: "All of that is in the control - direct control - of the game developer... This is very different [from] generative AI; it's content-control generative AI. That's why we call it neural rendering."
This is something that Bethesda has already sought to assure its community about. Yesterday, after the reveal of DLSS 5 and the subsequent backlash, the studio said it would be "further adjusting the lighting and final effect" of the tech's visuals on Starfield and beyond. "This will all be under our artists' control, and totally optional for players," Bethesda said of DLSS 5. As for when we will be able to fully judge for ourselves, DLSS 5 is coming "later this year", Nvidia has said.
[34]
Why do gamers already hate DLSS 5? Here are 3 key reasons -- and why history suggests Nvidia will win them over eventually
You're unlikely to have missed Nvidia's big reveal of DLSS 5 at GTC 2026 this week, as it caused quite some waves. Indeed, it's no exaggeration to say that a tidal wave of bad feeling swept across the internet - from Bluesky to Reddit to X - after Nvidia showcased its plans for a new "real-time neural rendering model" that polishes up lighting effects to an eye-opening degree. Such was the graphical reckoning aimed at Nvidia that CEO Jensen Huang felt compelled to take gamers to task over their attitude. I'm not sure about the wisdom of the head-on way Huang tackled the many DLSS 5 skeptics and detractors. But nonetheless, we've seen these kinds of clashes between Nvidia and gamers with DLSS before -- more than once, in fact. And history suggests that Team Green will win over the naysayers eventually. Let's look at the key complaints gamers have about DLSS 5 - I've picked out three critical stumbling blocks - and the ins-and-outs of how valid those concerns are, while turning to said history lesson. 'Thanks, I hate it' reason 1: gamers just don't like how DLSS 5 makes games look Much of the hatred that's been spilling forth on internet forums is due to the look DLSS 5 gives games in the sample screenshots shared by Nvidia (and video footage from Digital Foundry). Many gamers simply prefer the original graphics in those before-and-after comparative screenshots, and hate the effects applied by DLSS 5. A central problem here is the faces of characters looking unreal and, well, AI-generated (or to put it less subtly, as many did: "yassified, looks-maxed freaks"). It should be noted that DLSS 5 does not leverage generative AI from scratch, because as Nvidia has gone to great pains to point out, the game assets aren't changed by AI -- only the lighting. It's about polishing what's already there in the game (although there's some skepticism about that, which I'll return to later).
It's not just the 'uncanny valley' effect that's an issue here, though, as some people feel that the overall look of games with DLSS 5 applied is just too sharp, overly bright, and the colors are oversaturated, all of which adds up to an unnatural image -- even though the aim is to produce photorealistic graphics, of course. In short, the end result looks like it's AI generated, as it's all 'too much' in these respects. Or that's the feeling of many, summarized into two words we all knew were coming in a lot of the reaction here: AI slop. (Or indeed: 'Deep Learning Super Slop'). There are further concerns about glitches and artifacts introduced by DLSS 5, too, as evidenced by some of the (very limited) footage of games actually in motion. 'Thanks, I hate it' reason 2: this is messing with the art direction and ambience of games This piece of flak follows on directly from the above point, but is more about DLSS 5 warping the very feel and fabric of a game. Resident Evil Requiem was certainly seized on as a case in point here -- it's supposed to feel gritty and bleak, but that vibe is markedly altered by Nvidia's AI makeover, so it loses some of the dark atmosphere. This goes beyond the application of lipstick to Grace, and alters the background lighting and its effect and relation to the whole horror theme. There are many criticisms along these lines, and I can absolutely see where they're coming from. 'Thanks, I hate it' reason 3: Nvidia's just trying to force gamers to upgrade their GPUs Another element of bad feeling I've seen is that DLSS 5 is also about Nvidia selling more graphics cards. Granted, this isn't nearly as prevalent a beef as the previous two issues, but it's still a sticking point for some. The broad assertion stems from this early demo work with DLSS 5 being run on a pair of RTX 5090 GPUs. Yes, not one, but two Blackwell flagships, with one of those graphics cards running the game itself, and the other applying DLSS 5 on top. 
This has led to some leaps to conclusions about how demanding DLSS 5 will end up being when the tech is released later this year, and how it's going to make lesser RTX 5000 GPUs sweat. So, the accusation is that this is a way Nvidia can push gamers to buy a new graphics card -- assuming they want to use DLSS 5 at all, mind.

Cutting Nvidia some slack -- and a history lesson

A lot of these criticisms are fair enough, I feel, although some more than others. I think the last point regarding the necessary GPU power is a misjudgement, though -- Nvidia has made it clear that the final implementation of DLSS 5 will run on a single graphics card. Well, of course it will. Team Green could hardly bring this out if it didn't work okay on a single GPU (and presumably away from 4K resolution, it'll be less demanding, too). How it might work on Nvidia graphics cards below the RTX 5080, though, or whether mainstream usage of DLSS 5 is really designed with the RTX 6000 range in mind, is another matter -- those kinds of doubts remain about where this tech will land. Overall, we've got to assume that Nvidia knows what it's doing scheduling this launch for later in 2026. More broadly, we must remember that DLSS 5 is still in 'early preview', which suggests a lot more honing is still to come. This is why two flagship GPUs are needed at this point, and doubtless why we're seeing some glitching -- or overly heavy-handed implementations of the AI-powered lighting effects. The launch incarnation of DLSS 5 is likely to take a lighter-touch approach, I'd guess, especially given the reaction to this sneak peek at the tech. Nvidia has time to adjust and calibrate here, and I'd expect this to happen. What's more of a worry for me is the whole nest of issues around changing the art direction or vibe of a game -- although Nvidia has stressed very much that developers will have control over the end result with DLSS 5.
CEO Jensen Huang defended the tech as 'content-control' using AI, meaning that DLSS 5 just polishes existing assets without changing them (as opposed to it being AI generation from scratch). There are arguments about that too, as looking at the example screenshots shared, some people just don't believe the feature isn't messing with the graphics beyond merely applying fresh lighting. However, as Wccftech reports, a veteran game artist has made it clear how changing lighting radically can alter the look of a character more than you'd imagine. The same artist observed that 'most' of those raining down flames on DLSS 5 really don't know what they're talking about in terms of how the technology works. That said, others in the industry, particularly certain devs, have come down hard on what Nvidia's doing here -- and pointed to a disconnect between Team Green and some developers. As we stand here, right now, we have to hope that Nvidia's promises about the level of control that developers should be able to exercise over DLSS 5 will come good -- and that the final implementation of the tech will be quite different to the early sampling we had at GTC 2026. And here's where the history lesson comes in. Remember when DLSS was first introduced? It was roundly pilloried for the blurriness and glitches which were the baggage that came with the initial incarnation, and many gamers rebelled and felt that the frame rate increase was not worth this graphical trade-off. DLSS 1 took a lot of serious flak on this front, but by DLSS 2 - when Nvidia brought in temporal (not spatial) upscaling - Team Green fixed those issues, and gamers flocked to the tech. Still rewinding the DLSS tape, just not as far, remember when frame generation was first tabled with DLSS 3? 
That particular move from Nvidia was largely rejected and the whole 'fake frames' controversy erupted -- and while that's still a catchphrase floating around online forums today, the overall view of frame gen has changed radically. Nvidia improved frame generation considerably with DLSS 3.5, and today, it's regarded as a Good Thing™, albeit with caveats (naturally) about how far you can push this AI trickery. The likelihood, then, is that even if DLSS 5 is shot down upon its initial release, Nvidia will forge on with the feature and get it right. That might take until the next incarnation - DLSS 5.5, or DLSS 6, perhaps on RTX 6000 GPUs - but odds are it'll happen. I think people tend to forget that plenty of gamers were sure the original DLSS was a 'dumpster fire' that wasn't going anywhere, and they turned out to be very wrong. I most certainly wouldn't write off DLSS 5 yet.

Getting photoreal

All that said, I'm aware that DLSS 5 is a very different angle to what Nvidia has previously done. The technology is not about faster frame rates and smoother gaming, whether via upscaling or artificially generated frames; the premise of this latest take on Nvidia's tech is making games photorealistic. And there's another question that keeps popping up here: do we actually want photorealism for our gaming? Maybe not. Certainly some folks are vehemently opposed to the drive for photorealistic graphics. They want style and character in their gaming visuals, not hyper-realism. This is where we get into very subjective territory, mind. Of course, ultimately developers don't have to use DLSS 5. And even if they do, gamers don't have to enable it. That said, even those who swear off the tech totally, and will never use DLSS 5, will still feel irritation at the path Nvidia is taking here.
Mainly because they'll doubtless wonder what the resources being 'squandered' on traveling down this road could have achieved if turned towards what could be perceived as more productive ends. Nvidia has a tough battle ahead to gain acceptance for DLSS 5, that's clear enough, but I wouldn't underestimate Team Green given what's happened in the past.
[35]
Nvidia is reskinning games with AI. Gamers are angry about it, and wrong
Nvidia has unveiled DLSS 5, a new PC gaming technology that uses AI to re-render video games in real time. It's basically a make-it-realistic filter, affecting characters, foliage, textures, and lighting. It's but another example of how, in the age of AI, the world may never be the same. And the gaming community doesn't quite know what to think yet. While the previous versions of DLSS simply upscaled a game's resolution using AI, this version turns a tree that looks like a 3D model into a tree that looks like a real tree. It's a monumental change. And a bold move. Unsurprisingly, the gaming community is fiercely divided. While some embrace the leap in visual fidelity, a loud contingent of hardcore players is furious, claiming it destroys artistic intent. The latter camp claims it turns games into AI slop, the derogatory term that everyone loves to use now, whether it's accurate or not. It used to mean poor-quality AI-generated images or video, but that meaning has been lost, turning it into the 2026 version of the "It's Photoshop!" and "It's CGI!" whining of yesteryear.
[36]
Why everyone hates NVIDIA DLSS 5 (but will love it eventually)
From "GPT Moment" to 2D filters, here's why DLSS 5 is currently the internet's favorite punching bag Upscaling, or reconstructing frames for video games in real time, is a pretty controversial practice. Pursists balk at the idea, but users with a "weak" or mid-tier gaming system appreciate the extra fluidity that comes with it. NVIDIA does it. So does AMD. And Intel, too. But all hell broke loose when Nvidia announced the next iteration of its super-sampling tech, particularly owing to the excessively AI-fied look of the visuals, especially human faces. It's been a wild few weeks in the tech world, and if you've been following the DLSS 5 (Deep Learning Super Sampling) saga, you know it's been a rollercoaster of "Wow," "Wait, what?", and "Get that thing away from my game." Here is the breakdown of the DLSS 5 drama, from the leather-jacket-clad hype to the current "2D filter" reality. The Story So Far: The "GPT Moment" That Wasn't It all started when Jensen Huang took the stage at NVIDIA's GTC 2026 and dropped the bombshell: DLSS 5. NVIDIA wasn't just upscaling pixels anymore; they were generatively reimagining them. Jensen called it the "GPT moment for graphics," promising that AI would now handle the heavy lifting of visual realism: things like skin texture, fabric sheen, and complex lighting. Unfortunately, the hype didn't even last 24 hours. Within a few hours, the internet was flooded with side-by-side comparisons of Resident Evil Requiem and Starfield. The community's response? "AI Slop." Instead of making games look "better," DLSS 5 was "Yassifying" characters by smoothing out gritty skin textures, adding unintended makeup, and making everyone look like an Instagram influencer from 2022. Then came the "Betrayal." As reported by Insider Gaming, major game developers were blindsided. Artists at Ubisoft and Capcom reportedly found out about the DLSS 5 demos at the same time we did. 
NVIDIA scrambled with damage control, promising a "Full Creative Control" SDK with intensity sliders. But the final blow came just days ago: An email interview between YouTuber Daniel Owen and NVIDIA's Jacob Freeman revealed that DLSS 5 isn't actually tapping into the deep 3D geometry of the game. It's essentially a high-end 2D post-processing filter being laid over the screen. The "Neural Revolution" turned out to be a very expensive coat of paint.

Why "Better" Isn't Always Better

On paper, DLSS 5 sounds like magic. And in some ways, it is. If you look at a landscape or a static environment, the AI-infused shadows and highlights look objectively "cleaner." But here's the problem: cleaner isn't always the vibe. Video games are art, and art is about intention. If a developer spends three years perfecting a hazy, moody, claustrophobic hallway in a horror game, they don't want an AI coming in and "fixing" it. DLSS 5 has a habit of brightening up dark corners and scrubbing away atmospheric fog because it thinks those are "errors" to be corrected. The fact that developers were surprised by the demo is the biggest red flag. It's classic corporate hierarchy: the suits at the top say "Yes" to NVIDIA for the marketing buzz, while the actual creative teams are left in the dark. Instead, if NVIDIA had actually collaborated with the artists, it could have fed the AI 3D data models and blueprints. Imagine if the AI knew exactly where a character's scar was supposed to be, or how a specific fabric was meant to reflect light. In fact, as Veedrac on Reddit recently showcased, games that have DLSS 5 with tone-mapping actually look stunning. It proved that the tech can work, but only when a human is steering the ship. By launching it as a "black box" filter, NVIDIA basically bypassed the very people who make games worth playing. Then again, there is the elephant in the room: Data Sovereignty.
As a creative designer, why would I be okay with handing over my raw character designs and lighting maps to an AI model? We've seen how this works. The AI uses that data to "learn," and eventually, it's building things based on your hard work without you in the loop. It's a valid fear that NVIDIA is building a master engine that might one day make the "Artist" part of "Game Artist" optional. The Future Awaits Is DLSS 5 dead on arrival? Probably not. If history tells us anything, this is just NVIDIA's standard operating procedure: break things first, fix them later. Look back at 2018: Ray Tracing launched, tanked our frame rates, and looked "fine" at best. Today? It's the gold standard. In 2022, they gave us Frame Generation, and we all laughed at the "fake frames." Now? It's practically the only way to hit a playable 4K. Don't get me wrong, I'd genuinely take raw, native rasterization over this AI mess any day. I want my games rendered for real, without the digital shortcuts. But that's just not the world we live in. NVIDIA owns 95% of the market, as reported by Jon Peddie Research, which means whatever they introduce, be it good, bad, or ugly, eventually becomes the industry blueprint. Right now, DLSS 5 is stuck in its "Uncanny Valley" phase. It's awkward, over-aggressive, and currently getting slandered for being a glorified 2D filter. But eventually, NVIDIA will have to realize they can't treat a game like a flat video file. That promised SDK needs to be more than just a slider; it needs to be a bridge that lets developers lock in their artistic soul. Once DLSS 5 learns to respect the "mood" as much as the "pixels," it will change gaming forever. And we know how this ends: the industry follows NVIDIA like clockwork. We can bitch all we want today, but in two years, we'll probably be debating whether AMD's "FSR 5" is as good at "re-painting" characters as Team Green. The tech is inevitable. We just have to make sure the art doesn't get lost in the upscale.
[37]
Nvidia CEO Says Gamers Are Completely Wrong About His New AI Feature That Yassifies Games
Think Nvidia's new feature that slaps an AI filter onto your favorite games looks like garbage? Well, the company's CEO Jensen Huang says you're "completely wrong," Tom's Hardware reports. On Monday, the multitrillion-dollar gaming hardware and AI chipmaker announced a new AI-powered software feature called DLSS 5, which immediately drew widespread criticism. A dramatic step up from previous iterations of DLSS, which focused on upscaling graphics, the latest version used a generative AI model "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content." Gamers blasted a demo video shared by Nvidia, which showed snippets from games like the Resident Evil franchise being overlaid with a familiar AI sheen. There was an off-putting element of Facetuning, with characters like Resident Evil's Grace Ashcroft, a blonde woman, looking like they were straight-up yassified with trendily hollower cheeks and poutier lips. Many argued that the AI feature undermined artistic intent and was yet another example of AI slop. Some even called it "sloptracing," a play on Nvidia's ray tracing tech. Huang emphatically disagrees with these characterizations. "Well, first of all, they're completely wrong," Huang told Tom's at the publication's GTC 2026 event. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI." In the initial announcement, Huang called DLSS 5 the "GPT moment for graphics," and insisted its use would still be "preserving the control artists need for creative expression." Given the striking changes to some characters' faces and even the scenery, many had a hard time buying that promise. Yet in his response to the backlash, Huang has doubled down by emphasizing that DLSS 5 "doesn't change the artistic control," saying developers can still "fine-tune the generative AI" to match their style.
"It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," he insisted, in a jargon-filled rant. And should developers want to, he says, they could make even more dramatic changes to their games' aesthetic with the AI feature like seeing if they can create a "toon shader" or make a game look like it was "made of glass." "All of that is in the control -- direct control -- of the game developer," Haung said. "This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering." In sum: it's not generative AI. It's... generative AI? We're not convinced that it's an argument that will sway gamers, and calling your customers "wrong" is certainly a choice. Still, it's merely the latest example of Huang's perfervid zeal for AI (it didn't become the most valuable company the in the world just by selling graphics cards to gamers, after all). Late last year, he reportedly torched his managers who told employees to hold back on using AI, because in his view, you're "insane" if you don't use AI for literally every possible task.
[38]
Despite ridicule, Nvidia CEO says gamers are 'completely wrong' about its controversial AI tech
The internet has spent the last couple of days completely blasting DLSS 5, Nvidia's fancy new graphics technology that inadvertently yassifies video games. The AI company wanted DLSS 5 to be seen as the next frontier in video game visuals. Instead, it's become a meme that keeps getting compared to "AI slop." A negative public reaction like this would cause a PR campaign overhaul at most companies. But rather than admitting DLSS 5 has missed the mark, Nvidia is doubling down on its current message. DLSS 5 is a feature of Nvidia's upcoming RTX-50 line of graphics cards. The technology uses machine learning and "neural rendering" to boost the lighting of games that support it. At least, that's the idea. Footage displaying the before and after for DLSS 5 has whipped up a storm of controversy on the internet, in large part because the technology seems to alter character designs. When it's active on inanimate objects or elements like metal and water, DLSS 5 shines. On humans, though, it's completely uncanny. It's like turning on a beauty filter on TikTok. Nvidia maintains that DLSS 5 does not actually alter the games themselves. Things like textures and renders stay the same whether the graphics tech is on or off, the company claims. Nvidia has also pointed out that game developers are opting in to use its technology, and that in many cases, these studios insist that DLSS 5 is actually helping them get closer to the developer's original vision. It seems as if Nvidia considers the reaction to DLSS as a matter of misunderstanding facts. On social media, for example, Nvidia has clarified that DLSS is not actually a filter. A couple of days after the DLSS reveal, Nvidia CEO Jensen Huang officially responded to the criticisms. Per an interview with Tom's Hardware, Huang came out of the gate saying, "Well, first of all, they're completely wrong." Huang then goes on to repeat Nvidia's talking points about creative control and the nature of the technology itself. 
"It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level [...] This is very different than generative AI; it's content-control generative AI," Huang says. "That's why we call it neural rendering." The thing that suggests Nvidia misunderstands the situation is that Huang kicks off his response with the phrase, "as I have explained very carefully." And sure, some people might not understand exactly what the DLSS tech is doing on a mechanical level -- but arguing about the definition of a filter seems like a technicality. Ultimately, regardless of how Nvidia achieves its output, gamers are taking umbrage with the results. When gamers are used to seeing iconic video game characters like Resident Evil's Leon Kennedy in specific ways, they're going to have opinions about how he's depicted. If it seems like something has changed, you can't really debate someone to make them feel less weird about it. And on some level, it seems like the public has grown weary of the pervasive way AI is used to depict "realistic" visuals. The aesthetic is stigmatized now. For some, the type of hyperrealism that AI-fueled tech defaults to betrays a lack of imagination. The public isn't wrong about DLSS 5 -- they just don't like it.
[39]
Jensen Huang Softens Tone on DLSS 5 Criticism, Defends AI Role
NVIDIA CEO Jensen Huang has responded to criticism surrounding DLSS 5, adopting a more measured and understanding tone following earlier remarks that dismissed negative feedback. Speaking in a recent interview, Huang acknowledged that concerns about AI-driven visuals -- often described as "AI slop" -- are not without merit, particularly given the increasing similarity seen in some generative AI outputs. Despite this, Huang continues to defend DLSS 5, emphasizing that the technology is fundamentally different from general-purpose generative AI. According to NVIDIA, DLSS 5 operates within constraints defined by developers, preserving geometry and artistic intent rather than altering core content. The system is described as a form of "content-controlled generative AI," positioned between traditional upscaling and more advanced AI-assisted rendering techniques. The controversy stems from early impressions that DLSS 5 could function as a post-processing layer that modifies final image output. Huang rejected this interpretation, clarifying that the technology is integrated earlier in the rendering pipeline and works in coordination with the original assets created by artists. This distinction is central to NVIDIA's argument that DLSS 5 enhances rather than replaces artistic direction. NVIDIA has also hinted at future capabilities, including the possibility of applying stylistic adjustments through guided inputs or prompts. Such features could allow developers to experiment with different visual styles while maintaining consistency with the intended design. However, the company has not yet disclosed full technical details, leaving some aspects of DLSS 5 unclear ahead of its release. Huang further positioned DLSS 5 as an optional tool rather than a requirement, comparing it to previous rendering advancements such as improved shader technologies. 
Developers can choose whether or not to implement it, reinforcing the idea that DLSS remains part of a broader toolkit rather than a defining feature of modern game development. As DLSS 5 approaches release, NVIDIA faces the challenge of balancing innovation with transparency. While the company continues to highlight the benefits of AI-assisted rendering, community skepticism remains, particularly around how much control developers -- and ultimately players -- will have over the final visual output.
[40]
Nvidia's DLSS 5 isn't a tool. It's an invasion
But games are art, and art has purpose. If the GPU simply generates AI-generated content that neither the user nor developer asked for, doesn't that detract from the experience? At that point, you have to ask yourself: what's the dividing line between AI content, art, and slop that's merely being forced down your throat?

DLSS 5 isn't really DLSS at all

What we now consider "AI" began as generative AI art, where users asked services like Midjourney to produce computer-generated images via descriptive prompts. The results aren't "art" in the traditional sense, yet the output is still technically impressive. I've never looked at AI art as something to value. My home's walls are full of art that we've bought from real local artists, not drawn by a computer -- but I can still appreciate the way AI breaks down and analyzes writing in the same way that noir borrows heavily from icons like Dashiell Hammett, Hitchcock, and The Big Sleep. I've always appreciated the technical ability of generative AI to create images, but I always understood I wasn't creating "art." I was commissioning content. In the meantime, of course, "AI" evolved into actual tools, like command-line instructions via Claude Code and various features within Adobe Photoshop. Now, even Nvidia uses it. But as a number of my colleagues pointed out on the most recent The Full Nerd podcast, Nvidia's first mistake was characterizing DLSS 5 as a tool. It's not. Nvidia's Deep Learning Super Sampling (DLSS) feature is synonymous with performance improvements. You might not care how DLSS features like upscaling and frame generation work, but with those, the AI is designed to make games feel smoother ("fake frames" or not). With DLSS 5, that's not the case at all. Instead, DLSS 5 is merely a visual enhancement.
Nvidia seems to want us to appreciate DLSS 5 with all the technical admiration we'd have for a generative AI art service like Midjourney, Udio, or Runway, but also to think of it as a practical, useful tool. It's neither. Nvidia's examples suggest that DLSS 5 adds additional detail via generative AI where the original rendered graphics either suggested it or left it out altogether. In reality, the early demonstration -- and yes, it's just a demonstration -- has added an "uncanny valley" commonality to familiar video game characters, prompting calls of "AI slop." And those are just the examples Nvidia supplied. Could Far Cry 3's Vaas end up with dimples? What about Darth Vader with rouged cheeks or lipstick? Shao Kahn with dyed hair? AI can make mistakes, we're reminded. Maybe that's hyperbole... or maybe it's not. Nvidia's CEO Jensen Huang has countered that users have gotten it completely wrong and that game developers remain in control of all of the creative elements they're used to. PCWorld's Adam Patrick Murray, who saw the DLSS 5 demo first-hand, also seems convinced that we're all wrong, and that the additional AI lighting and textures won't detract at all -- possibly the opposite, in fact. That still doesn't answer the question of who exactly benefits from DLSS 5 being turned on in the first place. And whose fault is it if something visually glitches, especially if that glitch varies by PC?
[41]
Eternal Darkness dev Denis Dyack says DLSS 5 reveal was a mistake
Few technologies over the past decade or so have been as polarizing as DLSS 5 has been since its announcement. It has only been a handful of days since Jensen Huang and his iconic leather jacket took to the stage to unveil the 5th iteration of Deep Learning Super Sampling, this time with a huge helping of AI for good measure. Ever since the keynote, the gaming industry has been struggling to wrap its collective head around the technology. NVIDIA has since responded to criticism of the technology, saying it is NOT an AI slop filter and that gamers are mistaken. Now, industry experts are providing much-needed insight into the game development process and how DLSS 5 may impact it. In a recent interview with Wccftech, Eternal Darkness developer Denis Dyack had this to say about the technology. "The recent reveal of NVIDIA DLSS 5 was a mistake and needs to go back to the drawing board. The current release seems to go beyond enhancing the look of a video game to fundamentally changing its art direction. Never mind the artifacting and other AI art issues. The AAA industry is already in trouble, as it has become very difficult to justify production costs. Making things look spectacular is AAA games' greatest advantage over smaller budget games. If DLSS 5 is widely adopted, it will accelerate the AAA process's extinction, as it takes away the awe of what high-production art can bring to the table." - Denis Dyack to Wccftech What's interesting about Dyack's words is that he mentions the severe impact that this AI "filter" may have on AAA games. Photorealism is one of the key selling points of big-budget AAA games in 2026, and with the power of DLSS 5, that feature is being commoditized and made available to most games. Given the power of DLSS 5, it would be difficult for developers to justify investing time and resources in making a photorealistic AAA game in 2026, knowing that a similar result could be achieved in games with much smaller budgets using DLSS 5.
Denis Dyack also says that NVIDIA was hasty in launching DLSS 5 and that a more cautious approach would have been more beneficial. This would have given the AI technology time to mature, while also giving NVIDIA plenty of time to market the package in a more appetizing way.
[42]
Nvidia DLSS 5 reveal: PC Gamer reacts... not wholly positively
This has been one of the most controversial reveals of an Nvidia technology I can remember, and people have a lot of thoughts. At the Nvidia GTC 2026 keynote, leather-clad CEO Jen-Hsun Huang introduced an almost entirely context-free sizzle reel for its new DLSS 5 technology, coming to GPUs near you this Fall. And the reaction has been... probably not what Nvidia expected. In some quarters it's seen as transformative, in others it's seen as transformative. On the one hand it has the potential to deliver a new level of photorealism into PC games this year, but on the other it has the more obvious potential to homogenise game graphics, and most especially characters, way beyond developers' and artists' original intentions. Exhibit A: Grace Ashcroft. The discourse, on the whole, has certainly not been positive. And we certainly have opinions... Ah, what a mess. While I'm not completely opposed to the idea of generative AI being injected into my games in new and interesting ways, I couldn't help but react in horror at what Nvidia's new tech can do to character faces. I'm sure this is a personal taste thing, but the fact that Grace Ashcroft appears to have been Instagram-filtered into a completely different person is genuinely worrying. Nvidia seems keen to point out that developers will have control over just how far the AI will go in terms of sprucing things up, but turning the tech up to 11 from the get-go was always going to provoke an outcry. Not to mention the AI beauty standards angle. Plumped lips, heavy eye makeup, a chiselled "I've just had expensive surgery" jawline. It all feels a little... well, gross, if I'm honest. I do wonder what the original character artists think about what's been done to their carefully-crafted models, and whether they feel it's an improvement. And then there's the scene lighting overall.
Everything seems to have been hit with a hefty dose of the contrast stick, with what looks suspiciously like our old friend bloom making an overt appearance. Again, personal taste will factor in here -- but while some areas look to be much improved, others seem to have been tweaked by a teenager experimenting with Photoshop sliders. It's not pleasant imagery to my eyes, and that was before I saw it in motion. Combine the plasticine, overly-shiny AI faces with a moving character model, and my own un-AI-ed visage starts to screw up in YouTube thumbnail-friendly fashion. I'm sure the technology will improve with time, but there's a horribly uncanny, morphing, shifting effect to the video we've been shown to date. Distracting? Yeah, something along those lines. If the demos had been tweaked to deliver more subtle results, I think most of us would be more curious about the tech. Unfortunately, hitting games with a massive dose of AI vaseline and dubious character cosmetic surgery has provoked quite the negative response. And I, for one, cannot help but agree. It's gaming, Jim, but not as we know it -- and a major misstep in terms of its presentation. Nvidia's botched DLSS 5 reveal is undoubtedly obscuring what could be a revolutionary shift in game rendering. That ray-tracing and path-tracing are computationally intensive is very well known. But what if you could essentially insert that kind of realistic lighting into a game without the need to brute-force all those light-bouncing calculations and instead do it with AI? That's essentially the idea behind DLSS 5. For now, it's really just an idea. Nvidia's DLSS 5 demo actually required a second RTX 5090 GPU running in parallel to support what is clearly a very hefty AI model. But it's early doors for DLSS 5 and if Nvidia can squeeze the model down into something that can run on mainstream GPUs, it could be genuinely revolutionary. 
In the meantime, all of that is being lost thanks to some, at best, heavy handed AI filters being applied to game character models, presumably because Nvidia wanted to insert some superficial visual impact, to make DLSS 5 look obviously different. That was a very bad call. But let's be clear, those AI filters are not what's potentially most interesting and important about DLSS 5. You could argue Nvidia's yassification of games is pretty much the same as modders trying to make Skyrim NPCs look like AI-generated porn -- I've already seen this argument so many times -- but you'd be wrong. This ignores the huge influence Nvidia has over gaming. Modding is still comparatively niche, and mods that so dramatically reinterpret a dev's vision even more so. When Nvidia pulls shit like this, it has a gargantuan impact. This is one of the biggest, most important companies in gaming saying "Hey, games should look like this", and what "this" is is a fucking nightmare. Uncanny, creepy and unnecessary, completely circumventing the artistic vision behind a game. It's the homogenisation of videogame art, and that sucks. Like so much AI-generated slop, it just looks terrible. Like some Twitter incel 'shopping a character to make them look hotter, this soulless AI filter just ends up making characters look like dead-eyed sex dolls or rubber-faced mannequins relegated to the darkest corners of Tussauds. Anyone who thinks Nvidia's demonstration enhances realism (ignoring the fact that most games are not attempting realism) needs to go outside for a minute and look at some real humans. Because this ain't it. Any studio executive or publisher who approves of this mess has completely sold out and shouldn't be making games. They should just quit and join one of countless companies doing the AI grift -- they're always looking for ethically stunted evangelists. 
I've got to second Fraser here -- not only do I think this AI filter makes every single game look like it's been bound up in glossy, tasteless saran wrap, I also think it completely defeats the point. One thing AI bros don't get about art is that it's immensely complicated to make, especially at the kind of scale required for videogames. Take Resident Evil: Requiem, for example. Lighting has always been an art in horror, even when bound by technical limitations -- sometimes enhanced by them, as was the case with the iconic, conveniently draw-distance-reducing fog in Silent Hill 2. And it's clear some very talented people have taken great care to produce a specific brand of gloominess in Requiem. Watching Nvidia puff out its chest to inspirational music as a slow wipe takes that careful work out back and shoots it dead is so typically devoid of understanding or taste that it hits me as parody. You have to really (and I mean, really) not be paying attention to anything you're watching or looking at to go "yep, this is great." See Leon stepping out of the car, his face cast in dark, oppressive shadow. Then watch the Nvidia slop filter make him glow like he's in Moulin Rouge! Isn't this better? Don't you like this? Look, it's got more graphics in it. You could make the argument that this sort of thing might be welcome in games that, say, aren't as "artistically dense". I'm aware that certain sports games get mechanically produced and shot out of their overlords' great machine minds on a yearly basis, so maybe this kind of tripe's more welcome there. Humbug, I say. Not only is that defeatist, and also harsh on the doubtless time-starved people trying to make those games look presentable on a wicked schedule, but it's also no great excuse. We're better than this, surely.
If a game looks bad, I don't think it's worth whacking poor Grace Ashcroft with the dystopic Instagram filter (let alone the other ethical concerns with AI) just to let Starfield get away with looking a bit gray and uninteresting. I also don't find the idea that this sort of thing might be good in the hands of developers all that convincing, because here's the thing: Like a lot of AI grist, these tools are doing things that developers are, writ large, already very good at. They've been inventing their way past technical limitations for decades with design, not an amalgam of stolen work pumped into a grey goo filter. I'm kinda frustrated, a bit disappointed, but not entirely surprised by the reaction to the DLSS 5 reveal last night. AI Grace is the thread that's running through a huge number of the complaints around the technology showcase. And if that's your first experience of what DLSS 5 is doing, I can completely understand the prevalent knee-jerk antipathy. Capcom has got too excited with DLSS 5, turned all the AI controls up as far as they'll go and let the model go to town on what it thinks Ms. Ashcroft ought to look like. And it's not good, feels entirely like any other AI filter, and leaves you feeling a bit icky. I also get it if you are wholeheartedly against artificial intelligence in any of its myriad forms and want it kept as far away from your favourite hobby as possible. But I find it disappointing, though again not surprising, that there's no nuance to the discussion. I've been shut down within the PC Gamer team for expressing any feelings of positivity around this nascent technology, but Resident Evil: Requiem isn't the only game presented here; we still don't know a ton about how it's used by devs, and you can already see within the other titles that were shown off where they have been more measured in their use of the tech.
It's a day after the event, and I'm still processing what I saw, what we've been told, and now I really want to get my hands on it and talk to some of the people behind it before damning the whole enterprise to development hell. There are things in that presentation, after all, which I believe do look good. The environmental lighting on Assassin's Creed Shadows, and the effectiveness of the implementation in the Zorah tech demo, both speak to the different levels of usage that will be in the hands of the devs. Right now, I understand the fear, but don't buy the death of artistic expression argument, and until I'm shown that it is in fact just a binary on/off switch (which I don't believe it is), and that developers are going to be forced to use it, I'm not going to be convinced that this will become something that will destroy artistry in games development. The invention of the camera was demonised by painters, and yet artistic expression flourished after the first shutter went click. Developers specifically chasing a certain art style will still do that; they may just use DLSS 5 for the lighting, or for materials, or will probably eschew it altogether. Which will be a choice, as it will be for PC gamers. And I kinda like choice. But there are other issues, exemplified by Capcom's heavy-handed usage. Devs are going to have to take some responsibility to avoid a period of complete homogeneity, as we had when they all got excited about bloom and rim lighting. Then there's the performance. Right now it is seriously computationally intensive, and we do not live in a moment where that kind of computation is affordable or even accessible. If you're turning ray tracing off in today's games, you're going to be doing the same for DLSS 5. And then there's the name itself: DLSS 5. This does not feel like what I understand to be super sampling. Though maybe that's the thing. I'm an old man who likes playing games, so maybe I just don't understand technology or art.
Time to risk alienating the entire PC gaming community here -- but there has to be someone willing to bring some balance to the force. Look, I hate AI too, especially AI images and videos. But not every DLSS 5 video and image here looks bad. I get the reaction to the Resident Evil and Hogwarts Legacy demo, sure, but to my eyes, at least, what we've seen of Starfield, Fifa, Assassin's Creed, and Oblivion Remastered looks great. This variation only adds credence to Nvidia's claim that "game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic." Assuming this is true -- and given the variation shown, there's no reason to assume otherwise -- the only bad decision here was to push a couple of poor examples, over-adjusted towards an unrealistic, glossy aesthetic, to the forefront. That doesn't make for a good marketing recipe when combined with a public that's been rightly primed to sniff out and protest the AI slop that many companies seem to keep championing. Now, as much as I enjoy the memes, I can see the case that the popular discussion is overlooking the things DLSS 5 does right, such as really making environmental lighting shine. However, I'd counter that the reason yassified Grace Ashcroft dominates the conversation is Nvidia's tone-deaf marketing. It is staggering to me that a company priding itself on keeping its finger on the pulse of tech would choose to showcase its snazzy new upscaling right out of the gate with gameplay footage that looks as though it's been run through a wholly unsubtle TikTok beauty filter -- and don't even get me started on the fact the live demos at GTC currently take two GPUs to do it. I can understand wanting to pull out all the stops to ensure the differences between DLSS 5 on and off are obvious to even a lay audience, but I think it's safe to say that tactic has backfired for Nvidia.
Here's hoping DLSS 5 can make a more thoughtful, better implemented second impression later this year. There's a moment in Digital Foundry's video hands-on with DLSS 5 that, to me, communicates how thoroughly we've lost the plot somewhere along the way: The tech is toggled off and on while interacting with a wood elf in Oblivion Remastered, which produces the effect of watching as a character is suddenly wracked into a sleep paralysis demon garbed in the ill-fitting skin of a human man, each added wrinkle and manufactured plane traced with a searing, lacerating sharpness. "We've never seen elves look this realistic," the commentary says. "It's crazy." And I just want us to take a moment and ask ourselves: Is a photorealistic elf actually a better one? Are we more interested in videogames as art, or as tech demos? I'm posing the question because these early looks at DLSS 5, despite Nvidia's insistence to the contrary, demonstrate a fundamental contempt for artistic intent. Nvidia's wood elf yassification might produce an image with arguably higher visual fidelity, but it's completely changed the scene in the process. The lighting isn't improved. It's different. Color tones and intensities have been warped; characters have been retrofitted with entirely different affects; the basic sense of atmosphere has been reshaped according to the arbitrary priorities and intrinsic biases of the technology. When it's intimated that this is all worth it if it's in service of photorealism, I feel compelled to note that there's a lot of dogshit photography in the world. Higher fidelity is fine, but if it's overwriting the artistry behind the image, it's not worth the cost.
[43]
Bethesda promises Nvidia's controversial DLSS 5 AI filters will be "totally optional" for players
Bethesda will be "further adjusting the lighting and final effect" of Nvidia's DLSS 5 visuals on Starfield, following the controversial AI tech's reveal. For those unaware, DLSS 5 is newly-announced AI-powered tech which Nvidia calls a "breakthrough in visual fidelity for games", and includes an optional upscaling filter which essentially 'beautifies' (I am using that term very loosely here) characters' faces and lighting. In the case of Resident Evil Requiem's Grace Ashcroft, the DLSS 5 filter appears to enlarge her lips and give her more makeup. Along with the most recent Resident Evil, another game used to showcase the tech was Starfield, with Bethesda's own Todd Howard calling DLSS 5's effect on the game "amazing". That being said, Bethesda has heard the concern amongst its community since the reveal, and released another statement assuring fans that when DLSS 5 is being used in its games, it will be done "under our artists' control", and be "totally optional" for players. "Appreciate your excitement and analysis of the new DLSS 5 lighting here," the Bethesda social media team wrote in response to a post highlighting the tech by Digital Foundry. "This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. "This will all be under our artists' control, and totally optional for players." Other responses to the post have been less diplomatic. "This is Nvidia taking a dump all over games as an art form," reads one such reply lambasting the use of AI to gloss over a creator's original work. "Games are art. This isn't". More broadly, AI remains an area of heated debate within the industry.
While many developers have flirted with the technology, some have embraced it more than others, and last year, Epic Games boss Tim Sweeney said "AI will be involved in nearly all future production", so having Steam games disclose whether they were built with AI makes about as much sense as telling us what kind of shampoo the developers use. More recently, Arc Raiders developer Embark Studios revealed it had re-recorded some of its AI-generated voice lines, acknowledging "there is a quality difference" with lines recorded by human actors.
[44]
Nvidia's CEO goes full Principal Skinner in response to DLSS 5 backlash -- says it's the gamers who are 'completely wrong'
* Jensen Huang has called gamers who are hating on DLSS 5 'completely wrong'
* The CEO noted: "This is very different than generative AI; it's content-control generative AI."
* He further observed that game developers have direct control over the tech and that they can fine-tune the generative AI to match their artistic intent

Nvidia's CEO Jensen Huang has returned fire at gamers who have been critical of DLSS 5, the freshly unveiled tech that aims to pep up the graphics of games to make them look more realistic with its RTX 5000 GPUs. Or at least that's the idea - using AI that "infuses pixels with photoreal lighting and materials" to polish up existing game assets - but many gamers feel the results look worse than the original graphics (for a variety of reasons). The criticism has been fierce from some quarters, but over at GTC 2026, Jensen Huang fired back at the detractors of DLSS 5 when questioned by Tom's Hardware. Huang pulled no punches out of the gate, saying: "Well, first of all, they're completely wrong. The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI." The CEO elaborated on how developers can fine-tune the generative AI used here to match the game's style - and how it won't interfere with the artistic control or art direction of a game. Huang stressed how it's up to developers to use DLSS 5 as they want to, and that: "All of that is in the control - direct control - of the game developer. This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering."

Analysis: not the right approach

I'm not sure that doubling down on DLSS 5 and hitting back in this way is the right strategy here, and it feels a bit too much like grabbing for a can of gasoline when dealing with an inflamed crowd of disgruntled gamers.
Also, Huang's words feel too much like playing not just with fire but with semantics, for my liking. It's 'content-control generative AI', which is very different to 'generative AI', of course, as we're all aware. Tsk, it's worlds apart, even if it, erm, doesn't sound all that different on the face of it. What exactly is Huang talking about here? It's the difference between getting AI to generate graphics from scratch, and using AI to hone existing game assets - polishing what's already there (or 'content-control'). In addition to that, the CEO is also stressing that game developers will set the boundaries of how DLSS 5 is applied and maintain artistic control in that respect. That all sounds good in theory, but when we look at the results that Nvidia shared at GTC, with a number of screenshots showing DLSS 5 off versus DLSS 5 on in a variety of games, there are some startling differences. That's particularly true with the ambience and art style - you only need look at the Resident Evil Requiem screenshot (of Grace, see above) to see that. It's also understandable that based on the material shared, there are concerns about the tech making games look overly generic - too sharply rendered, and/or leaning towards a brightness overload or oversaturated colors. Given that these are fair observations, I don't think it's helpful for Huang to flat-out call gamers 'wrong' in the way he does. I'm happy to accept that this is still very early work on DLSS 5, and the end game may look very different to what we're seeing in these glimpses of the tech at GTC - but this isn't what Huang is saying here. It feels to me like he's irritated at gamers for lashing out at DLSS 5 without fully considering what it is - or might be eventually, given that it's still in early preview - but that he's getting equally irritated himself and lashing back, which ultimately doesn't feel very constructive.
It also reminds me (and many others) of the classic Simpsons meme, where Principal Skinner worries that he might be out of touch before blaming the kids for being wrong.
[45]
Nvidia CEO Says He Hates AI Slop Too After DLSS 5 Panic
A week ago, Nvidia set fire to its gaming reputation with a DLSS 5 announcement that made the new generative AI tech look like an AI slop filter. Now CEO Jensen Huang is still trying to do damage control and put out the blaze. Originally he called angry gamers "completely wrong" about the freak out. In a new podcast interview, however, he struck a much more diplomatic tone. "I think their perspective makes sense and I can see where they're coming from, because I don't love AI slop myself," he told Lex Fridman in a new podcast episode published on March 23. "You know, all of the AI generated content increasingly looks similar and they're all beautiful and so I'm empathetic towards what they're thinking." But the man leading Nvidia's AI-fueled transformation into a $4 trillion company also continued to push back on the idea that DLSS 5 is just a slop filter with no regard for the underlying visual framework and artistry that it's using training data to remix.
[46]
Nvidia Irks Gamers With Bizarre New AI Filter. Here's Why Jensen Huang Says They're Wrong
Looking to expand in a new market, Nvidia revealed an upcoming feature that would change the artistic graphics of a video game with an AI filter. After swift pushback from gamers and critics, the company's CEO Jensen Huang is disputing the criticism. "They're completely wrong," Huang said of detractors. The company introduced DLSS 5, a new rendering model set for release in the fall that adds "photoreal lighting and materials" to game environments. The company called it its most significant graphics breakthrough since 2018. "Bridging the divide between rendering and reality, DLSS 5 empowers game developers to deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects," the company said in a release.
[47]
Nvidia and Bethesda clear the air on DLSS 5 making games look like "AI slop"
Early DLSS 5 demos sparked a backlash over the AI-altered visuals, but Nvidia and Bethesda say developers remain in full control and the effect is both adjustable and optional. Nvidia's newly announced DLSS 5 is already facing backlash, with some gamers calling its visuals "AI slop" that overrides a game's original art style. Now, Nvidia and Bethesda are stepping in to clarify how the tech actually works and who controls it.

What exactly is DLSS 5, and why is it controversial?

DLSS 5 is Nvidia's next-gen graphics tech that introduces neural rendering, which uses AI to enhance visuals in real time. Early demos show the technology dramatically altering the appearance of existing games, giving them a more photorealistic look that is a far cry from their original aesthetic. While the visuals may look appealing to some, others argue that the technology feels less like rendering and more like an AI filter being applied on top of games. Across Reddit and X, reactions have been sharp. Some users say the tech "paves over the original art direction" and makes games look homogenized, while others describe it as an "AI filter" that alters faces, lighting, and materials in a way that feels unnatural. There are also concerns about consistency, with users pointing out issues like facial features subtly changing or lighting appearing overly harsh or artificial. Some believe DLSS 5 could reduce the need for manual art direction, pushing games toward a generic, AI-generated look. The debate has also resulted in a string of memes on X, with side-by-side "DLSS on vs off" comparisons poking fun at how dramatically the tech alters a game's visuals.

What do Nvidia and Bethesda have to say?

In response, Nvidia has reassured gamers that developers will retain "full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetics. The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied.
It's not a filter - DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content." Bethesda echoed that stance, calling current demos an early look and confirming its art teams will continue refining the effect. The studio says the final implementation will be under artists' control and remain optional for players. For now, DLSS 5 is off to a mixed start. While the tech promises better visuals, early reactions show players are wary of how much it changes a game's original look. How developers choose to use it will ultimately decide how it's received.
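Mechanically, the controls Nvidia describes (intensity, color grading, masking) imply that any AI-generated pass is blended back into the engine's own frame rather than replacing it outright. Here is a minimal sketch of that blending idea in Python with NumPy; every name and parameter below is hypothetical, and none of this is the actual DLSS SDK:

```python
import numpy as np

def apply_neural_enhance(frame, enhanced, intensity=0.5, mask=None):
    """Blend a hypothetical AI-enhanced pass back into the source frame.

    Illustrates the kind of artist controls described for the DLSS 5 SDK
    (intensity and masking): the output stays anchored to the source
    pixels, and a mask can exclude regions (e.g. faces) from the effect
    entirely. All names here are illustrative, not a real API.
    """
    frame = frame.astype(np.float32)
    enhanced = enhanced.astype(np.float32)
    if mask is None:
        mask = np.ones(frame.shape[:2], dtype=np.float32)
    # Per-pixel blend weight: global intensity scaled by the artist mask.
    w = intensity * mask[..., None]
    return (1.0 - w) * frame + w * enhanced

# A 2x2 toy "frame": the source is dark, the AI pass brightens everything.
src = np.zeros((2, 2, 3))
ai = np.full((2, 2, 3), 100.0)
# Mask off the left column so the effect never touches it.
mask = np.array([[0.0, 1.0], [0.0, 1.0]])
out = apply_neural_enhance(src, ai, intensity=0.5, mask=mask)
# Left column stays at the source value; right column is half-blended.
```

With intensity at 0 or the mask zeroed everywhere, the output is exactly the source frame, which is one way to read the companies' "anchored to source content" and "totally optional" claims.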
[48]
Nvidia Ridiculed for "Sloptracing" Feature That Uses AI to Yassify Video Games in Real Time
On Monday, the multi-trillion dollar AI chipmaker unveiled its latest effort at weaving advances in AI into video games, and it immediately backfired. The feature, DLSS 5, is supposed to be a souped-up version of the deep-learning upscaling tech Nvidia has offered since 2018. The company called it its "most significant breakthrough in computer graphics since the debut of real-time ray tracing" in that same year. But the reactions to the demo footage shared have been overwhelmingly negative. Gamers and developers fumed against the announcement, calling it "slop" and a "betrayal" of games' artistic intent. Memes spread parodying the AI feature's garish aesthetic, in which an original character or person is contrasted with a "DLSS 5" image that shows the subject in an unrecognizable style. Some even gave it a harsh nickname: "sloptracing," a play on Nvidia's ray tracing tech. The reactions are warranted. Rather than just providing a little clarity to a fuzzy image, the feature looks more like a glorified Snapchat filter, varnishing the art style of your favorite games with an overwrought, generative AI finish. The effect is most noticeable when applied to faces. Iconic characters in the demo like Leon Kennedy from the Resident Evil franchise are, it's no exaggeration to say, literally yassified. According to Nvidia's announcement, DLSS 5 "introduces a real-time neural rendering model that infuses pixels with photoreal lighting and materials." It takes a "game's color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content." This AI model, it says, "is trained end to end to understand complex scene semantics such as characters, hair, fabric and translucent skin." Nvidia chief Jensen Huang was effusive about the tech's implications, calling it gaming's "GPT moment."
"DLSS 5 is the GPT moment for graphics -- blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," he said in the announcement. It's a little hard to buy Huang's promise of preserving creative expression, however, when in all of the examples shared, DLSS 5 dramatically alters the aesthetic of the games. More than that, it exemplifies how generative AI uniformly reinforces bland aesthetic norms and defaults to gooner beauty standards. (Grace Ashcroft from the upcoming Resident Evil game gets hollower cheeks, stronger cheekbones, and poutier lips.) The games no longer look like games, but like any other clip spat out by a video generating model that gets shared in AI circles with a caption like "Hollywood is cooked." Nvidia says DLSS 5 is arriving this fall -- but, it seems, only to participating games that will include Resident Evil Requiem, Starfield, Hogwarts Legacy, and Assassin's Creed Shadows. These are major titles, though, a measure of the support Nvidia says its feature has from the industry's biggest publishers and developers, like Capcom, Bethesda, Ubisoft, and Warner Bros Games.
[49]
Nvidia unveils next-generation leap in yassifying technology
What innovations should a new generation of video games bring with it? Bigger worlds? Faster load times? GPU maker Nvidia has an answer that I bet you've never thought of: What if you paid thousands of dollars for a computer graphics chip that makes it seem like you've put a Snapchat beauty filter over everything? At Nvidia GTC Live 2026 on Monday, the AI-focused company unveiled DLSS 5, which Nvidia claims is "the most significant breakthrough in computer graphics since the debut of real-time ray tracing in 2018." Reactions to that tech have been... mixed. The tech wizards at Digital Foundry recently uploaded a video that breaks down Nvidia's DLSS 5 technology, which is exclusive to its upcoming RTX 50-series computer chips. It's full of comparison shots that show what games like Resident Evil Requiem and Starfield would look like with DLSS 5 turned on and off. The tech is powered by machine learning that Nvidia describes in a press release as a "real-time neural rendering model that infuses pixels with photoreal lighting and materials." DLSS 5 does not, however, alter any of the geometry or texture assets, according to Digital Foundry. You wouldn't know that from looking at the results, however. With DLSS 5, human character designs seem to transmute into someone else entirely. It almost seems like what you'd get after prompting an AI to come up with a realistic video game character. Everything comes out outrageous and yassified. Near the end of the video, Digital Foundry's Richard Leadbetter claims that Bethesda's Todd Howard personally signed off on Starfield's DLSS visuals. He paraphrases Howard, who apparently told Nvidia that this is how he wanted Starfield to be seen by the public. "Bethesda has such a rich history pushing graphics with Nvidia, going all the way back to Morrowind, with that incredible water," Howard said in a statement. "When Nvidia showed us DLSS 5 and we got it running in Starfield, it was amazing how it brought it to life. We've played it.
We can't wait for all of you to do so as well." "I think it's fair to say it's one of the surprising, potentially disruptive, transformative next generation technologies we've seen," says Digital Foundry's Richard Leadbetter. That's one way of putting it. Most people seem to be clowning on the technology. The women at Capcom did not shed blood, sweat, and tears to gift us the hottest version of old Leon S. Kennedy just for him to get downgraded into something that looks like AI slop. "I haven't seen anything more uncanny ever in my life," one YouTube commenter writes. "[It] gave Grace completely different eyes and makeup," another opined. "Straight up detrimental to artistic intent." Digital Foundry is mostly complimentary about Nvidia's tech, especially when describing its effects on background objects. A lamp post, for example, is singled out for its convincing depiction of water on metal. According to Nvidia, the tech demo is still a work in progress. The company is expecting to implement changes and improvements. When DLSS 5 is out in the wild this fall, it will support games from developers like Bethesda, Capcom, NetEase, Tencent, Ubisoft, and Warner Bros., Nvidia says. If you somehow manage to buy the graphics card despite the shortages, don't be alarmed by what your favorite games look like. If Nvidia is to be believed, then apparently all the characters who seem uncanny under DLSS 5 were actually always meant to look like that. "On Assassin's Creed Shadows, it's letting us build the kind of worlds we've always wanted to," says Charlie Guillemot, co-CEO of Vantage Studios. That's saying... something.
[50]
Nvidia CEO Backs DLSS 5 Amid Community Backlash: Know the Timeline
Many called DLSS 5 an "AI slop filter" for its random visual enhancements

Nvidia's Deep Learning Super Sampling or DLSS branding has, for years, stood for one thing in PC gaming: squeezing more performance out of demanding games with the help of artificial intelligence (AI). DLSS 5 changes that conversation. Unveiled this week at Nvidia GTC 2026, the new version is not being pitched simply as an upscaling tool or a frame-generation feature. Instead, Nvidia says DLSS 5 uses real-time neural rendering to change how light, materials, and surfaces appear on screen, pushing game visuals closer to what the company calls "photoreal" graphics. Ironically, this shift is also the reason why DLSS 5 has become one of the most debated gaming announcements in recent years. Within hours of Nvidia's demos going live, social media users began trolling the feature, arguing that it made characters look uncanny, overly airbrushed, or similar to the kind of AI beauty filters already common in photo apps. Critics also raised a broader concern that if the technology is altering how a game looks beyond resolution and frame rate, where does that leave the original artistic direction set by developers? Here are five things you need to know about the entire saga.

What Is Nvidia DLSS 5?

In a newsroom post, Nvidia described DLSS 5 as a "real-time neural rendering model" that adds photoreal lighting and materials to scenes. Put simply, the company is using AI not just to reconstruct or generate frames, but to infer how surfaces, objects, and characters should look under more realistic lighting conditions while the game is running. Nvidia says the goal is to bridge the gap between conventional game rendering and Hollywood-style cinematic visual effects. This also marks a shift in the strategy for the technology suite. Previous DLSS iterations focused mainly on AI-led graphics upscaling, ray reconstruction, and frame generation.
DLSS 5, by contrast, appears to operate more like a neural rendering layer on top of the existing image. Nvidia demonstrated it with titles including Resident Evil Requiem, Starfield, Hogwarts Legacy, and EA Sports FC, to highlight its capabilities. It is set to arrive this fall.

When Was DLSS 5 Launched?

Nvidia unveiled DLSS 5 at the keynote session of its GTC 2026 event on March 16. The company did not launch the tech suite, but showcased a preview and shared more information about how it functions. A major focus was on the real-time neural rendering, which can save developers and publishers time and money. The timing is also relevant because DLSS 5 was announced while Nvidia is still expanding DLSS 4 support across more games. In other words, DLSS 5 is not replacing Nvidia's existing performance stack overnight. It is being introduced as the next layer in that stack, with a stronger focus on visual fidelity instead of performance uplift.

Why Social Media Users Trolled DLSS 5

The backlash was largely driven by Nvidia's own demo footage. On social media platforms, users argued that the feature made characters look too polished, too symmetrical, or simply not like themselves. Some compared the output to motion smoothing on TVs, while others called it an "AI slop filter" for games. Many also began creating memes highlighting the egregious nature of the visual upscaling showcased in the demo. One particular example highlighted by netizens was the changes added to the face of Grace Ashcroft, the protagonist of Resident Evil Requiem. In the video, after the DLSS 5 effect was applied, the character was seen with full-face makeup, which looked ill-fitted for a horror survival game. Some of the criticism also centred on the "uncanny valley" effect. Users highlighted that some characters looked strangely beautified or softened in ways that felt synthetic rather than natural, and took them out of the immersion.
Conversations online also focused on the concerning trend of AI being overused in game development. Several industry veterans voiced their opinions about AI in gaming after the DLSS 5 demo release. Take-Two Interactive CEO Strauss Zelnick spoke in favour of human creativity over AI tools in a recent interview, adding, "While technology is constantly evolving, the basic building blocks of what makes an entertainment product successful have not changed."

What the Nvidia CEO Said About the Backlash

In the hours following the keynote session, Nvidia CEO Jensen Huang responded publicly to the online criticism. Speaking to Tom's Hardware, he said people criticising DLSS 5 were "completely wrong." Huang said the technology does not take artistic control away from developers, pushing back against the idea that Nvidia is overriding a game studio's creative intent. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses the controllability of geometry and textures and everything about the game with generative AI. It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," he added.

The Bigger Picture

The online debates and the reception of DLSS 5 also highlight the wider perception of generative AI in the gaming industry. The space has become one of the biggest casualties of the disruptive technology. While on the one hand the gaming space is dealing with the ongoing RAM shortage, on the other, AI-led automation has caused multiple layoffs as well. In these conditions, Nvidia's AI-powered solution, which seemingly adds layers to gaming visuals that go beyond the vision of developers, was bound to be viewed through a pessimistic lens. However, judging the product by a single demo is short-sighted. Since DLSS 5 will not launch for several months, it makes sense to wait and see the final version of the tool before passing final judgment.
If Huang's words are to be believed, the technology works in accordance with developers' vision, and not against it.
[51]
NVIDIA DLSS 5 Sparks Debate Over AI Rendering and Visual Control
NVIDIA's introduction of DLSS 5 at GTC 2026 has triggered significant discussion across the gaming and enthusiast community, particularly regarding its shift toward generative AI-driven rendering. Unlike previous DLSS iterations that primarily focused on upscaling and frame reconstruction, DLSS 5 introduces a deeper integration into the rendering pipeline, enabling AI to influence scene composition at the geometry level. The controversy largely stems from early demonstrations, including Resident Evil Requiem, where some users reported perceived deviations from the original visual style. Critics argued that the technology appeared to alter in-game elements beyond developer intent. In response, NVIDIA CEO Jensen Huang addressed the concerns, stating that DLSS 5 is being misinterpreted. According to Huang, the technology does not arbitrarily modify visuals but instead provides developers with granular control over how AI is applied. From a technical standpoint, DLSS 5 utilizes motion vectors, color buffers, and additional scene data to reconstruct and generate frames using neural models. This represents a shift from post-process upscaling to a more integrated approach where AI participates earlier in the rendering workflow. Developers can define how geometry, textures, and lighting are influenced, ensuring that the output aligns with the intended artistic direction. NVIDIA emphasizes that DLSS 5 is not a mandatory component of the rendering pipeline. Developers can choose the level of integration, and end users retain full control over enabling or disabling the feature. Traditional rendering methods remain intact, and earlier versions such as DLSS 4.5 continue to be supported. This ensures backward compatibility and provides flexibility for both developers and players. The company positions DLSS 5 as a complementary technology rather than a replacement. 
By combining conventional rendering techniques with generative AI, NVIDIA aims to expand the range of visual possibilities while maintaining performance scalability. However, the reception indicates that acceptance will depend heavily on how developers implement the technology and whether it preserves visual consistency across different titles.
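The inputs described above (motion vectors plus color buffers feeding a neural model) are the same ingredients earlier DLSS versions use for temporal reconstruction. As a rough, hypothetical sketch of that idea only -- the function names, the nearest-neighbour warp, and the fixed blend weight are illustrative assumptions, not NVIDIA's implementation -- the reprojection-and-blend step might look like this:

```python
import numpy as np

def reproject(prev_frame, motion_vectors):
    """Warp the previous frame toward the current one using per-pixel
    motion vectors (dx, dy): the temporal-reprojection step common to
    DLSS-style reconstruction. Nearest-neighbour sampling for brevity."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    return prev_frame[src_y, src_x]

def reconstruct(current, prev, motion_vectors, history_weight=0.9):
    """Blend the reprojected history with the current frame. A real
    neural model would replace this fixed blend with learned,
    per-pixel weights."""
    warped = reproject(prev, motion_vectors)
    return history_weight * warped + (1 - history_weight) * current
```

A learned model replaces the fixed `history_weight` with per-pixel decisions, which is what lets it reject stale history and avoid ghosting; the sketch above only shows where the motion-vector input enters the pipeline.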
[52]
DLSS 5 backlash: Nvidia's CEO says gamers are 'completely wrong'
In just a day, Nvidia's DLSS 5 technology has become the hot-button issue for much of the PC and gaming world. Now Nvidia's chief executive has weighed in, claiming that everyone is "completely wrong" about the technology. At a question-and-answer session at Nvidia's own GPU Technology Conference, chief executive Jensen Huang said that "as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI." Huang went on to say of the controversy: "They're completely wrong." Nvidia's DLSS 5 has sparked controversy because it essentially applies a generative AI filter to computer graphics. Nvidia describes DLSS 5 as a "real-time neural rendering model that infuses pixels with photoreal lighting and materials," and a "GPT moment for graphics -- blending hand-crafted rendering with generative AI". Many users hate it. Huang dropped the announcement of DLSS 5 into a brief segment of an hours-long keynote address at the GTC conference in San Jose, most of it dealing with the cloud, AI, and powerful AI workstations. But it's the visuals that have sparked controversy, as they appear to apply a generative AI "skin" on top of the rendered game model. It's most noticeable with characters -- characters that are quite familiar to users. Even some of the small tweaks the tool applied provoked online criticism of "AI slop." Huang claimed that all of the generative AI capabilities will be subject to the control of the developers themselves. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," he told the audience. Huang's comments hearkened back to Apple's "Antennagate" controversy, in which users holding the iPhone 4 could interfere with the phone's antennas and degrade signal reception.
Apple chief executive Steve Jobs told users to avoid holding the iPhone in a way that would cause the interference, and the phrase "You're holding it wrong" -- though perhaps misattributed to Jobs -- became part of Apple lore. Unfortunately, Nvidia's casual introduction of the DLSS 5 technology, landing in the middle of an audience that has become extremely polarized about AI, means that Huang's statement will be brought up again and again until Nvidia fully explains what DLSS 5 is and why customers and developers need it. DLSS 5 is expected to launch this fall. PCWorld staff discussed the DLSS 5 pros, cons, and backlash for over an hour on our Full Nerd podcast this afternoon. Check out our hot takes and level-headed, nuanced takes alike in the video below.
[53]
Nvidia CEO says "I don't love AI slop myself" after giving Resident Evil Requiem's Grace a DLSS 5 makeover that was swiftly labelled AI slop
"It's about giving the artist the tool of AI, the tool of generative AI. They could decide not to use it" In what is definitely not an effort to save face after the week of ridicule that followed Nvidia's DLSS 5 makeover of games like Resident Evil Requiem, CEO Jensen Huang now says he doesn't like 'AI slop' either. Last week, Nvidia announced DLSS 5 - the next evolution of its AI-powered rendering technique - and, as a result, became the industry's main character for the next week. The tech examples showed off the likes of Starfield and Resident Evil Requiem with a yassified AI filter that effectively does the job of engagement-bait accounts on Twitter who post alternative AI slop versions whenever a female character is shown in a video game. Now, speaking to Lex Fridman, Huang - who previously said critics of the tech are "completely wrong" - changes his tune somewhat, saying, "I think their perspective makes sense and I can see where they're coming from, because I don't love AI slop myself." He adds, "You know, all of the AI-generated content increasingly looks similar, and they're all beautiful, and so I'm empathetic towards what they're thinking." He continues, "I think that [detractors] got the impression that the games are gonna come out the way the games are shipped the way they do, and then we're gonna post-process it," explaining, "DLSS is integrated with the artist, and so it's about giving the artist the tool of AI, the tool of generative AI. They could decide not to use it." The tech subsequently inspired ridicule from players, with 'DLSS 5 On' being the big meme of the month, while a number of developers also spoke against the tech, including a Death Stranding 2 developer who put it best: "No, no, no, no, no, no, no, no, no, no."
[54]
DLSS 5 only takes 2D rendered frames and motion vectors as input, not 3D game engine data, confirms NVIDIA
TL;DR: NVIDIA's DLSS 5 processes only 2D frames and motion vectors, lacking access to 3D geometry or PBR data, causing AI to alter character details and ignore artistic intent. Despite developer controls, the technology risks generic results and temporal artifacts, indicating significant improvements are needed before release. NVIDIA's latest DLSS 5 technology has been nothing if not controversial, with some hailing it as one of the most transformative leaps in real-time graphics, while others dismiss it as little more than 'AI slop'. In a conversation with YouTuber Daniel Owen, NVIDIA's Jacob Freeman has revealed a lot more about the specifics of the technology; details that would otherwise be veiled behind layers of marketing language. NVIDIA has now confirmed that DLSS 5 takes only a scene's 2D-rendered frame and its motion vectors as input. This stands in contrast to marketing claims that the technology is 'anchored to source 3D content,' a phrase that suggested a deeper understanding of the game engine. In reality, as the model sits at the end of the graphics pipeline, it only sees the 2D frame and remains blind to the 3D geometry of objects. Likewise, the model lacks access to PBR (Physically Based Rendering) properties provided by the engine. As a result, it is forced to infer what the material is supposed to look like rather than reading these properties directly from the game engine. This forces the model to rely on semantic labeling to identify clusters of pixels as eyes, cheeks, lips, etcetera. If the training data is biased toward perfect faces, the model risks reinterpreting or 'yassifying' a character's face to a generic standard, rather than preserving the developer's original intent. NVIDIA says, "the underlying geometry is unchanged," but the YouTuber showed a clear result in which the AI was caught generating, or rather hallucinating, hair and facial details that simply do not exist in the original character models. 
It seems that while the original 3D models are indeed preserved, DLSS 5 simply paints a new image over those pixels. This is likely a result of the AI's training data, where it 'decided' that a realistic version of that hairstyle required hair in those specific areas. The YouTuber also pointed out that, with DLSS 5 on, the character Grace Ashcroft from Resident Evil Requiem appeared with unintended makeup and altered facial features, completely ignoring the scene's grim context and the character's lore. When pressed on this loss of artistic intent, NVIDIA responded that developers will have access to an intensity slider to blend the AI's output with the original frame, color grading tools (gamma, saturation, and contrast), and the ability to exempt certain objects from the AI's generative pass completely. Still, developers cannot make the model context-aware or fine-tune it to better fit their artistic style. To many, the technology, in its current state, feels less like a rendering revolution and more like a glorified 'Snapchat beauty filter' that masks a game's true depth and detail in favor of what the model deems the perfect photorealistic image. Furthermore, the presence of ghosting artifacts in NVIDIA's own promotional footage suggests that without deeper, lower-level access to the 3D scene, temporal stability remains challenging. As it stands, DLSS 5 certainly has room for improvement. If NVIDIA hopes to avoid being branded with the 'AI slop' tag, it must address these concerns before the technology hits consumer GPUs this fall.
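The developer controls NVIDIA describes -- an intensity slider blending the AI output with the original frame, plus gamma, saturation, and contrast grading -- map onto standard image operations. A minimal sketch, assuming frames are float RGB arrays in [0, 1] (the function name and exact formulas are illustrative assumptions, not NVIDIA's API):

```python
import numpy as np

def apply_dev_controls(original, ai_frame, intensity=0.5,
                       gamma=1.0, saturation=1.0, contrast=1.0):
    """Blend the generative pass with the original frame, then apply
    simple grading, mirroring the slider-style controls described.
    intensity=0 keeps the original frame; intensity=1 is full AI output."""
    out = (1 - intensity) * original + intensity * ai_frame
    out = np.clip(out, 0.0, 1.0) ** gamma              # gamma curve
    luma = out.mean(axis=-1, keepdims=True)
    out = luma + saturation * (out - luma)             # saturation about luma
    out = 0.5 + contrast * (out - 0.5)                 # contrast about mid-grey
    return np.clip(out, 0.0, 1.0)
```

The per-object exemption NVIDIA also mentions would amount to a per-pixel mask replacing the scalar `intensity`, zeroing the blend wherever an exempted object was rendered.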
[55]
'If that was shown as a next-gen hardware reveal and not AI you guys would be going nuts': Epic Games lead producer says the idea DLSS 5 looks bad or detracts from art direction is 'absolutely insane'
Nvidia's DLSS 5 reveal has pulled in some very strong opinions. Since its unveiling, both Nvidia and Bethesda have clarified that developers will have creative freedom, but many developers have labelled it "slop" and "disrespectful to the intentional art direction of devs". One particularly strong voice in favour of the tech is Epic Games lead producer Jean Pierre Kellams. As shared over on X, Kellams says, "All you guys roasting DLSS 5 like it doesn't look better/is detracting from art direction are absolutely insane." "The lighting and shading improvements are bonkers. If that was shown as a next-gen hardware reveal and not "AI" you guys would be going nuts like the Watch Dogs demo." If you don't know Kellams for his work on the likes of Fortnite, you may know him for his history with Platinum Games, working on the English adaptation of Bayonetta 2, The Wonderful 101, Devil May Cry, and more. Intel alum and former owner of PC Perspective Ryan Shrout said that Kellams' post "seems like the right answer to me", and he went hands-on with the tech yesterday. He argues it's "not a face filter" and "What I saw in the demos was a comprehensive improvement across the entire scene. And the moment that really drove this home wasn't a face. It was a coffee maker." Shrout says, "One of the things I came away most encouraged by is the developer control story. This is critical. If DLSS 5 were a black box that slapped a one-size-fits-all enhancement over every game, the artistic intent concerns would be completely valid. But that's not what this is." Interestingly, when a comment inevitably brings up the yassified Grace Ashcroft Nvidia showed off, Kellams argues, "Her skin shader has much better subsurface scattering (she doesn't have the Japanese game character perfect skin). Her lips actually have creases now. Her ear stud is now catching light properly." Kellams concludes, "If I was a technical artist, I'd be begging for this right now.
It's essentially making super high resolution physically accurate lighting trivially cheap. If this is the demo, I can't wait until tech artists start really digging in." Part of his argument is that DLSS 5 could be resource-efficient and that "the technical trade offs that you have to make for performance to get that level of lighting is untenable." This echoes something Nvidia demonstrated at its RTX 50 series launch: in a demo showing off neural materials, it managed to bring down memory usage on fabric by a third, which certainly bolsters this opinion. Nvidia claims "Neural Texture Compression leverages neural networks accessed through neural shaders to compress and decompress material textures more efficiently than traditional methods." Notably, when discussing artistic intent, Kellams argues that you can't guess at intent until a director comes out and states it as such, and that some of the tone or atmosphere in games is purely a tradeoff made for certain engines or models. He claims Nvidia's Resident Evil Requiem demo "has WAY WAY WAY better lighting" and a more realistic world. "The gloominess you became accustomed to is actually a feature of lighting tradeoffs that DLSS 5 is 'fixing'", Kellams states. Though the demo that Shrout and the likes of Digital Foundry got access to was running on two RTX 5090s at max 4K settings for demonstration purposes, Nvidia has said the tech will be usable on a single GPU when it launches later this year. We'll have to go hands-on to find out the performance hit you get in return for DLSS 5. But wide adoption of DLSS 5 will largely be about how consumers react to it, and Kellams thinks that ship has already sailed. "I get that some very vocal people don't like AI. But guess what. Technology doesn't care if you like it. It is a tool. AI isn't coming. It is here. Just this morning my oncologist was telling me all the ways it is helping cancer treatment and research."
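Nvidia's Neural Texture Compression stores textures in a compact form that a small neural network decodes on the fly; the quoted one-third memory saving comes from replacing raw texels with a learned representation. As a deliberately simplified stand-in for that idea -- a low-rank factorization rather than a neural decoder, purely to illustrate the store-less/reconstruct-on-demand trade-off -- consider:

```python
import numpy as np

def compress_texture(tex, rank=8):
    """Low-rank stand-in for neural texture compression: store two small
    factor matrices instead of the full texture. NVIDIA's actual scheme
    uses small neural decoders; this toy only shows the memory/quality
    trade-off of keeping a compact representation."""
    h, w, c = tex.shape
    flat = tex.reshape(h, w * c)
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]   # (h, rank) and (rank, w*c)

def decompress_texture(factors, shape):
    """Reconstruct the full texture from its compact factors on demand."""
    us, vt = factors
    h, w, c = shape
    return (us @ vt).reshape(h, w, c)
```

For a 16x8 RGB texture at rank 1, the factors hold 40 floats instead of 384, a far more aggressive ratio than real textures would tolerate; the neural version wins by learning which detail is perceptually worth keeping.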
[56]
Nvidia reveals DLSS's AI upscaled graphics filter, gets roundly mocked by pretty much everyone on the Internet
As concerns of a growing "AI bubble" continue to pile up while half the world is on fire, tech and gaming giants are trying their hardest to push AI-enabled "innovations". Nvidia is the latest company attempting to justify its massive investment in AI with DLSS 5, an "AI-Powered breakthrough in visual fidelity for games." Though most people learned about the new tech via Digital Foundry's in-depth hands-on preview, Nvidia also shared a look at the early footage (which you can watch below) that highlights how the new AI-powered DLSS version can add an optional filter which appears to "yassify" every living being it touches. If you were dreaming about the day when Bethesda NPCs could become photorealistic sleep paralysis demons, it seems that day has finally come. Also, Resident Evil Requiem co-lead Grace Ashcroft is the victim of perhaps the worst implementation of this tech, as the DLSS 5 filter takes her face and seems to enlarge her lips, give her more make-up, and airbrush away what you can safely assume is the original vision of the developer. Somehow, the new filter even managed to make Leon S. Kennedy look bad too, which is quite the feat. "Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again," said Jensen Huang. A big statement, to be sure. The biggest shock, post-reveal, comes in the form of glowing endorsements from big triple-A figures like Bethesda's Todd Howard, who (supposedly) "can't wait" for Starfield players to apply this weird, AI-flavoured sheen to their space adventures. Other notable reactions come from Capcom's Jun Takeuchi, executive producer and executive corporate officer, who reportedly said it's okay to overwrite the creative direction of developers to help players "become even more immersed in the world of Resident Evil." All of these statements feel completely detached from the public reaction to the tech, which has been unanimously critical.
The horrifying reveal instantly sparked a DLSS on/off meme format over on social media, which even some developers are embracing. "Make every game look like a bunch of artless idiots from Nexus Mods have had a go at it," former Eurogamer video legend Jim Trinca posted after taking a gander. The footage shared by Digital Foundry isn't free of visual oddities, of course, as DLSS 5 seems wildly inconsistent in how it applies its "enhancements." For example, here's an Oblivion NPC morphing its eyes mid-conversation. "What if shadows didn't exist," posted Restart writer Imran Khan while sharing a screenshot of the Assassin's Creed Shadows bit of the video that shows tree shadows getting almost completely removed in exchange for more unnatural ambient occlusion. The frustrating part of all this is that DLSS, in its basic form, has been proven to be a very useful technology to claw back performance while retaining visual fidelity through simple image upscaling. It's the tech which is allowing the Switch 2 to hit above its weight with ports like Resident Evil Requiem or Star Wars Outlaws. Moreover, recent frame generation extras can conjure up more fluidity out of thin air (as long as performance is already in a halfway-decent spot, at least). But if the next step for the tech is this nightmare-inducing, art style-bypassing mess, it seems DLSS tech has moved away from what initially made it great. At the time of writing, Nvidia is already running damage control in the comments section of the official reveal, claiming "game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic." But, for many, it seems the damage is already done: it's hard to imagine most gamers coming around on the very basic idea of this AI "upgrade."
[57]
Gamers recoil as Nvidia touts new graphics boost
A new graphics-boosting AI technology touted by chip giant Nvidia Monday as a "GPT moment" for gaming has been blasted by players denouncing the "slopification" of favourite titles. Nvidia has surged in recent years from a creator of graphics chips for gaming to the world's most valuable company, as its technology has proved crucial for powering generative artificial intelligence. That technology is set to be recycled into games in the upcoming generation of Nvidia's Deep Learning Super Sampling (DLSS) technology, which uses AI to improve graphics in real time, the company said at its annual developer conference in San Jose, California. Originally used to upscale images to higher resolutions, current versions pump up the quality and realism of visual elements like reflections, shadows and highlights. "DLSS 5 empowers game developers to deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects," Nvidia said in a statement. It called the update, set to arrive in autumn, its "most significant breakthrough in computer graphics" since DLSS' creation in 2018. "DLSS 5 is the GPT moment for graphics -- blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism," chief executive Jensen Huang said. The resource-intensive tech will only work with Nvidia's most powerful and expensive graphics cards. A minute-long video published by Nvidia showed before and after shots of several popular games that could benefit from the technology, including "Resident Evil Requiem", "Starfield" and Harry Potter tie-in "Hogwarts Legacy".
The images showed characters modified with more dramatic lighting and intense colour to produce a near-photorealistic look, but also with details changed, such as fuller lips and larger eyes on female faces. Many online commenters quickly recoiled, lumping the supposed graphics upgrade in with the low-effort "AI slop" images that infest social networks. "Now your game can look like an AI-generated image, wow!" one viewer posted on Nvidia's YouTube video, which accumulated almost 11,000 mostly negative comments in 14 hours. "Great, they turned DLSS into a TikTok filter," another wrote. Posters on X and Bluesky were deeply divided, with many warning that adding more AI into graphics would keep players from experiencing the visual art of games as developers intended. "Artists are rightly going to be pissed about this," gaming podcaster Will Smith wrote on Bluesky. "Yassifying (Resident Evil protagonist) Grace changes the tone and themes of this game," he added, referring to modifying women's images to make them stereotypically feminine or sexualised. In a response on YouTube, Nvidia said that "game developers have full, detailed artistic control over DLSS5's effects to ensure they maintain their game's unique aesthetic". "This is a very early look" at a technology that will be "under our artists' control" and "totally optional" for players, "Starfield" developer Bethesda posted on X.
[58]
Nvidia CEO's Defense Of DLSS 5 Gets Contradicted By One Of His Employees
New comments from a 'GeForce evangelist' suggest the much-hyped technology basically does just slap an AI filter over a 2D image. Earlier this week, Nvidia unveiled DLSS 5: an "AI-powered breakthrough" in visual upscaling tech that takes a "game's color and motion vectors as input for each frame, then infuses the scene with photoreal lighting and materials." The internet immediately reacted very poorly to its announcement, decrying it as an AI-gen slop filter. Nvidia CEO Jensen Huang rejected that framing at a live event later in the week, saying everyone is "completely wrong" and DLSS 5 isn't actually "post-processing at the frame level." That would suggest a finer degree of nuance and control than the alleged "slop filter" that modifies the final 2D image based on broad internet training data. But new details from Nvidia's own "GeForce Evangelist" marketing specialist Jacob Freeman appear to contradict Huang's framing of the controversial technology. PC gaming hardware YouTuber Daniel Owen asked Freeman if DLSS 5 is "effectively taking a single 2D frame as an input (with motion vectors) to create the output frame?" The Nvidia rep's response was: "Yes, DLSS5 takes a 2D frame plus motion vectors as an input." He continued, "DLSS 5 is trained end to end to understand complex scene semantics such as characters, hair, fabric, and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast - all by analyzing a single frame." The less technically inclined among you may be asking what the gotcha is here. The issue is that this directly contradicts Jensen Huang's previous statement on March 17. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," Huang said to Tom's Hardware during a Q&A. "All of that is in the control -- direct control -- of the game developer. This is very different than generative AI; it's content-control generative AI.
That's why we call it neural rendering." Basically, the Nvidia employee is saying it's a generative AI filter that uses a single image as a reference, and Huang is saying it's not using a single frame as the reference; it's using every aspect of the data, including the 3D geometry. In short, as Owen puts it, DLSS 5 is just using a screenshot and slapping a filter over top. This is why people online, already in backlash mode over the original demo, are now crying foul and accusing Huang of lying about DLSS 5's capabilities in his most recent statement. It wouldn't be the first time he has been accused of misleading consumers. It sounds like DLSS 5 isn't actually pulling any extra information beyond that. That also kind of explains why some of the lighting effects in the first demonstration look like garbage, because DLSS 5 is just using an image of the lighting and nothing else to generate something new. DLSS 5 isn't some new-fangled geometry-level rendering tech; it's just AI slop 2.0, because it's not doing anything a bog-standard generative AI filter doesn't already do.
[59]
Nvidia's DLSS 5 is going viral for all the wrong reasons -- here are the 5 most controversial examples of the 'AI-powered breakthrough' in action
* Nvidia has previewed its DLSS 5 technology * The 'real-time neural rendering model' reworks lighting and adds realism * But many gamers aren't happy with the sometimes questionable results Nvidia's artificial intelligence (AI) gaming tools - including DLSS upscaling and its own frame-generation tech - have largely received a cautiously positive response from gamers, despite being occasionally divisive. While early criticism centered on "fake frames" and other AI-related blowback, that's started to give way as the tech has developed and gamers' experiences with it have improved. But with the launch of DLSS 5, Nvidia might have undone some of that hard work, and gamers are not happy. If you missed the news, Nvidia announced that DLSS 5 brings a "real-time neural rendering model that infuses pixels with photoreal lighting and materials". In other words, it uses AI to improve lighting and realism -- and that hasn't gone down well with many gamers, who've picked out examples where the results are, at best, questionable. Here, we've put together five of the clunkiest examples of "DLSS 5 off / DLSS 5 on" going wrong, from blown-out environments to faces that look like they've been passed through the Instagram filter from hell. They show that DLSS 5 might not always be the "AI-powered breakthrough" that goes down well with everyone. 1. "Yassified, looks-maxed freaks" "Weird that this impressive lighting tech also randomly turns everyone into yassified, looks-maxed freaks. It's like all this technology can't help itself but sexualize everything it touches. But like, through the lens of teenage boys." www.youtube.com/watch?v=4Zlw... -- @dannyodwyer.bsky.social, 2026-03-17 While many people have been sharing examples of DLSS 5 messing with character faces in their own gameplay moments, one of the most striking examples actually comes from Nvidia itself.
In the company's official DLSS 5 press release, Nvidia pointed to its effect on Grace Ashcroft, the main character in Resident Evil Requiem. However, not everyone was impressed. Writing on Bluesky, for example, documentarian Danny O'Dwyer pointed to this illustration and said that DLSS 5 "randomly turns everyone into yassified, looks-maxed freaks. It's like all this technology can't help itself but sexualize everything it touches. But like, through the lens of teenage boys." Ouch. 2. Virgil van who? "Even Van Dijk isn't safe from NVIDIA's DLSS5 filter," from r/LiverpoolFC. When you play a sports series like EA Sports FC, part of the draw is the realistic renderings of all your favorite players. But with DLSS 5 mangling character appearances to an unprecedented degree, that's already at risk. As user Lynchead showed on the Liverpool FC subreddit, Virgil van Dijk - current Liverpool captain and one of the Premier League's best-known players - has been put through the wringer by DLSS 5, leaving his in-game visage completely unrecognizable compared to his real-life counterpart. Given the level of detail in EA Sports FC games, the transformation is shocking. 3. Aging up characters "Everyone's all-in on the RE9 Grace one for good reason, but I can't get over the Hogwarts Legacy one. Not a game I care about, but here we have a less-detailed model that passes for a 15-year-old schoolkid (the character's age!), where the DLSS5 model detail-stuffs them to look a decade+ older" -- @apzonerunner.com, 2026-03-17 Not only does DLSS 5 risk bulldozing over the realistic appearances created by game developers, but it could straight-up contradict key story elements that are conveyed through graphics. That can be seen in Hogwarts Legacy, as an example posted by video game critic Alex Donaldson shows.
There, the face of a 15-year-old student has been given a much more detailed look, but the effect has been to increase their apparent age by such an extent that they look like a much older adult when using DLSS 5. As Donaldson said, DLSS 5 "detail-stuffs them to look a decade+ older." That contradiction with the in-game lore could be confusing for anyone playing the game. 4. Stripping depth and warmth from environments "Another terrible aspect of DLSS5 few people talk about," from r/digitalfoundry. Not every critic had character faces in their sights. Over on Reddit, user Mediocre-Sundom noted what they described as "another terrible aspect" of DLSS 5: environmental lighting. "The depth, the contrast, the warmth of the lighting -- all of it is gone," they lamented after posting a couple of examples. And this isn't just a one-off, in their opinion: "The same can be seen pretty much in every other scene where DLSS 5 impact on the environment is demonstrated. It's way, waaaay worse in every single example," they said. 5. Overworked HDR comes to video games "We all know DLSS 5 is really horrendous with faces, and this whole shitshow is not for no reason. But what are your impressions on environmental lightning?" from r/digitalfoundry. Another instance of DLSS 5's impact on environmental lighting was demonstrated by Filianore_ on Reddit, who posted screenshots from The Elder Scrolls IV: Oblivion Remastered. There, DLSS 5 has made the surroundings much brighter compared to the original image, creating a forced, unnatural look. As pointed out in the comments, the result is reminiscent of overly enthusiastic HDR effects that are sometimes applied to photographs. The DLSS 5 version looks so over-processed as to become almost unreal; while the original was not perfect, it at least had a more believable feel to it.
[60]
'I Don't Love AI Slop Myself': Jensen Huang Is "Empathetic" Towards NVIDIA DLSS 5 Critics, But Still Says That is Not What The Tech is Doing
The controversy surrounding the reveal of NVIDIA DLSS 5 is showing no signs of cooling down. A good portion of the gaming community and plenty of developers have been vocally critical of what was shown last week, and NVIDIA CEO Jensen Huang understands where the criticism is coming from, telling Lex Fridman in a new interview that he doesn't love "AI slop" either, which sounds ironic coming from one of the key individuals spearheading the AI revolution. "I think their perspective makes sense and I can see where they're coming from, because I don't love AI slop myself," Huang said when asked about the massive backlash after the new version of the upscaler was announced last week. "You know, all of the AI generated content increasingly looks similar and they're all beautiful. So, I'm empathetic towards what [critics] are thinking." While agreeing to some extent with the near-universal aversion to AI slop, Jensen Huang reiterated that turning games into AI slop that not only looks very similar but also strips away artistic individuality is not what NVIDIA DLSS 5 does. "I showed several examples of it, but DLSS 5 is 3D conditioned, 3D guided. It's ground truth structure data guided. And so the artist determined the geometry, we are completely truthful to the geometry in every single frame," he said. Framing NVIDIA DLSS 5 as closer to a toon shader that generates visuals based on the original art style, and thus respectful of the artist's intention, Jensen Huang stressed that the new version of the tech will be just another tool for developers to use. While it is true game developers won't be forced to use it in any way, we have seen how NVIDIA DLSS Super Resolution and Frame Generation, and other similar tech from AMD and Intel, have essentially become a requirement for games. A requirement that, in some cases, is used to compensate for technical shortcomings, which is one of the reasons why many are against it in the first place.
The discussion surrounding NVIDIA DLSS 5 was essentially dominated by the now infamous Resident Evil Requiem comparison, which showed how protagonist Grace Ashcroft changed with the upcoming tech. This, together with showing the tech in far too early a state, was among the biggest mistakes NVIDIA made in its announcement, according to our own Alessio Palumbo, who also had the chance to talk with multiple developers, such as Denis Dyack, and get their perspective on one of the most controversial new technologies in gaming in a long time.
[61]
Nvidia CEO on DLSS 5: "I don't love AI slop myself"
Since its reveal, DLSS 5 has largely been mocked online for how it misinterprets the artistic vision of a game developer and puts a big AI filter over the top of it. Nvidia CEO Jensen Huang believes gamers are misjudging the technology this early, and has once again explained its purpose in a new interview. Speaking with Lex Fridman, Huang said that he can empathise with gamers and their position on DLSS 5. "I could see where they're coming from, because I don't love AI slop myself... all of the AI-generated content looks increasingly similar... I'm empathetic towards what they're thinking." However, Huang doesn't want us to just associate this tool with AI slop. "It's conditioned by the textures, the artistry of the artist. And so every single frame, it enhances, but it doesn't change." Essentially, Huang again argued that it's up to the artist how they want to use the tool. Also, it's apparently to their benefit. "All of that is done for the artist, so they can create something that is far more beautiful, but still in the style they want." Huang clarified DLSS 5 isn't just for post-processing existing images in games. Instead, it's about giving artists "the tool of AI." Whether that's something they asked for or wanted is beside the point, as it's there now.
[62]
DLSS 5 Neural Rendering Explained: How NVIDIA Changes Games
NVIDIA's DLSS 5 represents a significant leap in AI-driven graphics, introducing a neural rendering model that fundamentally changes how visuals are processed in gaming. By adding an AI-driven pass to the traditional rendering pipeline, DLSS 5 enhances elements like lighting, shadows and reflections, creating a more immersive experience. AI Grid's analysis provides more insight into how features such as subsurface scattering, which simulates light interacting with translucent materials like skin, and fabric sheen, which captures the interplay of light on textiles, contribute to this heightened realism. These advancements are achieved without altering the core geometry or textures, making sure that the original artistic intent remains intact. Explore how DLSS 5 prioritizes visual fidelity over performance, a shift from earlier versions that focused on frame rates and upscaling. Gain insight into the hardware demands of this technology, including its optimal performance with dual RTX 5090 GPUs, and consider the implications for accessibility. Additionally, the breakdown examines ethical concerns, such as the influence of AI on creative decisions and the potential for biases in neural rendering. These takeaways provide a comprehensive understanding of DLSS 5's role in reshaping the boundaries of gaming visuals and the broader conversations it sparks about AI's place in creative industries. DLSS 5 operates by integrating a sophisticated neural rendering model into the traditional rendering pipeline, fundamentally altering how visuals are processed. Unlike conventional methods that rely solely on game engine outputs, DLSS 5 introduces an AI-driven pass that refines the interaction of light with objects and materials. This additional layer of processing enhances key visual elements such as lighting, shadows and reflections, creating a more immersive and lifelike gaming experience.
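The two-stage pipeline described above (a traditional render pass followed by an AI enhancement pass that leaves geometry and textures untouched) can be sketched roughly as follows. Everything here is illustrative: the function names, the data layout, and the toy "model" are assumptions made for the example, not NVIDIA's actual API.

```python
# Illustrative sketch of a post-render "neural enhancement" pass as the
# article describes it: the engine renders a frame as usual, then an AI
# model refines lighting/shading and the result is blended back on top.
# All names here are hypothetical; this is not NVIDIA's SDK.

def render_frame(scene):
    """Stand-in for the traditional rasterization/ray-tracing pass."""
    return {"pixels": [0.2, 0.5, 0.8], "motion_vectors": [(0, 0)] * 3}

def neural_enhance(frame, intensity=1.0):
    """Hypothetical AI pass: brightens shading slightly (our toy 'model'),
    then blends with the original so the effect can be dialed up or down."""
    refined = [min(1.0, p * 1.1) for p in frame["pixels"]]
    return [
        intensity * r + (1.0 - intensity) * p
        for r, p in zip(refined, frame["pixels"])
    ]

frame = render_frame(scene=None)
enhanced = neural_enhance(frame, intensity=0.5)
# With intensity=0.0 the output is exactly the engine's original frame:
# the pass only layers on top of what was already rendered.
assert neural_enhance(frame, intensity=0.0) == frame["pixels"]
```

The key property the sketch demonstrates is the one the article stresses: the enhancement consumes only the rendered output, so the underlying scene data is never modified.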
These enhancements are achieved without altering the underlying geometry or textures, making sure that the core artistic vision of the game remains intact while elevating its visual fidelity. DLSS 5 introduces several advanced features that distinguish it from earlier versions, pushing the boundaries of real-time rendering. While these advancements highlight the potential of AI in gaming, they also underscore the trade-offs in terms of hardware demands. The steep requirements may limit accessibility for gamers with older systems, raising questions about inclusivity and the pace of technological adoption. DLSS 5 represents a significant shift in focus compared to its predecessors. Earlier iterations, such as DLSS Super Resolution and Frame Generation, prioritized performance enhancements, including upscaling lower-resolution images and improving frame rates. These features aimed to make high-quality gaming more accessible across a range of hardware configurations. In contrast, DLSS 5 places a stronger emphasis on achieving unparalleled visual realism. By prioritizing graphical fidelity over performance, it caters to gamers and developers seeking innovative visuals. However, this shift introduces challenges, such as increased hardware requirements and potential performance trade-offs, particularly for users with older or mid-range systems. This evolution reflects NVIDIA's broader strategy to push the boundaries of what is visually possible in gaming, even if it means narrowing the audience in the short term. Despite its technical achievements, DLSS 5 has sparked debates within the gaming community and beyond, with several concerns emerging around its implementation and broader implications. These controversies highlight broader concerns about the role of AI in creative industries.
As generative AI becomes more prevalent, some fear it could undermine traditional artistic processes by automating tasks that were once the domain of human creators. This tension underscores the need for thoughtful regulation and ethical guidelines to ensure AI serves as a tool for enhancing creativity rather than replacing it. Despite the controversies, DLSS 5 has been embraced by major game publishers, including Bethesda, Capcom, Ubisoft and Warner Bros. Over a dozen upcoming titles are confirmed to feature DLSS 5 integration, signaling strong industry interest in neural rendering technologies. This widespread adoption reflects a growing recognition of AI's potential to transform gaming, not only in terms of visuals but also in how games are developed and experienced. Looking ahead, the implications of DLSS 5 extend far beyond gaming. As neural rendering and generative AI continue to evolve, they could enable the creation of fully AI-generated environments, reshaping the landscape of game development. However, this potential also brings challenges, including the need for ethical considerations and a careful balance between innovation and artistic integrity. The future of gaming will likely depend on how these technologies are integrated and regulated, making sure they enhance rather than diminish the creative process. The reception to DLSS 5 has been mixed, reflecting the complexity of its impact on the gaming industry. On one hand, it has been celebrated for its ability to deliver new visual improvements, setting a new benchmark for realism in gaming. On the other hand, it has faced criticism for its steep hardware requirements, potential misuse and the broader ethical questions it raises about AI's role in creative industries. As a gamer or developer, DLSS 5 invites you to consider its broader implications. Will AI-driven graphics serve as a tool to enhance creativity, or will they blur the lines between human artistry and machine-generated content? 
These questions are central to the ongoing evolution of gaming technology, shaping how innovation and creativity intersect in the years to come.
[63]
NVIDIA DLSS 5 Reveal Backfires as Gamers Mock "AI Slop"
Gamers are mocking the frames generated using DLSS 5 as "AI slop." NVIDIA is at the front and center of the gaming news once again - not because it's finally lowering GPU prices (keep dreaming like we are). Nope. Rather, this time around, they find themselves in rather hot water with gamers after introducing their new "breakthrough" generative AI model DLSS 5, coming this fall. Gamers may already know what DLSS is, but for those who don't: it is an AI-powered model that creates entirely new frames to insert between rendered ones, smoothing gameplay, enhancing performance, and reducing VRAM usage for low-end GPUs. Imagine an RTX 2060 generating the quality of graphics you'd expect from a 3070 Ti. Pretty neat, right? Gamers love it for better image quality, smoother framerates, and how it takes the heat off your GPU's back and lets it breathe for a bit. The current DLSS 4.5 launched earlier this year, and ever since I've been using it, it's helped me reduce ghosting and produce sharper, more detailed edges in my games. However, just as GDC 2026 wrapped and NVIDIA announced its plans for GeForce NOW, they also dropped a surprise DLSS 5 unveiling with actual footage of how its new frame gen model looks, and it seems gamers have something to mock, yet again. Earlier today, NVIDIA unveiled a first look at its new gen AI model for DLSS 5, which will be released later this fall, and according to the company, it bridges the "divide between rendering and reality." As their blog continued, "DLSS 5 empowers game developers to deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." NVIDIA boss Jensen Huang went as far as to call DLSS 5 "the GPT moment for graphics." He continued, "Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again.
(DLSS 5 is) blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression." NVIDIA explained that DLSS 5 works by analyzing a single frame to understand scene details - characters, lighting, materials - and then uses that understanding to generate images that accurately render tricky visual elements like skin, fabric, and hair while staying true to the original scene. The company also claimed that DLSS 5 is supported by publishers across the globe, with the likes of Bethesda, CAPCOM, Hotta Studio, NetEase, NCSOFT, S-GAME, Tencent, Ubisoft, and Warner Bros. Games. When it comes to the actual DLSS 4.5 vs DLSS 5 footage, fans noticed a drastic difference between the actual faces of the characters as intended by the game studios and what DLSS 5 generated. To summarize its performance and results, gamers wasted no time labeling the update as "AI Slop." Imagine you're playing as Kratos with DLSS 5 on, and next thing you know, the model has waxed his beard and groomed him into a well-dressed "Chris Hemsworth-style warrior," which he decidedly is not. Coming back to the gamers, one wrote, "Wow! This looks absolutely f*****g awful! Thank you, Nvidia!" Another chimed in, "I don't think people build a 3000 dollar pc to have the ability to have an AI Slop filter." NVIDIA DLSS 5 drops this fall and is going to be a free update for all users who use an NVIDIA RTX GPU and the NVIDIA app on their desktops. Will you be using DLSS 5 when it drops to make your game photorealistic? Let us know in the comments below!
[64]
Nvidia Unveils DLSS 5 Graphics Upscaler, Faces Backlash Over 'AI Slop Filter'
The technology has received severe backlash online from gamers
Nvidia unveiled DLSS 5, the latest iteration of its Deep Learning Super Sampling graphics upscaling technology, on Monday, showcasing AI-powered "photoreal" visuals in games like Resident Evil Requiem and Starfield. DLSS 5 utilises a real-time neural rendering model to give a graphical facelift to characters and environments. The company, however, is facing intense backlash from gamers, who are calling the new technology an "AI slop filter." While Nvidia is claiming that DLSS 5 is the company's "most significant breakthrough in computer graphics since the debut of real-time ray tracing in 2018," gamers have shared strong reactions accusing the upscaling technology of interfering with the developers' original artistic vision.
DLSS 5 Receives Backlash For Changing Visuals
In a video presentation debuting DLSS 5, Nvidia showed the graphics technology transforming the faces of characters in games like Resident Evil Requiem, Starfield, Hogwarts Legacy, and EA Sports FC. However, the graphical updates shown in the video resemble an Instagram or Snapchat beauty filter that changes the original look of the character. The most egregious use of DLSS 5 can be seen on Resident Evil Requiem's protagonist, Grace Ashcroft. The upscaling technology seems to be adding make-up, eye shadow, and fuller lips covered in lipstick, completely changing the character's look. [Image: DLSS 5 changes Grace Ashcroft's face with an effect that resembles a beauty filter. Photo Credit: Nvidia] In a separate demonstration, DLSS 5 is seen completely transforming characters' faces in Bethesda's sci-fi RPG, Starfield. In the video, with DLSS 5 on, the "upscaled" faces look radically different from their original versions.
The effect is a disturbing departure from the original art style of the game, completely changing the look of Starfield's Creation Engine 2 graphics and superimposing a new style that can only be described as an AI filter that bumps up brightness, contrast, and saturation on characters' faces while adding an uncanny valley "realism" to them. The demonstration videos have sparked backlash from fans, who are now flooding social media with DLSS 5 "AI slop filter" memes. DLSS 5 is being labelled "deep learning super slop 5," "slop tracing," and "AI slop filter" by users on X. Nvidia, whose highly sought-after AI chips have made it the most valuable company in the world, on its part, said that DLSS 5 was a breakthrough that would "reinvent" computer graphics. "DLSS 5 is the GPT moment for graphics -- blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," Nvidia CEO Jensen Huang said during the announcement.
Nvidia, Bethesda Issue Clarification After Backlash
The intense backlash, however, has forced Nvidia and Bethesda, one of the studios whose games were featured in the DLSS 5 presentation, to issue statements. Nvidia said that DLSS 5 was "not a filter" and that developers would retain artistic control over DLSS 5 output. "Important to note with this technology advance (sic) - game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic. The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied. It's not a filter - DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content," the company said in a pinned comment on its demonstration video. Bethesda director Todd Howard endorsed DLSS 5 and called the technology's effect on Starfield's graphics "amazing."
However, Bethesda appeared to backpedal a bit following online backlash. "This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists' control, and totally optional for players," the company clarified on X. In addition to Bethesda, studios like Capcom, Hotta Studio, NetEase, NCSoft, S-Game, Tencent, Ubisoft, and Warner Bros. Games are confirmed to support DLSS 5 for their games. Nvidia DLSS 5 will debut in fall 2026.
[65]
"The underlying geometry is unchanged" - turns out DLSS 5 really is just a filter over 2D images, as Nvidia employee reveals: "Materials are inferred from the rendered frame"
"Also worth mentioning this is a very early preview of the tech"
The DLSS 5 drama continues, as one employee from Nvidia has shared more insight into the technicalities of how the new iteration of the upscaler will work while you're playing games. As many have suspected since the initial reveal video, DLSS 5 isn't actually using your gaming PC's graphical power to enhance, upscale, or render frames differently. In fact, DLSS 5 only uses 2D imagery and motion vectors as an input... so pretty much exactly how an Instagram filter works. This news comes by way of YouTuber Daniel Owen, who released a video titled "Nvidia answers my DLSS 5 questions," in which he details an email exchange he's had this week with Jacob Freeman, a DLSS Evangelist at Nvidia. "DLSS 5 only takes the rendered frame and motion vectors as inputs," Freeman said in an email. "Materials are inferred from the rendered frame." Essentially, that means there isn't anything happening at a deeper engine, rendering level with DLSS 5's image enhancements. It might rely on an Nvidia graphics card to operate, but nothing is happening at the rendering stage with DLSS 5 to make your games' existing frames run more smoothly. It's taking cues from the lighting, geometry, fabric, and face models of a video game scene, blending it through a generative AI model, and filtering out a new image based on what the model detects. This certainly clarifies a few things Nvidia has said this week in response to the DLSS 5 "AI slop" accusations, including from the company's CEO Jensen Huang, who said "DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI." When asked more specific questions, Freeman responded in his email to Owen: "The underlying geometry is unchanged. Also worth mentioning this is a very early preview of the tech."
Owen also asked about how one Starfield character in the initial DLSS 5 trailer seemed to have been outright changed by DLSS 5, including hair being rendered where it wasn't in the original game image. "It's painting a 2D picture over the 2D output frame the game actually did... and you won't see the one the game actually made, you'll see the generative AI interpretation of it," Owen observed. Personally, my biggest concern with DLSS 5, like a lot of video game fans, has been that it seems as though Nvidia outright wants to change the art style and artistic intent of a lot of the games it's showcasing DLSS 5 with. When asked about this, Freeman said: "Developers will have detailed controls such as intensity and color grading. Artists can use these controls to adjust blending, contrast, saturation, and gamma, and determine where and how enhancements are applied to maintain the game's unique aesthetic." Responding to these answers in his video, Owen highlights the importance of what's not being said in Nvidia's replies. Yes, Freeman can say that developers and artists can fine-tune certain aspects. Essentially, it sounds like developers will have the image adjustments you'd expect from Canva, Instagram, or other services that use image filters. But what he isn't saying here is that game developers will be able to "reprompt" DLSS 5's AI model to enhance things on their own artistic terms. Basically, the way DLSS 5's AI sees and wants to enhance an image seems to be an "on" or "off" job. "Developers can also mask specific objects or areas to be excluded from enhancement. We continue to talk to developers to understand all the ways they would like to control the technology. Ultimately, we see the NVIDIA DLSS 5 as a tool for them to achieve their artistic vision rather than be limited by the capabilities of traditional real-time rendering," he said.
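The developer controls Freeman describes (a global intensity, color-grading adjustments, and masks that exclude regions from enhancement) suggest a configuration surface along these lines. This is a speculative sketch: NVIDIA has not published the DLSS 5 SDK, so every name and field here is a hypothetical stand-in, not the real API.

```python
# Hypothetical sketch of the per-game controls described in Freeman's
# email: global intensity, simple color grading, and exclusion masks.
# None of these names come from NVIDIA's actual SDK.
from dataclasses import dataclass, field

@dataclass
class EnhancementConfig:
    intensity: float = 1.0     # 0.0 disables the effect entirely
    saturation: float = 1.0    # one knob of the "color grading" controls
    # Rectangles (x, y, w, h) that the AI pass must leave untouched,
    # per the "mask specific objects or areas" quote.
    excluded_regions: list = field(default_factory=list)

    def applies_at(self, x, y):
        """True if the enhancement should touch the pixel at (x, y)."""
        if self.intensity == 0.0:
            return False
        return not any(
            rx <= x < rx + rw and ry <= y < ry + rh
            for rx, ry, rw, rh in self.excluded_regions
        )

# A developer masking off a character's face while keeping the effect
# active everywhere else in the frame:
cfg = EnhancementConfig(intensity=0.7, excluded_regions=[(100, 40, 64, 64)])
```

Note what this model deliberately cannot express, matching Owen's criticism: there is no field for re-prompting the model itself, only for scaling and masking its output.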
For more detail, see the full video where Owen goes into depth about the questions he asked.
[66]
Epic Games lead producer calls gamers thinking DLSS 5 detracts from art direction 'insane'
TL;DR: NVIDIA's DLSS 5, unveiled at GTC 2026, faced criticism for altering character designs, but CEO Jensen Huang defended it as a developer-controlled generative AI technology enhancing geometry and textures. Epic Games' Jean Pierre Kellams praised its advanced lighting and shading improvements, highlighting its potential for technical artists. NVIDIA unveiled DLSS 5 at GTC 2026, and the reception to the new technology has been anything but pleasant, with large portions of the gaming community describing the tech as AI slop. The criticism of the latest generation of DLSS has even been recognized by NVIDIA CEO Jensen Huang, who said people are "completely wrong" about DLSS 5, particularly on the aspect where some characters look completely different than their original design. Concerned gamers pointed to examples such as Resident Evil: Requiem, as the DLSS 5 On image showcased a stark change in the character model. Notably, Capcom developers recently said they found out the title was being used in the presentation at the same time as the public. NVIDIA's Jensen Huang responded to a question from Tom's Hardware, saying, "Well, first of all, they're completely wrong. The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI." Huang added, "All of that is in the control - direct control - of the game developer. This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering." Similar sentiments have now been echoed by Jean Pierre Kellams, a tech lead producer from Epic Games, who said, "All you guys roasting DLSS 5 like it doesn't look better/is detracting from art direction are absolutely insane. The lighting and shading improvements are bonkers. If that was shown as a next-gen hardware reveal and not "AI" you guys would be going nuts like the Watch Dogs demo."
"If I was a technical artist, I'd be begging for this right now. It's essentially making super high resolution physically accurate lighting trivially cheap. If this is the demo, I can't wait until tech artists start really digging in," added Kellams.
[67]
Bethesda weighs in on negative DLSS 5 reaction: 'This will all be under our artists' control, and totally optional for players'
Nvidia has announced DLSS 5, which it calls the "future of real-time rendering", and an early demo has made it mostly look like an AI filter slapped over a handful of games. The reception has not been massively positive as a result. Yet Bethesda, which partnered with Nvidia to show it off in Starfield and Elder Scrolls 4: Oblivion Remastered, seems all in on it as an upscaling option. In Nvidia's announcement, Todd Howard says, "With DLSS 5 the artistic style and detail shine through without being held back by the traditional limits of real-time rendering. We're excited to work with this new technology and look to bring DLSS 5 to Starfield and future Bethesda titles." After Digital Foundry put out its analysis of the tech, based on hands-on experience, Bethesda responded to the post over on X. It says, "This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists' control, and totally optional for players." That last point is a key element of the conversation being had. Nvidia said last night that developers have "artistic control" over how DLSS 5 is implemented, but it wasn't clear what level of control they have, other than just turning the DLSS 5 toggle to 'off'. It did later clarify: "The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied. It's not a filter - DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content." Bethesda's comments here suggest a level of thought put into its implementation, and further clarify that gamers can simply turn it off. Replies to Bethesda haven't been massively supportive, with the post getting 2,300 likes and almost 800 comments at the time of writing. Generally, that's an abnormal like-to-reply ratio, with many of the top comments being overtly negative.
Many believe that the use of AI to this degree is wrong, and this is partially tied to both quality and ethical concerns with AI itself. This isn't helped by the fact that the faces in almost all of Nvidia's highlight reels look uncanny and awfully Instagram-like. Resident Evil Requiem's Grace Ashcroft is perhaps hit worst by the yassification beam, but grunts in Starfield also look like a cheap filter has been placed on top. DLSS 5 isn't just a filter, though; it works on a hardware level and impacts lighting, too. For all its downsides, the lighting in the Starfield clips doesn't look nearly as bad as the human-ish figures. Nvidia's roll-out of this early demo has not managed to sell the tech very well to many developers, and it's no wonder why. Many are already not massive fans of AI, so showing clips that make games look potentially worse only makes the company profiting from AI (and its negative effects) look worse.
[68]
We Spoke To Game Devs And All Of Them Hate DLSS 5: 'What The F***, Nvidia?'
One game developer told Kotaku that after seeing DLSS 5, 'it feels like there is no future for me' in the video game industry
On Monday, Nvidia revealed DLSS 5, the next version of its suite of upscaling and performance-boosting tech used mostly in PC games. In the past, people have celebrated DLSS. This time around, it seemed that just about everyone online hated the AI slop faces and radical visual changes DLSS 5 added to games like Starfield. After talking to a bunch of developers and reading other devs' comments online, it seems the people who make games also aren't on board for Nvidia's AI-enhanced future. "I think [DLSS 5] is the perfect example of the disconnect between what we as developers and gamers want and what the nasty freaks who are destroying the world and consolidating all wealth into the hands of the few using GPUs think we want," Cullen Dwyer, gameplay/tech design lead at Doinksoft, told Kotaku. "Presenting this technology under the DLSS name, thereby implying it will be the default and standard, is insulting and scary, and my immediate kneejerk response is 'Thank fucking god I make 2D games.' If I have to make a 3D game, I'm writing a software renderer, fuck NVIDIA, fuck these ghouls." Andi Santagata, a former AAA game dev and indie game maker, told Kotaku that, besides DLSS 5's troubling tendency for "yass-ifying" faces, he was worried that this tech would interfere with artistic intent. "Aside from the obvious aesthetic issues," said Santagata, "one of the other big problems is how DLSS 5 basically sucks the personality out of any artistic choice the devs have made by making average-out guesses of what it thinks things should look like. Like, you're never going to get the devs' actual intent with this thing turned on." Another developer, SolidPlasma, shared with me similar fears about the original artistic vision of a game being altered by DLSS 5. "It feels like a misguided attempt at realism," said SolidPlasma.
"A style that I personally feel is a dead end. In attempting to make characters appear more human, it removes everything original about their designs, and more often than not, whitewashes them." A dev with over 15 years of experience working on AAA games who didn't want to share their name publicly told Kotaku that DLSS 5 "feels like it is taking away some authorial intent from artists by making characters more glamorous and environments more detailed, with the overall look appearing to be less distinct or aesthetically cohesive than the original intent." This same point, that the AI-powered DLSS 5 ruins or alters artistic intention, was shared by Karla Ortiz on Bluesky shortly after Nvidia showed off the new tech. "This is so disrespectful to the intentional art direction of devs," said Ortiz. "If devs wanted to lean in to hyper realism, they would." "This also drastically changes key aspects of visuals like character features, focal points, lighting, and so on. What a terrible invention. Nvidia should shelve this one. Imagine being a dev team working for months/years to create characters whose carefully crafted features and body language tell specific stories, with the exact detail and lighting setups that perfectly fit the story and overall game world. For slop to shit all over that carefully balanced work." On YouTube, former Red Dead Redemption 2 developer Mike York reacted negatively while watching the DLSS 5 reveal video. His concerns were also about how drastically it changed the intent of the devs and artists who made the original game. "Whoa. Hold on. No, no, no, no, no, no, no, no, no, no, no," said York. "This isn't just some lighting, dude. What the fuck. I'm telling you, this is like a complete AI re-render. You're no longer looking at the game anymore. Does that make sense? This is scary."
Following the reveal and all of the online backlash from fans and devs, Nvidia CEO Jensen Huang spoke to the press during a Q&A and was asked to respond to fears that DLSS 5 will lead to video game visuals becoming homogenized. "Well, first of all, they're completely wrong," said Huang of the technology's critics. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI." "All of that is in the control - direct control - of the game developer. This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering." And while it does seem to be true that devs will have a choice for when or when not to support or use DLSS 5, that doesn't change that the tech has spooked and grossed out a lot of folks who make games. Every dev I talked to, even those who didn't want to be included in this feature, all told me they hated DLSS 5 and were offended by Nvidia's announcement and how it seemingly overwrote the work of talented artists, modelers, and other game devs. Reportedly, even the devs involved with some of the games featured in Nvidia's DLSS 5 announcement aren't happy about the AI-powered tech. Insider Gaming reported on Wednesday that some developers at studios like Ubisoft and Capcom were caught off guard by Nvidia's big reveal. "We found out at the same time as the public," one Ubisoft developer told the outlet. Bethesda was also quick to backtrack a bit despite Todd Howard being directly involved in the announcement of DLSS 5 on Monday. Something I encountered while listening to devs speak about DLSS 5 is that many of them aren't even sure who this is for. And as pointed out by other devs, Nvidia is fueling the AI datacenter craze that is leading to PC parts getting more expensive and hard to buy, which in turn means far fewer people will even be able to enjoy DLSS 5 when it rolls out.
"The average gamer is unable to afford the hardware that will make DLSS 5 a reasonable offering," said Dwyer. "In pursuing 'photorealism' or whatever they think this slop abomination is, they have created an ecosystem where the most economically viable game is one that can run on low-end hardware. Low poly indie PS1 horror games, stay winning." "What the fuck, Nvidia? No. Nobody wants this," posted Dusk and Iron Lung creator David Szymanski on BlueSky. One dev who wished to remain anonymous told Kotaku, "People debate if video games are 'art,' but I prefer to see video games as galleries. Every game has many different forms of art within. Animation and sound design are separate yet complementary pieces of art. Although separate, every piece works together to create a full, cohesive product. [DLSS 5] deconstructs the gallery. It breaks away one piece of the full experience and returns a game into segmented elements. It won't ruin a game, but it does destroy its purest expression, and to that, all I have to ask is, what is the point?" Nuclear Throne, Ridiculous Fishing, and Australia Did It developer Rami Ismail told Kotaku that while previous versions of DLSS were "perhaps a bit misguided," they at least fulfilled a need. But he doesn't think DLSS 5 is something "anyone was waiting for." "One of my big dreams," said Ismail, "when I first became an independent game developer almost 2 decades ago is to have a megacorporation smear the most dystopian slop all over what is generally two to three years of my life's work while shushing into my ear that I'm in full artistic control." Perhaps the most upsetting story I heard was not from a published game developer, but instead from a young person who is currently studying game development at a college in the United States. They wished to remain anonymous as well, but told me that every day on campus is filled with people going on and on about AI. The DLSS 5 news hit them hard. 
"When I saw the DLSS 5 news and screenshots, I felt ill," said the game dev student. "I have become used to a nonstop wave of horrible news so I wasn't particularly shocked, but it was more like another punch in a long and drawn-out beating with no end in sight. News about AI hits particularly hard as it feels like a fundamental irreversible erosion not just of the industry, or my passion for game development, but of the human condition itself." "It's not an over exaggeration to say what DLSS 5 represents has taken a serious toll on my mental health. Most tech feels like the pursuit of knowledge by passionate and obsessive individuals...This feels like an artless desecration of the medium itself by a company with a stranglehold on the global economy," said the student.Â
[69]
What NVIDIA Actually Got Wrong with the DLSS 5 Reveal
Nobody, except for a very select few who had already checked out a demo, was expecting NVIDIA CEO Jensen Huang to unveil DLSS 5 during his GTC 2026 keynote last week. NVIDIA had already countered AMD's FSR Redstone at CES 2026, where it announced and released a second-generation transformer model for its Super Resolution upscaler, which adopted the DLSS 4.5 label and was quickly judged the best upscaler available. Two months later, they stunned the world - for better or worse - with a new version of DLSS that did not focus on improving frame rates or ray tracing performance, but instead sought to deliver photorealistic lighting and shading with the power of AI. Huang, now one of the richest men in the world after NVIDIA's swift climb to the top of the world's most valuable publicly traded companies powered by AI demand, said that "just as GeForce brought AI to the world, AI is now going to go back and revolutionize how computer graphics is done altogether", calling DLSS 5 the future and next generation of graphics technology. However, the gaming community's reaction was very mixed. While the first tech journalists who checked out the demo were largely impressed, the DLSS 5 reveal was criticized by many gamers, as well as some developers and modders, some of whom we've recently interviewed on Wccftech. Personally, I disagree with most of the criticism levied against the technology. However, in hindsight, it seems clear to me that NVIDIA made a few critical mistakes when it comes to the reveal itself, ultimately influencing the public's first impression of this new version of DLSS. It's incredible how one screenshot has singlehandedly hijacked the whole DLSS 5 conversation. Yes, I'm referring to the Resident Evil Requiem comparison showing co-protagonist Grace Ashcroft from a scene at the very beginning of the game. NVIDIA chose that one as the key image to showcase the massive visual leap the new technology delivers.
It was even picked as the featured image for the official announcement blog post. And demonstrate it did, but not in a good way. While it may have impressed non-gamers at GTC, it was immediately singled out by gamers as proof that NVIDIA was just adding an AI-powered "beauty filter" to games that would ruin the original game's artistic and narrative intent. They even went on to elaborate that Grace wouldn't really put any makeup on in this case, because she is on her way to investigate the circumstances of her mother's death, hardly an appropriate occasion to dress up. NVIDIA failed to fully grasp the strong emotional attachment that millions of gamers who had just finished and loved Resident Evil Requiem had formed with the character's existing look. What's worse is that, if you actually browse the entire gallery of official DLSS 5 screenshots, there is another comparison dedicated to Grace that shows a much closer rendition of the character. Whereas that infamous thumbnail was rightfully criticized by all, this other, far less discussed comparison demonstrates how the technology can largely preserve a game character's original facial structure while still greatly improving its lighting and shading. The image choice suggests that someone at NVIDIA wanted to maximize the wow factor with that specific thumbnail choice, and in doing so, essentially condemned DLSS 5 to a fierce backlash. As I noted earlier, nobody was even thinking about a new DLSS, yet NVIDIA was all too eager to tell the whole world about it despite the technology being far from ready. They showed it off over half a year before the planned Fall 2026 launch, at a time when it still required two GeForce RTX 5090 graphics cards to run. That had never happened with any prior DLSS reveal, and it's a clear tell that the reveal came far too early in the tech's development cycle.
This ties into the previous mistake: Grace's first comparison shows that NVIDIA still has tuning left to do to ensure the DLSS 5 model is far more conservative when handling faces. They could have either waited until a later date, or picked those screenshots - and there are many, including renderings of Liverpool FC captain Virgil van Dijk - where the model already keeps close to the original while delivering a series of substantial lighting improvements. They did neither, and they suffered for it. Someone perhaps wanted a cool new technology to show off at GTC, but the price of this rushed reveal turned out to be high. It would have been wiser to wait until DLSS 5 was more stable, closer to release, and capable of running on a single GeForce RTX GPU. All that being said, I do believe much of the hate is unwarranted. First of all, neural rendering is not optional: it's the only remaining avenue for improvement. Moore's Law is dead, and the remaining silicon-derived improvements won't be enough on their own to achieve true photorealism in real-time rendering. Many people say that games look nearly like CGI films nowadays, but that really isn't true. Real-time rendering still lags far behind in both lighting and shading, and that applies to path traced games as well. Neural rendering techniques are the only way to bridge a gap that could otherwise persist for years, if not decades. And it's not like NVIDIA didn't tell us they were going in this direction; a couple of years ago, VP of Applied Deep Learning Research and DLSS father Bryan Catanzaro famously said: "I do think that let's say DLSS 10 in the far future is going to be a completely neural rendering system that interfaces with a game engine in different ways, and because of that, it's going to be more immersive and more beautiful."
DLSS 5 doesn't integrate with a game engine yet (which was one of the side criticisms), but that certainly sounds like the next step for the technology and would ensure better harmony with a game's original style. At CES 2026, Huang had once again teased that the future of graphics would be powered by neural rendering. Now, it's official, albeit earlier than expected, and perhaps understandably still a bit rough. This is far from the first NVIDIA DLSS controversy. Ever since NVIDIA introduced the first version of its Deep Learning Super Sampling suite, the technology has been the target of criticism. With DLSS 1.0, those not fond of NVIDIA cried about the "fake pixels" (DLSS renders the game at a lower internal resolution, depending on the user-chosen quality mode, and reconstructs the rest with AI). By the time DLSS 2.0 shipped with a significant improvement in image quality, it was clear that fake pixels were necessary, and AMD developed its own FidelityFX Super Resolution, while Intel worked on its Xe Super Sampling. Then, with DLSS 3.0, NVIDIA went one step further and introduced Frame Generation, also known disparagingly as "fake frames". Again, there was strong outrage from purists, who could not bear the thought of having to deal with fully AI-generated frames between actually rendered ones. Guess what? AMD and Intel followed suit here, too, eventually introducing their own machine-learning-powered versions. Even Sony has recently admitted it will add ML frame generation to its PlayStation consoles in the future. As a pioneer, NVIDIA has shown time and again the way forward for rendering in the industry, even if it meant exposing the first iteration of a new technology to widespread criticism. DLSS 5 is no different; I have little doubt that AMD and Intel will take the same path in due time. Yes, NVIDIA made some glaring mistakes with this reveal, which I have outlined above.
Yes, they need to improve the model before its full release, and possibly expand the number of tuning knobs available to developers and look into tighter integration with game engines. I would even recommend delaying the actual release of the technology beyond Fall, if necessary. I doubt there's any risk of anyone getting a similar neural rendering tech out before NVIDIA, and it's important that the next time DLSS 5 is shown, most of the rational critiques (not the ideological, preconceived ones against any form of AI) are addressed properly. As tech journalist Ryan Shrout, one of the first to check out the demo in person, pointed out: forget about the faces for a minute; DLSS 5 lighting provides noticeable, next-generation-like improvements to 3D scenes, making objects, environments, and even things like water and foliage look a lot more life-like. That's why throwing the entire thing into the bin, as the most extreme knee-jerk social media reactions would have NVIDIA do, just because the model still needs to be tuned to ensure faces always retain their original look, would be incredibly naive and short-sighted.
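As a back-of-the-envelope illustration of the "fake pixels" and "fake frames" point made in this piece, the share of AI-generated pixels compounds when upscaling and frame generation are combined. The sketch below is purely illustrative: the specific mode combinations are assumptions, and NVIDIA has not published how the "23 of 24 pixels" DLSS 4.5 figure is composed.

```python
from fractions import Fraction

def ai_generated_fraction(upscale_area_ratio, frame_multiplier):
    """Fraction of displayed pixels produced by AI rather than
    traditional rendering, given:
      upscale_area_ratio: output pixels per internally rendered pixel
                          (e.g. 4 for 1080p -> 4K upscaling)
      frame_multiplier:   displayed frames per traditionally rendered
                          frame (e.g. 4 when three AI-generated frames
                          follow each conventionally rendered one)
    """
    rendered = Fraction(1, upscale_area_ratio) * Fraction(1, frame_multiplier)
    return 1 - rendered

# 4x area upscaling plus 4x frame generation: 15 of every 16 pixels
# on screen are AI-generated.
print(ai_generated_fraction(4, 4))   # 15/16

# One hypothetical combination consistent with the "23 of 24" figure
# quoted for DLSS 4.5: 4x area upscaling with 6x frame multiplication.
print(ai_generated_fraction(4, 6))   # 23/24
```

The point of the arithmetic is simply that upscaling and frame generation multiply together, which is why the conventionally rendered share of each displayed frame shrinks so quickly across DLSS generations.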
[70]
Plenty of developers are very negative towards Nvidia's DLSS 5
Not everyone was impressed when Nvidia showcased its DLSS 5 technology this week. The idea was that it would make faces more realistic using AI, but in most cases the result seemed to be more generic, reminiscent of a combination of AI-generated art and Snapchat filters - and worst of all, the result often doesn't even look like the original, as in the example below from Indiana Jones and the Great Circle. As we reported the other day, countless game companies took the opportunity to poke fun at DLSS 5 by posting comparisons with and without the technology. Kotaku wanted to know more about what developers themselves think about AI changing their work and interviewed a few on the subject. Many chose to remain anonymous, but Cullen Dwyer, gameplay/tech design lead at Doinksoft, didn't mince words, saying: "I think [DLSS 5] is the perfect example of the disconnect between what we as developers and gamers want and what the nasty freaks who are destroying the world and consolidating all wealth into the hands of the few using GPUs think we want. Presenting this technology under the DLSS name, thereby implying it will be the default and standard, is insulting and scary, and my immediate knee-jerk response is 'Thank fucking god I make 2D games.' If I have to make a 3D game, I'm writing a software renderer, fuck NVIDIA, fuck these ghouls." Another developer described as a veteran with over 15 years of experience in AAA development said that it "feels like it is taking away some authorial intent from artists by making characters more glamorous and environments more detailed, with the overall look appearing to be less distinct or aesthetically cohesive than the original intent." 
Yet another person who has worked on AAA games but now focuses on indies, Andi Santagata, says that the personality and intent of the design completely disappear: "Aside from the obvious aesthetic issues, one of the other big problems is how DLSS 5 basically sucks the personality out of any artistic choice the devs have made by making average-out guesses of what it thinks things should look like. Like, you're never going to get the devs' actual intent with this thing turned on." None of the developers Kotaku spoke with about the matter are positive, and many are, quite frankly, really negative, bordering on hostile. Developer Karla Ortiz has also weighed in on the matter via Bluesky, writing, among other things, that "Nvidia should shelve this one" and arguing that it is incredibly disrespectful because meticulous work to get everything perfect is undone when AI instead provides a completely different interpretation. She continues in a later post: "Imagine being a dev team working for months/years to create characters whose carefully crafted features and body language tell specific stories, with the exact details and lighting setups that perfectly fit the story and the overall game world." Nvidia has so far stood its ground and maintains that the critics are wrong, and we'll have to wait and see what the future holds for DLSS 5 once the dust settles. We also have an article on the subject that you can read here.
[71]
NVIDIA DLSS 5 Backlash Grows over AI Lighting Changes in Games
NVIDIA's DLSS 5 has become a lightning rod for debate within the gaming community, as highlighted by RGT 85 in their recent analysis. Unlike earlier versions of Deep Learning Super Sampling, which were celebrated for improving performance without compromising artistic vision, DLSS 5 introduces AI-driven enhancements that significantly alter visual elements like lighting and shadows. While these features aim to create more immersive experiences, they have sparked criticism from developers and players alike for distorting the intended atmosphere of games. For instance, some developers observe that the technology's automated adjustments clash with their creative intent, raising broader concerns about the role of AI in shaping game design. Below is a breakdown of the key points fueling the DLSS 5 controversy: how it impacts artistic integrity, why it has divided opinion among industry leaders, the challenges developers face when balancing performance improvements with creative control, and how this debate ties into larger questions about AI's role in gaming. By examining the reactions from both creators and players, this breakdown provides a clear view of the ethical and technical dilemmas surrounding DLSS 5's rollout. DLSS, or Deep Learning Super Sampling, is a technology that uses artificial intelligence to upscale lower-resolution images into higher-quality visuals. By doing so, it allows games to achieve higher frame rates while maintaining image clarity, making it particularly beneficial for gamers using mid-range or older hardware. Earlier iterations of DLSS, such as those integrated into devices like the Nintendo Switch 2, were praised for their ability to balance performance with visual fidelity. These versions focused primarily on resolution enhancement, allowing smoother gameplay without compromising the artistic vision of developers.
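To make the resolution-enhancement point above concrete, here is a small sketch of the internal render resolutions implied by the per-axis scale factors NVIDIA has historically documented for DLSS Super Resolution quality modes (treat the exact percentages as approximate, and the mode list as a simplification):

```python
# Approximate per-axis render-scale factors for classic DLSS
# Super Resolution quality modes (assumed here for illustration).
SCALE_FACTORS = {
    "Quality": 2 / 3,            # ~66.7% per axis
    "Balanced": 0.58,            # ~58% per axis
    "Performance": 1 / 2,        # 50% per axis
    "Ultra Performance": 1 / 3,  # ~33.3% per axis
}

def internal_resolution(output_w, output_h, mode):
    """Resolution the game actually renders before AI upscaling."""
    s = SCALE_FACTORS[mode]
    return round(output_w * s), round(output_h * s)

for mode in SCALE_FACTORS:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"{mode:>17}: renders {w}x{h}, upscaled to 3840x2160")
```

For example, Performance mode at a 4K output renders internally at 1920x1080, so only a quarter of the displayed pixels are conventionally rendered; the rest are reconstructed by the model.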
However, DLSS 5 has introduced a new set of features that extend beyond simple upscaling, fundamentally altering how games are rendered. DLSS 5 introduces advanced AI-driven filters designed to enhance lighting, shadows, and other visual effects. While these features aim to create a more immersive gaming experience, they have sparked criticism for altering the original artistic vision of games. This has led to a broader debate about the appropriate use of AI in gaming. Should such enhancements be universally applied, or should they be reserved for specific scenarios, such as remastering older games? Many believe that DLSS 5's approach undermines the authenticity of modern games, raising concerns about the balance between technological innovation and creative integrity. The reception to DLSS 5 has been deeply polarized. Some developers and publishers, such as Bethesda, have embraced the technology for its performance benefits, citing smoother gameplay and improved frame rates. However, others remain skeptical, questioning the unintended consequences of its visual modifications. The controversy has also extended to gaming media. Digital Foundry, a respected authority on gaming technology, initially praised DLSS 5 for its technical achievements. However, following public backlash, the outlet released a follow-up video addressing concerns about the technology's impact on game visuals. This incident underscored potential missteps in DLSS 5's rollout and highlighted the need for greater transparency in how such technologies are introduced to the market. The debate surrounding DLSS 5 is part of a larger conversation about the role of AI in creative industries. As AI tools become increasingly sophisticated, they raise critical questions about their impact on artistic expression and industry practices.
Key concerns span platform parity, transparency, and accountability. One significant issue is the potential for DLSS 5 to exacerbate platform disparities. Its advanced features may give PC gamers access to enhanced visuals and performance, while console players are left with less optimized versions. This could widen the gap between platforms, fueling tensions within the gaming community and raising questions about fairness in the industry. The rollout of DLSS 5 has also highlighted concerns about transparency and corporate accountability. Critics argue that NVIDIA and its partners have not been fully forthcoming about the potential downsides of the technology. Some have accused gaming media outlets of downplaying these issues to maintain favorable relationships with NVIDIA, further complicating the narrative around DLSS 5. These concerns point to a broader need for accountability in how new technologies are marketed and implemented. Gamers and developers alike are calling for clearer communication about the trade-offs involved in adopting AI-driven tools like DLSS 5. Greater transparency could help build trust and ensure that such technologies are used responsibly. The controversy surrounding DLSS 5 underscores the challenges of integrating AI into creative industries. As AI continues to evolve, developers and publishers will need to carefully navigate the tension between technological innovation and artistic authenticity. Whether DLSS 5 will ultimately be regarded as a significant advancement or a cautionary tale remains uncertain. For now, it serves as a focal point in the broader discussion about the future of AI in video game development. As the industry continues to grapple with these issues, one thing is clear: balancing technological progress with creative integrity will remain a critical challenge for years to come.
[72]
'I thought this video was an April Fool's joke, but it's still March': Nvidia reveals DLSS 5 to supercharge graphics with AI -- and the hate pours forth
The internet has a new game: invent your own new acronym for DLSS featuring the word 'slop'
* Nvidia has announced DLSS 5 at GTC 2026
* This is a "real-time neural rendering model" (AI) to revamp lighting and improve graphics in PC games
* The reaction has been broadly negative across social media, with plenty of concerns about the direction Nvidia is now heading in
Nvidia has revealed DLSS 5 at GTC 2026, and is calling the next-gen tech the "most significant breakthrough" for computer graphics since real-time ray tracing. Nvidia announced in a press release that DLSS 5 brings in a "real-time neural rendering model that infuses pixels with photoreal lighting and materials", comparing the end result to Hollywood visual effects. So, this is essentially about taking a game's graphics and sprucing them up with AI to improve the lighting and overall look to be more realistic. This is not about frame rate boosting or upscaling (as with DLSS 4.5), but polishing up the visuals to be photorealistic -- the same game assets are used, we're told, just with very different AI-powered lighting. The best way to get a handle on what DLSS 5 actually does, of course, is to look at some of the early images Nvidia has shared showing the 'before and after' -- check out the above pic surfaced by our own Lance Ulanoff on X (from GTC), or the below example from Resident Evil Requiem shared by Nvidia (accompanying its DLSS 5 press release). Jensen Huang, CEO of Nvidia, commented that: "DLSS 5 is the GPT moment for graphics - blending handcrafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression." DLSS 5 is set to launch later this year, in "the fall" -- so perhaps as early as September -- and it'll be for RTX 5000 graphics cards only as you might expect.
To say there's been a groundswell of negative reaction to DLSS 5 would be an understatement -- on Reddit and Bluesky in particular -- so let's dive into why that's the case.
Analysis: AI slop accusations
Now, DLSS 5 does look like powerful tech, and Digital Foundry had a hands-on with the feature in a bunch of games over at GTC, coming away impressed. And indeed if you watch that YouTube video, some of the footage does look rather smart. I'd highlight Oblivion Remastered, where the lighting breathes fresh life into the stone walls and buildings -- though not everyone agrees on that. The problem comes with preserving artistic intent here. Huang specifically mentions that this might be AI overhauling a game's graphics, but that Nvidia intends to preserve the "control artists need for creative expression" -- and remember, the game assets aren't being altered here, just the lighting, Team Green assures us. Still, the Resident Evil Requiem screenshot in particular is causing a lot of controversy, most obviously because it's changing Grace's looks radically in terms of adding lipstick for example (and altering her hair color markedly). It ends up with a whole different -- and unwanted -- vibe for many. The overall look of game characters given a DLSS 5 makeover feels rather unreal, too, in an uncanny valley way. Yes, everything's a lot sharper and more like a photo, but that isn't always good if it looks overbearing in that respect, or it messes with the ambience and atmosphere of the original visuals. This holds true for background elements as well as foreground characters, and there's plenty of hate for both on Reddit. As one Redditor commented: "Surely this will result in a look that the artist/developer didn't intend? It's like putting an ugly AI filter over the artist's work. This seems dumb as hell to me."
I also worry about the lighting looking too intense and overblown, and colors too saturated -- a bit like when you take a photo on your phone and stick a filter over it to jazz things up, and it's just too much. Clearly, this has stirred up a hornet's nest of reaction, with some of the most common refrains being that 'we don't want an AI slop filter'. Gamers are worried that this points in a dangerous direction for the future of games -- one where developers don't have as much control over the art direction of their products. There's another concern which hasn't been as widely picked up, too, namely that the tech demo for DLSS 5 is actually running on two RTX 5090 graphics cards, as per Nvidia's FAQ. Yes, a single RTX 5090 is not enough to cope with the overhauled lighting effects here -- Nvidia needed to use a pair of them, with one of the GPUs dedicated to running DLSS 5 (and the other actually rendering the game). That clearly suggests that whatever DLSS 5 is doing behind the scenes is seriously intensive work. Of course, this is still early days, and DLSS 5 is still in 'early preview' -- when it's finished, the tech will be optimized to run on a single GPU (so Team Green isn't ushering in a return to SLI setups). Similarly, there will be a lot of fine-tuning and other honing done to DLSS 5 in terms of the image produced, too, so we need to wait before passing a final judgment here. This is unlikely to dissuade AI skeptics, mind you, who have very much made up their minds already. Time will tell, but meanwhile, I expect heavily liked comments such as "your RAM died for this" (a comment from @canestrini808 on Digital Foundry's YouTube video) or "I thought this video was an April Fool's joke, but it's still March" (from @lukas0999) will continue to hold sway. We've reached out to Nvidia to see if the company had any comment on the negative reactions flying around, and will update this article if we hear back. 
[73]
NVIDIA introduces DLSS 5 with 4K real-time AI graphics and developer controls
NVIDIA has unveiled DLSS 5, an AI-based graphics technology that delivers real-time photoreal lighting and material effects in games, achieving visual fidelity previously possible only in cinematic VFX. Since GeForce 3 (2001), NVIDIA has continuously increased compute power to support realistic game environments, from programmable shaders and CUDA® to real-time ray tracing on RTX 2080 Ti (2018) and path tracing with neural shaders on RTX 5090 (2025). Despite this growth, a 16-millisecond game frame has far less compute than a Hollywood VFX shot, which can take minutes or hours to render. DLSS, introduced in 2018, uses AI to upscale resolution and generate frames, and is integrated into over 750 games. DLSS 4.5, released earlier in 2026, generated 23 of 24 pixels with AI. DLSS 5 shifts focus from performance to photoreal visual fidelity in real time. DLSS 5 uses a game's color and motion vectors for each frame as input. Its AI model applies photoreal lighting and material effects that remain consistent with the original 3D scene, operating deterministically in real time. The model interprets complex scene elements -- characters, hair, skin, fabric -- and environmental lighting such as front-lit, back-lit, or overcast conditions. It reproduces subsurface skin scattering, fabric sheen, and light-material interactions on hair while maintaining scene structure and semantic accuracy. Developers can control intensity, color grading, and masking to specify where enhancements apply. DLSS 5 integrates via NVIDIA Streamline, alongside existing DLSS and NVIDIA Reflex, and can run at resolutions up to 4K. DLSS 5 will be supported by developers and publishers including Bethesda, CAPCOM, Hotta Studio, NetEase, NCSOFT, S-GAME, Tencent, Ubisoft, and Warner Bros. Games. 
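The developer controls described above (intensity and masking to specify where enhancements apply) can be sketched in a few lines. To be clear, NVIDIA has published no DLSS 5 API, so everything below (the function name, the mask semantics) is a hypothetical illustration of the concept rather than actual Streamline code:

```python
# Hypothetical sketch: a per-pixel mask marks where enhancement may
# apply, and a global intensity blends the AI-enhanced frame back
# toward the original. No name here comes from NVIDIA's SDK.

def blend_enhanced(original, enhanced, mask, intensity):
    """Per-pixel linear blend between the game's rendered frame and
    the neurally enhanced frame.
      original, enhanced: lists of pixel luminance values in [0, 1]
      mask: per-pixel weights in [0, 1]; 0 excludes a region
            (e.g. a character's face), 1 allows full enhancement
      intensity: global 0..1 strength of the neural pass
    """
    return [
        o * (1 - m * intensity) + e * (m * intensity)
        for o, e, m in zip(original, enhanced, mask)
    ]

# Four pixels: the first two "masked off" (say, a face region).
original = [0.2, 0.2, 0.2, 0.2]
enhanced = [0.9, 0.9, 0.9, 0.9]
mask     = [0.0, 0.0, 1.0, 1.0]

out = blend_enhanced(original, enhanced, mask, intensity=1.0)
print(out)   # masked pixels stay at 0.2; unmasked pixels become 0.9
```

The design point is that a linear blend keeps the original frame as the fallback: zeroing the mask or the intensity provably returns the artist's unmodified pixels, which is the kind of guarantee developers in these articles are asking for.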
Games confirmed for DLSS 5 support include: AION 2, Assassin's Creed Shadows, Black State, CINDER CITY, Delta Force, Hogwarts Legacy, Justice, NARAKA: BLADEPOINT, NTE: Neverness to Everness, Phantom Blade Zero, Resident Evil Requiem, Sea of Remnants, Starfield, The Elder Scrolls IV: Oblivion Remastered, Where Winds Meet, and others. DLSS 5 is scheduled for release in fall 2026, allowing developers to integrate AI-driven photoreal graphics into supported games. Speaking on the announcement, Jensen Huang, founder and CEO of NVIDIA, said:
[74]
The DLSS 5 controversy explained: why does it feel so wrong?
NVIDIA's latest AI tech promises better visuals, but it could be doing more than enhancing art. I've tried to give DLSS 5 the benefit of the doubt, because on paper, it offers better performance, cleaner images, and improved immersion at little cost (aside from needing NVIDIA's most expensive GPUs). That's been the pitch for every version of DLSS so far, and broadly, it has always worked to benefit gamers and developers. But the more I've looked at the early DLSS 5 demos from GTC and how it's currently being presented, the harder it is to ignore a more basic problem: it's enhancing, yes, but it can also start to reinterpret the 'human' art. The changes feel disconnected from the tone of the scene, the character, and even the narrative context. In environments, DLSS 5's use feels right -- better lighting, more texture, improved depth and realism -- but with characters, it feels wrong. Even major developers have been shocked by NVIDIA's announcement. I see it in the faces first, because that's where this stuff always shows up. Take the demo of Resident Evil Requiem's Grace Ashcroft (above), who is meant to look tired, worn down, and human, and who comes out the other side looking noticeably more polished. Her skin is visibly smoother than it should be, her features subtly more defined, and overall, there's a magazine-shoot style to it, rather than a moment in a story, a moment in which Grace returns to the scene of her mother's murder, no less. To put this in context, DLSS is trained on high-quality reference frames and game-engine outputs. It's designed to reconstruct what should be there, not restyle it. So why does this feel so wrong in the Capcom demo?
After all, this Grace demo was the moment in the demos that stuck with me, and many others, because the changes are at odds with the scene's intent: a character who's clearly in the middle of something grim, who's meant to look and feel distressed, is then reconstructed with a pristine, almost glamorous look, the kind of stylised aesthetic we've all started to associate with AI-generated portraits. This is the problem with DLSS 5, as it has been shown, because that look isn't coming solely from the developer's assets but from how the model reconstructs them. It would appear that DLSS 5 isn't rebuilding pixels from the game but instead leaning on patterns it has been trained to recognise, and those patterns are optimised for general image clarity and stability. The result is that familiar uncanny 'AI' aesthetic where everything veers toward the same idea of clean, attractive, well-lit imagery, regardless of whether it fits. It's hard to unsee, and once you clock it, it raises a bigger question about authorship and artistic intent, because if the final image is being shaped by data that exists outside the game's art direction, then whose vision are you actually looking at? The developer's or the AI model's approximation of what it thinks looks right? There's always been a tension with these kinds of techniques, but DLSS 5 pushes it further. It's not just stabilising or smoothing, it's using AI-driven reconstruction that can influence the final image in ways that feel like artistic decisions. Which means there's a deception to it that feels uneasy. And this is key: unlike some in this debate, I'm not staunchly against AI - I've seen how it can be used creatively, for example, the work of Kavan Cardoza - but there needs to be intent, openness and human creativity at the heart of its use.
NVIDIA, quite reasonably, says that developers can tune this, dial it back, and control its behavior, and I don't doubt that's true, to a point, but the demos don't feel like isolated mistakes; they feel like a natural outcome of how the technology prioritises clarity. It's doing what it's designed to do: produce an image that looks 'better' according to its training and optimisation goals. The issue is that 'better' doesn't always mean 'correct' and, in fact, when it comes to art direction, it often means the opposite. A character's face isn't just geometry and texture detail; it carries storytelling and context. A character's design reflects where they've been, what they've gone through, and the tone of the world they exist in, and if a model comes in at the final step and subtly reinterprets that into something more generically appealing, then it's diluting artistic intent. That's what makes DLSS 5 feel different from previous upscaling technology, as this isn't like adding motion blur or smoothing textures, where the effect is clearly part of the original design. This is a layer that sits on top, reshaping the result based on learned reconstruction rather than purely the game's authored pixels and styles. If this becomes standard, if this is just how images are reconstructed going forward, then a layer of every game's visual identity is effectively being filtered through shared reconstruction biases inherent to the model, and that means it's not the developer's choices, not the art team's intent, but the model's idea of what a good image looks like. To be fair, there are still upsides to where DLSS 5 is going. The performance gains matter, especially as games continue to push hardware harder and costs rise, and in scenes where the reconstruction aligns well with the source material, the results can look genuinely impressive, as we saw in the Assassin's Creed Shadows environment demo.
But those wins don't cancel out the underlying issue: it doesn't just introduce artefacts or noise, it introduces the wrong idea and overrides the art direction a developer has curated, resulting in the wrong face or a mismatched tone. It's why the concern around DLSS 5 isn't just knee-jerk anxiety about AI creeping into games and taking human jobs (though there's a little of that) but a recognition that something fundamental is shifting, from rendering what was made to reconstructing what the system predicts should have been there, according to a system that wasn't part of the original creative process. I don't think DLSS 5 is going away, and, in fact, I can see some developers designing games with the technology in mind, so those mismatches in tone and art direction aren't an issue. It's easy to imagine companies like Sony and Microsoft exploring similar approaches for next-gen consoles as well, in some form or other. I can also imagine NVIDIA creating deeper control options to ease anxiety and give game artists more input. If the control issue can be solved, it may not be as bad as we all think, but right now, that feels like a big 'if'. Visit the NVIDIA Blog for a full breakdown of the DLSS 5 tech, and make up your own mind.
[75]
Will you be turning Nvidia DLSS 5 off?
It's safe to say plenty of PC players are already planning to turn Nvidia DLSS 5 off when the feature graces the best graphics card contenders. While the next-generation suite of GPU tools isn't scheduled to land until Fall 2026, it's already being labelled as "AI slop" by critics online and raising questions regarding developer creative control. In case you missed it, DLSS 5 is a new version of Nvidia's Deep Learning Super Sampling feature that has until now provided perks like upscaling, Frame Generation, and path tracing enhancements like Ray Reconstruction. The tool has always had its critics, and graphics cards like the GeForce RTX 5080 are often accused of relying on "fake frames" since they can boost fps using Multi Frame Generation rather than traditional rasterization. The issue with DLSS 5 is that, according to Nvidia CEO Jensen Huang, it uses generative AI techniques to alter elements like character faces, and while the aim is supposedly photorealism, early demos suggest it's actually drastically changing original visuals. I've already expressed my deep concerns with this, and I will continue to do so, but I figured I'd give you, lovely GamesRadar+ readers, a chance to say whether you'll be switching DLSS 5 off or keeping it on in compatible games. When the time comes, there's a chance some players won't even realize they have DLSS 5 switched on. If you use the Nvidia app or let some games pick the best settings based on your build, you might find that specific tools are switched on by default. This makes sense in some scenarios since Super Sampling and Frame Generation can provide an fps boost, effectively providing a smoother experience, but the new generative AI tools may end up changing game elements if they're automatically enabled. Of course, just like with every other version, DLSS 5 should be an optional setting.
Ideally, developers will add separate toggles for the new photorealism elements, meaning that if your PC relies on AI upscaling to run things at reasonable frame rates, you won't have to sacrifice those tricks. If you go into the Nvidia App's "graphics" tab, you can also force specific selections at a driver level, which lets you override specific models and disable things like Frame Generation. I'm hoping the Neural Rendering Model or anything tied to the apparent photorealistic gen AI parts of DLSS 5 also gets its own drop-down, as this will let players nuke elements as they see fit. All this is based on the assumption that you'll be using a compatible Nvidia graphics card when DLSS 5 actually lands. AMD hasn't got a direct competitor to the monstrous RTX 5090 or even the RTX 5080 yet, but it does have a solid selection of mid-range and entry-level graphics cards, like the Radeon RX 9070. If it's an escape from AI you're looking for, you won't exactly get that with an RDNA 4 card, since they do boast Fluid Motion Frames and FSR upscaling tied to the tech, but the approach is admittedly more subtle since the GPUs stick to standard Frame Generation rather than Multi. I would also be surprised if DLSS 5 even makes it to RTX 50-series GPUs as promised. The recent demo used two RTX 5090 GPUs, and while that signals the tech is still in development, it also makes me wonder if this was supposed to be an RTX 60-series perk. I'm not going to speculate on why Nvidia has chosen now to unleash its gen AI aspirations onto unsuspecting PC players, but all I'll say is that we've got quite a few months before you'll have to check how to turn DLSS 5 off.
[76]
Jensen Huang reiterates DLSS 5 gives developers full artistic control, says gamers are 'completely wrong' about the tech
Since the debut of NVIDIA's "25-year graphics breakthrough," the response has been resoundingly negative. Gamers are complaining that the technology is adding an AI beauty filter to their favorite characters, ruining the game's original art style and calling it "AI Slop." Jensen Huang, founder and CEO of NVIDIA, has come forward to address these remarks, immediately stating, "They're completely wrong." "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," Huang said in response to a question from Tom's Hardware's Paul Alcorn. NVIDIA had previously suggested that developers would be able to tune the overall intensity of how much DLSS 5 changes. Huang reiterated once again that developers are in full, direct control of the technology. He also added that DLSS 5 is different from generative AI, calling it "content control generative AI" that adds generative capability to the game's existing geometry and "doesn't change the artistic control." "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level", Huang said. As to what this generative control would look like in practice, Jensen has previously said DLSS 5 is the "ChatGPT moment" of upscaling. This would mean that developers will give the input prompt, which would be structured data, such as geometry, motion vectors like character movements, and the depth of the scene, and the model will churn out photorealistic textures. There is no doubt that DLSS 5 will make strides in rendering, but the demos available tell a different story. We'll have to see how the technology performs at launch and how much control developers will actually have over their art styles. DLSS 5 is set to launch sometime in fall 2026, meaning we should be able to see some more of it in action before making a final judgment.
[77]
'Bad ending: now every game is slop': Game developers share mixed reactions to DLSS 5
After teasing it as "the future of real-time rendering", Nvidia has finally announced DLSS 5, an AI-led model for photorealistic visuals...that mostly looks like an AI filter slapped over Resident Evil Requiem. However, it's game developers, with their honed art direction and style, who will care the most about DLSS 5, and that reaction has been mostly mixed, to put it lightly. Over on X, New Blood co-founder Dave Oshry shares a meme calling the tech "Pure Slopium", a reference to AI slop. 'Is this a 3D model?', an account dedicated to teaching X users about 3D models, similarly labels DLSS 5 a "slop filter" and calls Nvidia "an absolute joke". The word slop is used a lot in regard to Nvidia's announcement. Indie developer Guselect says, "bad ending: now every game is AI slop." Over on Bluesky, Karla Ortiz, a Puerto Rican artist who has worked for Ubisoft, Blizzard, Marvel and more, says: "This is so disrespectful to the intentional art direction of devs. If devs wanted to lean in to hyper realism they would. This also drastically changes key aspects of visuals like character features, focal points, lighting and so on. What a terrible invention. Nvidia should shelve this one." Some fears about the tech are more existential than simply perceiving something as looking bad. Jon Ingold, narrative director and co-founder of Inkle, shares a screenshot of the studio's game Heaven's Vault, arguing that DLSS 5 would remove its main character.
Heaven's Vault is a game largely about archaeology, history, and a desire to preserve culture, so this post implies DLSS 5's AI beauty filter style would actually erase the identity of characters that don't fit certain beauty norms. This same fear is expressed by illustrator Corey Brickley, who says: "What if we blended every famous woman into one woman and then that's every woman you see now." Chris Gardiner, narrative director at FailBetter Games, calls this AI beauty standard the "Scarlett Johansesonification of videogames." Catchy. Kansai-based game dev Alwei critiques what the model does to lighting, plus expressions, and art direction, stating: "The artwork created by artists has a solid intention behind it, and if that can't be controlled, it has no meaning." Sam Barlow, known for his work on Immortality, Her Story, and Telling Lies, argues that the choice of games intentionally ignores those which use face models, too. "Can you imagine the legality of and just the optics of your game starring Lea Seydoux and Elle Fanning changing their faces? Making Conan O' Brien look like a catalogue model Chris Hensworth?" Not all game developer reactions are negative towards DLSS 5. In the announcement for DLSS 5, Todd Howard shared, "When Nvidia showed us DLSS 5 and we got it running in Starfield, it was amazing how it brought it to life. We've played it. We can't wait for all of you to do so as well."
For positive reactions outside of Nvidia's official press release, Kazuya Okada, ex-Epic Games software engineer, wishes "we could use super-high-quality images and videos -- created in advance in the game engine's editor, ignoring the load -- as training data, and then apply them to actual upscaling somehow..." 3D model creator tarava777 says, "It's ultimately an optional process handled by the GPU, essentially a kind of 'aftermarket mod.' You can't do it without a GPU, and it can be turned ON/OFF at will. "This is just a tech demo, not something that's going to be released as-is. We need to take those aspects in stride with a cool head, right? Personally, I think advancements like this are in a realm where 'whether you like it or hate it, there's no stopping them anymore'." The 'optional' element of that is certainly notable. Developers who don't want to use it, or gamers who don't want to see it, can choose not to do so. And there's some excitement in favour of using it too. CG artist and writer of the CG art blog "3Dnchu" argues, "It's amazing that this can be done in real time...This feels like an evolution that's gonna stir up a lot of buzz in all sorts of ways, huh." Some, like actor Rahul Kohli, mainly reacted with disbelief: "DLSS5 has to be a joke right? Right? Guys?" Bruno Diaz, who worked as a senior writer and lead narrative systems designer for Failbetter Games, argues that prohibitive hardware requirements could stop it from making a splash anyway. "If DLSS 5 actually ships, it'll probably be so performance costly that few people will be willing to use it." The demo for the tech used two RTX 5090s, which would cost the average gamer around $8,000 to get ahold of. It is worth noting that these games are running at 4K max settings, so cranking resolution down to 1440p or even 1080p will lower power requirements.
Nvidia says it has got the tech working on a single GPU (presumably just one measly RTX 5090), but that is still a lot for your rig to be able to run it. One can assume the tech has been revealed because Nvidia assumes gamers will actually be able to run it once it launches in the Fall, but we'll need to go hands-on to understand its hardware limitations. Ultimately, this tech is very new and still relatively early in its development. Most have not had the chance to go hands-on with it, and how it will perform in real-time will be the most telling aspect of its release. Game developers online, or at least the most vocal of them in the West, seem rather sceptical, and Nvidia's Grace Ashcroft AI makeover perhaps isn't the best selling point for it. That's not helped by the fact that DLSS 5's presentation is just over one minute of a two-hour GTC keynote presentation, with little explanation on how the model works, how much it can be tweaked, among many other things. Nvidia says devs have "artistic control" with DLSS 5, but it could go so much further to specify how. Whether or not this will really be the future of real-time rendering will largely depend on mainstream adoption (both from gamers and developers) and on how easy it is to run, and that's not something easy to grasp from initial reactions. With DLSS 5 set to launch later this year, that will be the real testing ground for whether or not people actually want it, and if that's better or worse for the games, as a result.
[78]
Nvidia Says DLSS 5 Haters Just Don't Get How The Gen AI Works
Earlier this week, Nvidia debuted DLSS 5, a new generative AI upscaling technology that purports to allow developers to get even closer to realizing their artistic visions by putting weird AI porn slop faces on popular characters like Grace from Resident Evil Requiem. People got mad. Real mad. Now Nvidia is responding by doubling down on slop-scaling. CEO Jensen Huang was asked about the online backlash at this week's GPU Technology Conference by Tom's Hardware. "Well, first of all, they're completely wrong," he responded, arguing that full control of the tech remains with the game developers. "DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," Huang said. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level." He claimed developers who are on board with the tech, including at Capcom and Bethesda, can "fine-tune the generative AI" to shape their game's visuals how they wish. DLSS 5 adds a new generative AI-fueled layer of fidelity but doesn't take away "artistic control." Huang may be responding to the way most people have interpreted and digested the tech online, which is essentially as a sort of "slop filter" that adds an uncanny layer of hyper-fidelity to games based on generic training models rather than the vision of the original artists. That's due at least in part to how demo videos show before and after comparisons that make characters' faces in games like Starfield and Hogwarts Legacy look like yassified stunt doubles. Huang's argument is that this is being calibrated at the developer level rather than via a post-processing algorithm. I'm not sure that logic will make much of a difference to any of the fans currently horrified by the tech's early results. After all, what's worse? DLSS 5 as a cheap slop-face filter or DLSS 5 as a fusion of generative AI into the fundamental geometry that defines how a game looks and feels?
DLSS 5 might do wonders for more generic details like environmental lighting, but the tools are by their very nature somewhat random and unpredictable. Most people's only question so far about DLSS 5 is whether Nvidia can guarantee an option to keep it turned off.
[79]
NVIDIA's DLSS 5 Reveal Was a Mistake, Needs to Go Back to the Drawing Board, Says Eternal Darkness Dev Denis Dyack
Four days after the reveal of NVIDIA DLSS 5 at GTC 2026, we're still gathering reactions from the game development and modding community to the highly polarizing technology. Yesterday, we published some thoughts from RTGI author Pascal Gilcher. Today, we can share an exclusive comment from industry veteran Denis Dyack, as part of a much larger interview that will soon be published in its entirety on Wccftech. Dyack, known to gamers primarily as the creator of classics like Blood Omen: Legacy of Kain and Eternal Darkness: Sanity's Requiem, was quite critical of DLSS 5 as a whole while also noting that it could spell doom for triple-A developers (despite the fact that several were already on board with it by the time of the announcement), potentially reducing the usual visual gap with indie games. The recent reveal of NVIDIA DLSS 5 was a mistake and needs to go back to the drawing board. The current release seems to go beyond enhancing the look of a video game by fundamentally changing the game's art direction. Never mind the artifacting of extra wheels on cars and other AI art issues. The AAA industry is already in trouble, as it has become very difficult to justify production costs. Making things look spectacular is AAA games' greatest advantage over smaller budget games. If DLSS 5 is widely adopted, it will accelerate the AAA process's extinction, as it takes away the awe of what high-production art can bring to the table. Throughout the interview with Dyack, we had already discussed the wider AI subject at length. Dyack doesn't hate AI, but he does think it is "way overhyped" right now and should be used to do what humans cannot, rather than replace human work. I do think AI is a pretty big bubble right now and there's a lot of unjustified fear that AI is going to take over or remove creativity. My background is in AI; I put a running neural network in a game back in 1992 for my master's thesis in Computer Science. AI is a tool. It's not a replacement for people. 
Corporations saying "we're laying off X amount of people because of AI" are really going to regret that decision. You cannot finish a game with AI, or if you do, it's awful. Putting someone who comes from AI to run a games division isn't a first; people who run game divisions have often never done games before. But if there's going to be a large application of AI within games, I don't think it's going to be that fruitful. AI is way overhyped right now. Its ability to get results on its own is very low. As soon as someone says AI is going to save you money, you can almost universally assume that's wrong. Technology makes you more productive, but it takes more time. If you use a lot of AI, you need a lot more people and a lot more time. The idea of laying people off because of AI is antithetical to reality. Peter Moore recently said in an interview that he thinks all studios will eventually use AI in some form. Do you agree with that? All studios are already using AI. It's impossible not to. Steam's AI disclaimer is performative theater. It's in our compilers, our editors, our engines. If you use DaVinci Resolve for cutting videos, it's full of AI. That ship sailed about 15 years ago. The real question is: is generative AI going to replace people? I don't agree with that. It's probably going to take more people to run. [...] Right now, AI is being focused on doing things that humans already do well. I think the real win in AI is going to be focusing on things that humans can't do, like going through large amounts of data very quickly. I think we want to focus on things AI is uniquely suited for rather than trying to replicate human work. As mentioned earlier in the article, we'll publish the full conversation with Dyack in the coming days, covering his next game, Deadhaus Sonata, and other prominent industry issues. We're also still looking to bring you more opinions on the NVIDIA DLSS 5 technology. Stay tuned!
[80]
Nvidia CEO says gamers are "completely wrong" about DLSS 5
DLSS 5 probably didn't have the reveal Nvidia wanted. It has become a bit of a laughing stock among gamers, primarily for its ability to turn a lot of well-designed game characters into uncanny AI recreations that feel like the very definition of AI slop. People aren't a fan, to be blunt. Nvidia CEO Jensen Huang, however, doesn't agree with that sentiment. When asked in a Q&A with Tom's Hardware about the criticism, he said that people are wrong about DLSS 5. "Well, first of all, they're completely wrong," Huang began. He then went on to explain that it's up to developers how they use DLSS 5, clarifying it's not a post-processing technology. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," Huang said. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level...All of that is in the control -- direct control -- of the game developer. This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering." According to Huang, it'll be entirely up to a developer to fine-tune the AI and see what the end result is when they use DLSS 5 with their characters. Whether this will turn people's opinion around is hard to say, as a lot of damage has already been done on the first impression.
[81]
NVIDIA DLSS 5 Adds Real-Time Neural Lighting to Games Raising New Questions
Nvidia's DLSS 5 represents a significant advancement in gaming graphics, combining real-time neural rendering with artificial intelligence to enhance visual quality. According to Daniel Owen, the system processes motion vectors and color data to generate realistic textures, dynamic lighting and lifelike materials. However, these improvements come with challenges, such as high hardware requirements and concerns over AI-driven enhancements potentially overshadowing artistic intent. Owen notes that developers can address some of these issues through features like adjustable intensity settings and masking options, though these solutions may disproportionately benefit larger studios with more resources. Explore how DLSS 5 balances realism with artistic stylization, including its implications for creative control in game design. Gain insight into the widening divide between AAA and indie developers as a result of resource-intensive technologies. Additionally, understand the technical challenges posed by latency and hardware demands and how these factors influence both development workflows and player experiences. DLSS 5 builds on the foundation of its predecessors, shifting its focus from performance optimization to a comprehensive enhancement of visual quality. At its core, the technology uses AI to process motion vectors and color data from a game, allowing the creation of highly realistic textures, lighting and materials in real time. This neural rendering approach allows the AI to interpret and recreate intricate scene elements, from surface materials to dynamic lighting. The result is a gaming experience that blurs the line between pre-rendered cinematics and real-time gameplay. Nvidia's AI model is trained to adapt to complex scene dynamics, making sure that visual elements respond seamlessly to player actions and environmental changes. This level of detail mirrors the quality of visual effects seen in Hollywood productions, setting a new benchmark for gaming graphics.
One of the most debated aspects of DLSS 5 is its potential influence on artistic intent. Critics have expressed concerns that the technology might homogenize game visuals, applying a universal "filter" that could override a developer's creative vision. Nvidia has addressed these concerns by offering developers tools to customize the AI-generated output, including adjustable intensity settings, color adjustments and masking options. These tools provide flexibility, allowing developers to retain control over their artistic direction. However, they also present challenges, particularly for smaller studios with limited resources. Fine-tuning these settings requires time and expertise, which may not be feasible for all developers. This raises questions about whether DLSS 5 will primarily benefit larger studios with greater technical capabilities, potentially widening the gap between AAA developers and indie creators. The advanced capabilities of DLSS 5 come with significant hardware requirements, which could limit its accessibility to a broader audience. Early demonstrations have showcased the technology running on high-end setups, such as dual RTX 5090 GPUs, far exceeding the capabilities of most consumer gaming systems. This has sparked concerns about whether DLSS 5 will remain a luxury feature for those with top-tier gaming rigs. Nvidia has acknowledged these concerns and stated that future updates aim to optimize the technology for lower-end hardware. However, the timeline for these optimizations remains uncertain. Until then, the steep hardware demands may restrict DLSS 5 to a niche audience, potentially slowing its adoption across the gaming industry. Despite its impressive advancements, DLSS 5 is not without its challenges. The additional AI processing required for neural rendering introduces latency, which could impact gameplay responsiveness.
This is a critical concern, particularly for competitive gaming, where even minor delays can affect performance. Nvidia is reportedly working on latency reduction techniques, but achieving seamless performance remains a significant hurdle. Another challenge lies in the "uncanny valley" effect, especially in character rendering. While hyper-realistic visuals are visually stunning, they can sometimes feel unnatural, creating a sense of disconnection for players. Striking the right balance between realism and artistic stylization will be essential to maintaining player immersion and making sure that the visuals enhance, rather than detract from, the gaming experience. Several high-profile games, including Starfield, Hogwarts Legacy, and Assassin's Creed Shadows, have already announced support for DLSS 5. These early integrations demonstrate the growing interest in AI-enhanced graphics across the gaming industry. However, the extent to which developers embrace DLSS 5 will depend on several factors, including the costs associated with implementation and the availability of resources to optimize its use. Beyond individual games, DLSS 5 represents a broader shift toward neural rendering in gaming. Competitors such as Sony and Microsoft are likely to explore similar technologies, further driving innovation in the field. However, this shift also invites skepticism. Some industry observers question whether the long-term benefits of AI-driven graphics outweigh the challenges they introduce, particularly in terms of accessibility and artistic integrity. DLSS 5 is undeniably a bold step forward in gaming graphics, showcasing the potential of real-time neural rendering to deliver photorealistic visuals. Its ability to transform lighting, textures and materials in real time underscores Nvidia's advancements in AI and graphics processing. However, the technology also presents significant challenges, from steep hardware requirements to concerns about artistic integrity and latency. 
As the gaming industry continues to evolve, the adoption and reception of DLSS 5 will depend on how effectively these challenges are addressed. For developers and players alike, the promise of Hollywood-level visuals in real-time gaming is both exciting and complex, marking a pivotal moment in the future of interactive entertainment.
[82]
From Breakthrough to Backlash: Nvidia's AI Graphics Trigger 'Slop' Debate
Nvidia's DLSS 5 Ignites Industry-Wide Debate Over AI-Driven Game Visuals! NVIDIA's latest AI-powered graphics technology, DLSS 5, is getting mixed reactions from the gaming world. The company calls it a big step toward more realistic visuals. But many gamers and developers are not fully convinced. Some are even calling its output "AI slop," saying it looks over-processed and lacks originality. This criticism comes at a time when Nvidia is promoting DLSS 5 as a major breakthrough. The company sees it as the next big thing after ray tracing. But early reactions show that people are unsure. The technology may not just change how games look, but also how they feel. The concern is not only about graphics. It is also about control and creativity. Many are asking how big a role AI should play in making games.
[83]
Even developers were taken aback by Nvidia's DLSS 5 announcement
Nvidia's unveiling of its new graphics tech DLSS 5 promptly sparked an unusual mix of excitement and ridicule. Nvidia sees the upcoming version of its Deep Learning Super Sampling technology as a game-changing moment for graphics, using generative AI to add lighting detail, materials and other visual touches to enhance frames in real time. However, some see the tech as a generator of generic AI slop that will take game art out of the hands of developers. Nvidia's demo showed DLSS 5 'improve' Capcom's Resident Evil Requiem and Bethesda's Starfield. But while Bethesda stresses that its artists will remain in control, Capcom seems to have known nothing about it. The application of DLSS 5 to Resident Evil Requiem's Grace Ashcroft was one of the most controversial examples that Nvidia has demoed. It quickly sparked internet memes poking fun at how Nvidia's tech completely changes the character, subjecting her to AI beauty standards. Many suggest the tech's 'yassification' of the game graphics creates an uncanny and generic polished look. Nvidia cited Ubisoft and Capcom as partners whose games would support DLSS 5, but it seems like that was news to them, or at least to developers working with them. According to Insider Gaming, developers at Capcom were taken aback by the announcement. Capcom has traditionally opposed the use of AI, and some staff at the publisher are now reportedly concerned that the DLSS 5 announcement could lead Capcom's leadership to change their view on generative AI in games. Insider Gaming also cites an unnamed Ubisoft developer as saying "We found out at the same time as the public". As for Nvidia itself, our sister site Tom's Hardware was first to report Nvidia CEO Jensen Huang's dismissal of the DLSS 5 criticism. Speaking at GTC, he said critics were "completely wrong."
He said: "As I have explained very carefully, DLSS 5 fuses the controllability of geometry and textures and everything about the game with generative AI" and insisted that developers will be able to "fine-tune the generative AI". "It's not post-processing at the frame level, it's generative control at the geometry level," he said, clarifying: "This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering." Bethesda has also emphasised that artists retain creative control. Replying to a post by Digital Foundry on X, it said: "This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists' control, and totally optional for players." But some gamers say they "want the upscaling but not the AI faces". That would require more customisability than the current option of turning DLSS on or off.
[84]
Death Stranding 2 developer offers nuanced take on Resident Evil Requiem's DLSS 5 makeover: "No, no, no, no, no, no, no, no, no, no"
Senior animator Mike York has worked on a range of hits from the divorced man's GTA 5 to the divorced man's Death Stranding 2, so he's seen a lot of things in video games, but never Grace Ashcroft after having undergone liposuction. That's the kind of image Nvidia's new AI-powered DLSS 5 rendering model brings to Resident Evil Requiem, one of the titles in which it will debut in the fall, and York can't stand it. He offers his perspective as an industry veteran in a new stream on his YouTube channel, York Street Gaming. He reacts to games tech site Digital Foundry going hands-on with DLSS 5 in a March 16 video and has a very different gut reaction than the publication - though Digital Foundry has since expressed regret for the way it first enthused about DLSS 5. With his brow furrowed deeper than the ocean floor, York attempts to analyze the now infamous image of Resident Evil Requiem protagonist Grace looking sandblasted. He says, "Whoa, hold on," after DLSS 5 takes away Grace's normal video game character face and replaces it with a TikTok filter. "No, no, no, no, no, no, no, no, no, no," York decides. "No. This isn't just some lighting, dude. What the f-... I'm telling you, this is like a complete AI re-render." Indeed, Nvidia explains in its DLSS 5 announcement post that it "introduces a real-time neural rendering model that infuses pixels with photoreal lighting and materials" - what CEO Jensen Huang admits in more explicit terms is "the GPT moment for graphics." "Who even is that? That's a different girl," York exclaims. "You know why I can tell [...] - look, her eyes are no longer looking, like, correctly. That one eye is looking over here, and one eye is looking there." The power of technology.
[85]
NVIDIA outlines what developers will be able to change with DLSS 5
TL;DR: NVIDIA's DLSS 5, launching in Fall 2026, uses generative AI to enhance game visuals by reinterpreting images, sparking gamer backlash over altered character appearances. Developers can control the upscaling intensity and image adjustments, preserving artistic intent despite concerns about AI-driven changes in graphics fidelity. NVIDIA unveiled DLSS 5 at GTC 2026, and the new technology has been met with immense pushback from gamers, who are concerned their video games will be poisoned with the "AI slop" currently taking over numerous social media platforms. DLSS 5 is on its way to release sometime in Fall 2026, and during the announcement, NVIDIA's CEO Jensen Huang described this next generation of DLSS as the "GPT moment for graphics," a reference to ChatGPT's industry-shifting debut. In its announcement of DLSS 5, NVIDIA states that the next-generation AI upscaler will be able to "bridge the divide between rendering and reality" by analyzing light sources, characters, hair, shadows, skin, and other aspects of a single frame. Then, DLSS 5 processes real-time game data and produces an image that, in some cases, such as the one NVIDIA used to demonstrate the technology, can change an image's appearance almost entirely. The discrepancy between the original image and the result when DLSS 5 is turned on is evident in many of the examples NVIDIA showed, producing a result that is stunningly different, which has sparked a steadfast, defiant stance by gamers against the technology's adoption. However, there are some things to point out. NVIDIA has outlined how developers will use the technology, if at all: developers will be able to tune the overall intensity of the upscaling that DLSS 5 applies, adjust the color, and adjust any masking or enhancements applied to specific areas of the image. So, if this is true, developers will be able to precisely adjust the in-game image, meaning NVIDIA's DLSS 5 won't completely abandon their original artistic design choices. 
The pushback on the new technology appears to be mostly driven by the examples NVIDIA used in its press materials, not by the technology itself. All of these examples feature faces that have been drastically altered, and in some cases don't even really resemble the originals. These images are bordering on the uncanny valley, a psychological phenomenon that occurs when humans see human-like figures that appear almost, but not quite, human. This phenomenon elicits unease, eeriness, or even a disgust response. While there has always been some pushback against AI entering the video game space, the community's response to DLSS 5 in particular was immediate and intense. I believe this is because NVIDIA has shown a stark difference in AI-altered human characters, and the big difference between DLSS 4 and DLSS 5 is that DLSS 5 is using generative AI, or is reinterpreting and generating what should be within any given image, while DLSS 4 is reconstructing what is already in the game without any external input. DLSS 4 = reconstruction / DLSS 5 = generation. Personally, I believe a degree of skepticism is always healthy when approaching a new technology, but to dismiss it entirely before we have even seen it used in a game (besides promotional material) seems absurd, especially considering developers will have full control over the intensity of DLSS 5 in their games. With that in mind, wouldn't the DLSS 5 image the developers end up shipping also be their "artistic design choice", which would render the argument that "DLSS 5 is going to ruin the way the developers of the game intended their title to look" moot? Only time will tell, but what we know for sure is that NVIDIA more than likely didn't anticipate gamers holding pitchforks and torches when it unveiled DLSS 5.
[86]
Nvidia CEO Jensen Huang Also Doesn't Like AI Slop, and Says DLSS 5 Control Lies With The Artists
"DLSS 5 is conditioned by the artistry of the artist" apparently. Ever since Nvidia announced DLSS 5 last week, the reaction has been, well, controversial. And while Nvidia CEO Jensen Huang initially came out and said gamers were wrong for their distaste of the technology, he seems to recognize why people didn't like the trailer now. Jensen Huang went on the Lex Fridmann podcast to talk about AI technology, and for a moment the topic changed to gaming. When Fridmann asked the Nvidia CEO about the controversy around DLSS 5, he said "I think their perspective makes sense. And I could see where they're coming from, because I don't like AI slop myself." He then clarified, saying "You know, all of the AI-generated content increasingly looks similar and they're all beautiful, so I'm empathetic towards what they're thinking." While that sounds like Huang was walking back the technology, he insists "that's just not what DLSS 5 is trying to do." Instead, he insists that DLSS 5 is "3D conditioned, 3D guided. It's ground truth structure [is] data guided." And that the model respects the geometry and the "artistry of the artist" and that it "enhances but does not change" the scene. One of the ways he suggests Artists have control over DLSS 5 is that it's an open model, so game developers can train their own models to control what the output will ultimately look like. Ultimately, Nvidia's intent is to give artists "the tool of generative AI" and that they can "decide not to use it." Either way, DLSS 5 won't be available to the public until at least Fall 2026, so Nvidia has time to really tune what it wants the technology to be able to do, and how developers and artists will be able to implement it into their PC games. Hopefully the final product is a little less ... extreme and will be able to run on graphics cards that people can actually afford.
[87]
Bethesda Responds To DLSS 5 Backlash, Only Makes It Worse
The Starfield dev tried to quell fan concerns about the new tech, but it didn't go over very well. Yesterday, Nvidia unveiled DLSS 5: an "AI-powered breakthrough" in visual upscaling tech that, to put it lightly, looks like garbage. The internet's reaction to the tech has been less than favourable, which is probably why Bethesda and Nvidia are in full damage control mode, with the former attempting to downplay the tech as "totally optional" and under their "artists' control." Bethesda's 2023 action RPG Starfield was one of the key stars of Nvidia and Digital Foundry's DLSS 5 tech demonstration, alongside the yassified versions of Resident Evil Requiem's Grace and Leon. That probably explains why the Bethesda Game Studios X account thought it necessary to weigh in on the negative response to Digital Foundry's tweet, especially considering that Nvidia published a clip of Todd Howard (assumedly at gunpoint) raving about how he thinks DLSS 5 is "amazing." "Appreciate your excitement and analysis of the new DLSS 5 lighting here," reads Bethesda Game Studios' reply. "This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists' control, and totally optional for players." And thus, the internet was quelled, and everyone thanked the poor social media intern who was forced to post this reply. Just kidding, obviously; it only exacerbated the situation. "Nah, fuck your excitement and fuck this slop," reads one reply. "You know another way to improve the look of your games? Actually letting your artist and developers do their job and not an AI," reads another. Bethesda isn't the only one trying to squash the pushback, however, as Nvidia's reply on the official DLSS 5 announcement video on YouTube echoes Bethesda's statement, also making a point of noting that game developers have "artistic control" over how they implement DLSS 5. 
"Important to note with this technology advance - game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic. The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied. It's not a filter - DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content."
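Nvidia hasn't shipped the DLSS 5 SDK, so its real interface is unknown, but the controls described in this statement (intensity, masking, excluded regions) boil down to a per-pixel blend between the rendered frame and the AI-enhanced frame. A minimal, purely hypothetical sketch: the function name, the flat-list frame format, and every parameter are illustrative inventions, not Nvidia's API.

```python
# Hypothetical sketch of the controls Nvidia's statement describes:
# masked pixels keep their original value; all others are blended
# toward the AI-enhanced value by a developer-chosen intensity.
# Frames are flat lists of floats purely for illustration.

def apply_dlss5_controls(original, enhanced, mask, intensity=1.0):
    """Blend enhanced pixels into the original frame, honoring a mask.

    mask[i] == True means 'exclude this pixel from enhancement'.
    intensity 0.0 leaves the frame untouched; 1.0 is the full effect.
    """
    return [
        orig if masked else orig + intensity * (enh - orig)
        for orig, enh, masked in zip(original, enhanced, mask)
    ]

frame    = [0.2, 0.4, 0.6]
ai_frame = [0.3, 0.9, 0.6]
mask     = [False, True, False]   # protect the middle pixel

half_strength = apply_dlss5_controls(frame, ai_frame, mask, intensity=0.5)
print(half_strength)  # approximately [0.25, 0.4, 0.6]
```

A masking control like this is what would let an art team fence off, say, character faces while still accepting enhanced environmental lighting, which is exactly the split several of the critics quoted in these articles are asking for.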
[88]
NVIDIA DLSS 5 Unveiled: AI Rendering Brings Cinematic Realism to Gaming
The tech giant wants to reduce the gap between how games look today and the cinematic quality seen in films. DLSS stands for Deep Learning Super Sampling, an AI-based upscaling tool developed by NVIDIA that first appeared in 2018. NVIDIA later introduced frame generation as an additional feature to help users achieve higher frame rates during demanding gameplay, and DLSS 4.5, released earlier this year, uses artificial intelligence to generate additional display pixels. DLSS 5 introduces a new direction for the product. The system employs a neural rendering method that processes every pixel of the image in real time, combining traditional rendering with an analysis of color, motion, and depth information obtained from the game engine. It then uses advanced lighting methods to enhance materials and environmental aspects of the scene.
[89]
The DLSS 5 memes are in full swing, so here are the ones that tickled me the most
During the Nvidia GTC keynote, company CEO Jensen Huang reiterated the company's commitment to gaming, stating that "This is the house that GeForce made." However, the company is still full steam ahead on AI, leveraging the tech to "revolutionise how graphics are made" in the form of DLSS 5. We got our first look at the tech last night, and the response has been mixed to say the least. Resident Evil Requiem was one game used to showcase the upscaling tech, and while I personally don't mind seeing Leon S. Kennedy with even more scruff, the tech makes Grace Ashcroft look like a totally different person, with distinctly higher cheekbones and fuller lips. Nvidia has said devs will retain 'artistic control' with this tech but, in the case of Grace, it's hard not to feel like DLSS 5 has daubed her with AI beauty standards. It's such a dramatic shift that plenty of PC gamers have been poking fun at DLSS 5, dismissing it as akin to a 'yassification filter'. Case in terrifying point, I stumbled across this Bluesky post that yassifies Requiem's stalker enemy, The Girl. Now that I've seen it, you can't unsee it either -- you're welcome. Elsewhere online, the memes have been similarly on point. In a similar vein to that downright haunting take on The Girl, developer Neal Agarwal reimagined The Password Game with a distinctly fleshy twist. After that, I think you've earned a palette cleanser. Among Us developer Innersloth was quick to playfully imagine what DLSS 5 could do with its game. Similarly, the official Cult of the Lamb X account also had fun reimagining the game's similarly cartoon-y art style. Meanwhile over on Reddit, r/PCMasterRace is perhaps predictably having a whale of a time dunking on DLSS 5. Besides a look-in from Handsome Squidward, an unfavourable comparison to Sonic the Hedgehog's original movie design, and even a throwback to Ecce Mono, the community turned their gaze to r/Nvidia. 
Alleging mods were deleting comments en masse, Redditors seized on yet another obvious DLSS 5 punchline. Now, the case could be made that the tech's genuine improvement to environmental lighting is getting lost among all the memes. Furthermore, leveraging AI to get more out of older hardware, especially in the midst of a memory supply crisis that has made PC components even more expensive, is no small thing either. To be fair to the tech, DLSS 5 arguably had a better showing in EA Sports FC -- where the character models are even more closely based on real people in the form of recognisable sports personas. Unfortunately, the tech's implementation of photorealism in Starfield did not play well with that game's stiff conversation animations, creating a downright uncanny effect in my humble opinion. DLSS 4's multi-frame generation felt like a genuine game-changer. DLSS 5 is attempting to change the game again, but so far that's looking far more literal than many would like. Here's hoping the implementation will be refined over the coming months, and I'll recognise Grace Ashcroft next time she stumbles over her own feet.
[90]
"It's Just So Lame" - RTGI Modder Thinks DLSS 5 Is Impressive Tech But Shares AI Slop Sentiment to a "Big Degree"
The reveal of NVIDIA DLSS 5 certainly didn't leave anybody unmoved. Between people who praised it as one of the most impressive demos and as a next-generation moment for graphics, and the strong backlash of many who view it as little more than an AI filter corrupting the original artistic vision of a game, the technology's debut at GTC 2026 has undoubtedly riled up gamers on both sides of the argument. Amidst all the noise created by such a heated public debate, it may be hard to find nuanced takes. That's what I sought when I reached out to Pascal Gilcher, also known as Marty McFly, the renowned rendering engineer, artist, and modder who is best known for creating the first and arguably best RTGI shader for ReShade. Since RTGI's original release back in May 2019, the shader, through its various iterations, has been used to enhance the graphics of older PC games with screen-space ray traced effects. Eventually, it became so popular that NVIDIA added it to the GeForce Experience Freestyle suite of graphics filters. The GeForce company was already familiar with the work of Gilcher, who was previously an NVIDIA employee. Needless to say, this makes him a great choice to provide us with a technically sound and relevant opinion on DLSS 5. As you'll read in the full quote below, Gilcher is impressed with the technology on one hand, but also shares the AI slop sentiment others have expressed, adding that there is "zero elegance" to this kind of approach. I think it's an impressive tech, but I too share the "AI slop" sentiment to a big degree. I doubt that these models, given the way they work, can ever be bullied to not cause wrong faces and such things. They are required to produce invasive results... yet not be invasive. The way I suspect it works is this: they need some sort of prior to train the model on. This would need to be unrealistic images and photorealistic ones. But... how does one create a wide variety of such image pairs? The answer is a GAN. 
They likely have a massive collection of game screenshots, and a lot of pictures from photos. That's also where that "Instagram" style that every face gets comes from. They train a large model to distinguish real from fake. And then they train a smaller model to turn unrealistic images into realistic ones and fool the large model. They can use an insanely large model for that as well, since the only goal is to get 1:1 pairs of realistic and unrealistic images; then they can distill a smaller model off that, one that learns to replicate it. In general, I find all AI lately extremely boring. I know that there is lots of research on ML models and I have experience with that, but even with a strong math background, it feels like wandering in the dark. Zero elegance with this, just throwing compute at a problem from different angles. It's just so lame, which seems to be the zeitgeist of lots of tech at the moment. We're actively seeking to engage with more developers, engineers, and artists to learn what they think of DLSS 5. If you're in the industry or are a modder and want to share your thoughts on it, feel free to reach out via email. In case you're wondering, Gilcher is still very active on the modding scene. While there are unfortunately no updates on the promising ReShade path tracer showcased over two years ago in The Elder Scrolls V: Skyrim and the subject of my previous interview with Gilcher, the modder has just released a completely rewritten iMMERSE Pro: Depth of Field shader, which he claims visually beats Unreal Engine's own Depth of Field while staying competitive in performance thanks to a very efficient software-based implementation of DirectX 12's Variable Rate Shading (VRS). Gilcher also promises an "eerily accurate" autofocus that consistently locks onto the intended subject. You can take a look at the Depth of Field shader in the embedded video below; it is available to Patreon subscribers from the "RAY TRACERS" tier (€4.50 monthly) upward.
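Gilcher's guess above (use a large adversarially trained model to manufacture unrealistic/realistic image pairs, then distill a small real-time model from those pairs) can be caricatured in a few lines. This is a toy sketch of the distillation step only, and entirely speculative: "images" are single floats, the "teacher" is a stand-in linear function rather than a GAN generator, and nothing here reflects Nvidia's actual training pipeline.

```python
# Toy sketch of distillation as Gilcher speculates: a big "teacher"
# produces (game pixel -> photoreal pixel) pairs, and a tiny "student"
# is fit to replicate the teacher cheaply. The teacher here is just a
# fixed linear map standing in for a GAN generator.

def teacher(pixel):
    """Stand-in for a large model that 'photorealises' a pixel value."""
    return 0.8 * pixel + 0.15  # arbitrary brighten-and-lift transform

def make_pairs(game_pixels):
    """Stage 1 (simulated): build (unrealistic, realistic) pairs."""
    return [(x, teacher(x)) for x in game_pixels]

def distill(pairs):
    """Stage 2: fit a tiny linear student y = a*x + b by least squares."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

game_pixels = [i / 10 for i in range(11)]        # samples from 0.0 to 1.0
student = distill(make_pairs(game_pixels))
print(abs(student(0.5) - teacher(0.5)) < 1e-9)   # student matches teacher
```

The point of the caricature is the economics Gilcher hints at: the expensive adversarial model only has to run offline to produce pairs, while the distilled student is what would have to run per frame on the GPU.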
[91]
Video game companies are making fun of Nvidia's new DLSS 5 tech
Overnight, we reported that Nvidia had unveiled DLSS 5, which, among other things, alters faces in games by having AI reinterpret them, something the company claims is a game-changer. Gamers around the world, however, weren't quite as impressed by the video game characters' new, generic looks, and the criticism was swift; as expected, a flood of memes followed. This has continued throughout the day, and now it's not just gamers making fun of Nvidia's AI-generated faces, but game companies have jumped on the bandwagon as well. Understandably, Nvidia's move to paint over developers' work with AI-generated visuals hasn't gone over well, and as a result, social media is now full of companies in the industry showcasing their products with and without DLSS 5.
[92]
DLSS 5 is Nvidia's boldest graphics leap yet, and its most controversial
Nvidia has taken the wraps off its next leap in graphics tech, and while the promise is sharper, richer game worlds, helping those who don't have monster gaming rigs to experience high-end visual fidelity, not everyone is thrilled about what that might mean for how games actually look. Why? Because it uses generative AI to override existing art direction in favour of filtered, commonplace AI art. The debate erupted after Nvidia revealed DLSS 5, a new version of its Deep Learning Super Sampling technology that takes AI-assisted rendering further. In simple terms, it's designed to enhance frames in real time using neural networks, adding extra lighting detail, materials, and subtle visual touches on top of what the game engine already produces. For Nvidia, it's a natural evolution of a technology that has steadily grown beyond its performance-booster origins. When DLSS first appeared, the idea was fairly straightforward: render the game at a lower resolution, then use AI to reconstruct a sharper final image. It helped push frame rates higher without forcing players to sacrifice visual quality. Since then, the feature set has expanded to include frame generation and smarter reconstruction models. With DLSS 5, the focus shifts again, this time toward AI-assisted scene enhancement, where the system analyses each frame and enriches the lighting and material details before they reach the screen. Real-time graphics have long pursued the holy grail of film-quality realism, and neural rendering could be a magic bullet that brings it to everyone, including developers, affordably. If it works as advertised, it may enable developers to achieve more complex lighting and surface detail without crushing performance or raising costs. But the reveal has already sparked a wave of scepticism among parts of the gaming community. 
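The original DLSS recipe described above, render fewer pixels and then reconstruct the full-resolution image, can be illustrated with plain linear interpolation standing in for the neural network. A toy sketch under that stated simplification: real DLSS uses a trained model fed with motion vectors and depth, not interpolation, and the row-of-floats format here is purely illustrative.

```python
# Toy sketch of the upscaling idea behind DLSS: render a sparse set of
# samples, then fill in the missing pixels. Linear interpolation stands
# in for the neural reconstruction model.

def upscale_row(row, factor):
    """Upscale a 1-D row of pixel values by an integer factor,
    filling the gaps between rendered samples by linear interpolation."""
    out = []
    for i in range(len(row) - 1):
        left, right = row[i], row[i + 1]
        for step in range(factor):
            t = step / factor
            out.append(left * (1 - t) + right * t)
    out.append(row[-1])  # keep the final rendered sample
    return out

low_res = [0.0, 0.5, 1.0]           # 3 rendered samples
high_res = upscale_row(low_res, 2)  # 5 displayed samples
print(high_res)  # → [0.0, 0.25, 0.5, 0.75, 1.0]
```

The appeal is the same as the article describes: the GPU only shades the sparse samples, and the reconstruction step manufactures the rest, which is why upscaling buys frame rate without dropping output resolution.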
Shortly after the announcement, clips circulating on social media drew criticism from players who felt the effect looked less like improved rendering and more like an AI overlay applied to the original game geometry. Some gamers went so far as to call it 'AI slop', arguing that it risks smoothing over the deliberate look crafted by game artists. The Nvidia demo didn't help, as it showed how DLSS 5 could 'improve' games' visuals with a mix of partner developer examples, including Capcom's Resident Evil Requiem and Bethesda's Starfield. In RE Requiem, we see improvements to the Grace character, with added 'realism', but it's the kind of uncanny valley AI is renowned for - her hair is softer, lips redder and fuller, skin smoother. Sure, it looks 'better', but it hardly represents an FBI character who is fighting for her life against the undead. The grit, hustle, and stress of her character design are removed in favour of generic glamour. Game artist Karlo Ortiz wrote on X: "Please take it from an artist, all of it being so detailed kills the balance of the image, making cinematic games lit terribly, bringing too much noise that kills focal points of the image, and turns interesting characters into yassified same ol." Another artist, Dave Rapoza, chipped in, tongue in cheek, writing: "I don't think you understand, the public doesn't want art direction, they want to play "high res photos - the game" starring their AI girlfriends as protagonists - not carefully curated ideas that create a fingerprint for an IP - all IP will look the same, I can't wait for meta glasses to use this tech so everyone on earth is my AI girlfriend." 
Game visuals are rarely accidental, and colour palettes, lighting styles and surface detail are usually carefully tuned to support a particular mood or art direction, sometimes due to tech constraints, sometimes for artistic meaning - Marathon could load up on detail but chooses a graphic design look; Romeo Is A Dead Man could be photorealistic but opts for a mix of artistic styles to tell its story. If an AI model starts modifying those elements automatically, some players worry the final result could drift away from the developer's intended look. In a statement, Jun Takeuchi, executive producer and executive corporate officer at Capcom, said: "At Capcom, we strive to create experiences that feel cinematic, compelling and deeply believable - where every shadow, texture and ray of light is crafted with intention to enhance atmosphere and emotional impact. DLSS 5 represents another important step in pushing visual fidelity forward, helping players become even more immersed in the world of Resident Evil." For me, good art direction stands the test of time. Long after technology has moved on, great design and innovative creative choices make some games stand out and last. Think of Persona 5, Nier: Automata, Okami, and even the original PS2-era GTAs with their pop art aesthetic, which is a style Rockstar has evolved, not made photoreal; all the games had a style designed for a reason and with purpose. That debate taps into a wider anxiety around generative AI creeping into creative pipelines. For critics, DLSS 5 represents another step toward algorithms reinterpreting art originally designed by humans. Of course, DLSS itself remains hugely popular. Many PC players rely on it to squeeze higher performance from demanding games, and recent versions have earned praise for how clean their reconstructed images look compared to older upscaling methods. So the reaction to DLSS 5 is likely to remain mixed. 
Some players will welcome anything that pushes graphics closer to photorealism, while others will watch carefully to see whether the technology enhances a game's visual style or quietly rewrites it. What do you think? Is this a good thing for everyone? After all, is it optional to enable DLSS 5 and use the 'AI filter', or should developers' and artists' creative decisions be respected? Take our poll and have your say.
[93]
"It's content-control generative AI" - Nvidia has DLSS'd too close to the sun, and I'm not convinced Jensen Huang will listen to anyone's DLSS 5 AI slop concerns
"DLSS 5 fuses controllability of the of geometry and textures and everything about the game with generative AI." Nvidia clearly has a DLSS 5 dilemma on its hands, and I'm absolutely with everyone who thinks the demo looks like AI slop. You really don't have to be a graphics card expert to know that something beyond the suite's usual AI upscaling is tainting the tech, and while Jensen Huang has gone into full "No, it's the children that are wrong" mode and probably won't listen to the feedback coming from every angle, he has at least confirmed some obvious truths. I'm not remotely surprised that Jensen Huang is kicking back at DLSS 5 concerns, but I am amused at his press briefing responses at GTC 2026. When asked by Tom's Hardware about the criticism, the CEO decided to go full Mr. Skinner meme, saying, "Well, first of all, they're completely wrong," before attempting to downplay the GPU tool backlash in the same tone as a frustrated retail manager. "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," explains Huang before stating it "doesn't change the artistic control." The latter is a response to everyone with eyes who thinks Resident Evil Requiem's Grace with DLSS 5 looks nothing like the original character. Huang isn't really helping build a DLSS 5 defense by mentioning generative AI, as it's that specific tech that's causing upset. In the "photorealistic" demo examples, characters like Grace frankly look like they've been yassified in the worst possible way, with facial elements being warped to fit some sort of beauty standard that's supposed to represent realism. Rather than addressing the look of DLSS 5, Huang seems more concerned with tackling the notion of choice. "All of that is in the control -- direct control -- of the game developer," proclaims the CEO. 
"This is very different than generative AI; it's content-control generative AI. That's why we call it neural rendering." Patronising tone aside, Jensen is confirming two things here. The first is that while there's room for technological pedantry, DLSS 5 does use gen AI to alter graphical elements like faces. It's not just a filter or even a lighting technique similar to Ray Reconstruction; we're talking about geometry that's generated using a model. Again, I personally didn't need Huang to really confirm that, as the visual results of DLSS 5 reek of gen AI. I'll admit, I'm absolutely not the target audience when it comes to photorealistic games, but upon catching the demo while in bed with the flu, I was instantly asking myself whether the realism was actually in the room with us or if the whole situation was just a fever dream. Rather than just calling an AI spade an AI spade, I decided to send the DLSS 5 demo to my photographer and videographer friend, Duncan Lorthioir. While I'm fairly confident that Grace Ashcroft just doesn't look like that and that the tool was changing facial features and adding makeup, I wanted some confirmation that the lighting scenario wasn't accurately changing geometry. In response, Duncan admitted that the DLSS 5 results "may seem more realistic at first glance," but highlighted that there's an "uncanny valley feeling going on." He also highlighted that, based on the lighting environment in the "DLSS 5 on" example, the lighting "should have changed more on the character's face," but instead feels like a "heavily retouched editorial picture." Am I really shocked that the company at the forefront of generative AI hardware is now trying to use its consumer GPUs to push the tech? No, but I am deeply disappointed that Nvidia is stonewalling the concerns of gaming PC players. 
Yes, DLSS has historically used AI in Super Sampling and Frame Generation to boost fps, and while even graphical enhancements like Ray Reconstruction can add something to an experience, DLSS 5 will pivot the suite to changing the actual look of games, even if it is being pitched as "optional." What I will say is that Nvidia fuels its DLSS ambitions based on user stats. Simply put, if it presents a strong number of players who keep DLSS 5 on for games, it will use that to back up its GPU feature decisions. I'm not saying you should nuke the upscaling settings that are helping you hit specific frame rates and provide a smoother experience, but if you are still playing Resident Evil Requiem or any game that first gets the option, switching it off may help get the message across.
[94]
"It's Re-Rendering the Game!" - It Turns Out Game Artists Don't Love DLSS 5, Despite Nvidia's Claims
Nvidia revealed DLSS 5 earlier this week, working neural rendering into its AI technology suite for graphics cards. The company claimed that game developers were on board, despite some of the drastic changes it seems to make to a game's aesthetic - especially in Resident Evil Requiem. Both Assassin's Creed Shadows and Resident Evil Requiem were used to show off the new technology, but it seems like the art teams for Ubisoft and Capcom, respectively, didn't know about it until the trailer debuted, with a developer from the former telling Insider Gaming, "we found out at the same time as the public." Capcom's art team was particularly shocked and worried by Resident Evil's trailer, given the company's anti-AI stance with previous Resident Evil games. The Capcom art team definitely wasn't alone in its reaction to the technology. Animator Mike York, who has worked on a wide variety of games, including Death Stranding 2, streamed a reaction to Digital Foundry's preview of DLSS 5. In it, he paused at several moments to point out some of the more radical changes that the neural rendering tech made to the games. For instance, York points out that with DLSS 5 enabled, Grace Ashcroft's eyes point in two different directions, along with extra details, like wrinkles in her lips, and an ear with a completely different shape. But more than that, York contends that rather than just fixing the lighting and some "materials" like Nvidia has claimed: "The geometry hasn't changed in the computer," York said. "So you're playing the game and the geometry isn't being changed, he's right, but he has to be careful on how he phrases it. The geometry isn't being changed, but what's happening is that it's getting painted over, sort of. Every single frame is being painted over, it's actually not showing the real geometry anymore." It appears, then, that the new DLSS 5 algorithm is using generative AI to generate entirely new images that are anchored to the original frames. 
That would account for the strange aesthetic differences and changes in details in background scenery. Nvidia does claim that developers maintain control over the intensity of the model. When I asked Nvidia for more information on these controls, it told me: "The operation of DLSS 5 honors the artistic intent. In addition, it provides developers with detailed controls such as intensity and color grading. Artists can use these controls to adjust global contrast, saturation, and gamma, and determine where and how enhancements are applied to maintain the game's unique aesthetic. Developers can also mask specific objects or areas to be excluded from enhancement."

From Nvidia's comment, it seems like developers and artists will be able to stake their claim, as it were, to certain assets to keep them from being altered by the algorithm. However, until we see these controls in action, it's not clear how much control is actually there. DLSS 5 is still a long way off from release, but many people are already looking at it as an unwelcome change to the art of video games. The backlash was so strong, in fact, that it prompted Nvidia CEO Jensen Huang to come on stage at GTC 2026 and claim that people were "wrong" about their distaste for the technology. Either way, DLSS 5 will likely change a lot before it makes its way to graphics cards this fall, so maybe Team Green will rein in some of its excesses. We won't know for a few months, though.
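As a purely illustrative sketch of what such controls could mean operationally (DLSS 5's SDK is unreleased, so every name below is hypothetical and nothing here reflects Nvidia's actual implementation), a global intensity setting plus per-object masking can be modeled as a per-pixel weighted blend between the original frame and the AI-enhanced one:

```python
def apply_enhancement(original, enhanced, intensity=1.0, mask=None):
    """Blend an AI-enhanced frame back toward the original, per pixel.

    original, enhanced: frames as lists of rows of (r, g, b) tuples in [0, 1].
    intensity: global effect strength (0 = enhancement off, 1 = full effect).
    mask: optional per-pixel weights in [0, 1]; a weight of 0 excludes that
          pixel from enhancement, as in per-object masking.
    """
    out = []
    for y, row in enumerate(original):
        out_row = []
        for x, px in enumerate(row):
            # Effective weight combines the global slider with the local mask.
            w = intensity * (mask[y][x] if mask is not None else 1.0)
            enh_px = enhanced[y][x]
            out_row.append(tuple((1 - w) * o + w * e
                                 for o, e in zip(px, enh_px)))
        out.append(out_row)
    return out

# Half-strength enhancement, with the left pixel of each row masked off
orig = [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]]
enh  = [[(1.0, 1.0, 1.0), (1.0, 1.0, 1.0)]]
mask = [[0.0, 1.0]]
out = apply_enhancement(orig, enh, intensity=0.5, mask=mask)
# masked pixel stays (0.0, 0.0, 0.0); unmasked pixel blends to (0.5, 0.5, 0.5)
```

In this toy model, setting a region's mask weight to 0 reproduces the "excluded from enhancement" behavior Nvidia describes, while intermediate weights dial the effect down rather than switching it off entirely.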
[95]
DLSS 5 is a game-changer, but this first look is controversial
NVIDIA's surprise DLSS 5 announcement at GTC 2026 has been controversial, with many in the wider gaming community and media comparing it to an Instagram-like AI filter that beautifies character models. This reaction mostly stems from one key example showcasing the character Grace in Capcom's Resident Evil Requiem, where the impression is that DLSS 5 is not only changing the character's look but also adding details like makeup. "NVIDIA and Bethesda have a long history of pushing gaming graphics and innovation forward, and DLSS 5 represents the next major step in that journey," said Todd Howard, studio head and executive producer at Bethesda Game Studios. "With DLSS 5, the artistic style and detail shine through without being held back by the traditional limits of real-time rendering. We're excited to work with this new technology and look to bring DLSS 5 to Starfield and future Bethesda titles." By the sounds of that, DLSS 5 may well be coming to The Elder Scrolls VI, too. Beyond the dramatic changes that more lifelike, photoreal lighting brings to character faces, changes to environments are also a key part of the DLSS 5 difference, resulting in an almost film-like look. So far, though, the discussion surrounding DLSS 5 has focused primarily on character faces. Even though something as simple as more realistic hair and shadow detail around a character's eyes and mouth can transform a scene, as DLSS 5 proves, the initial reveal has been met with very vocal criticism. The critical responses range from a complete rejection of all things AI to concerns about whether it fundamentally changes the original artistic intent or vision. Per the Hardware Unboxed post below, one of many critical responses from the industry highlights the use of a DLSS 4.5-style general model as a potential issue for the technology, potentially making every game that uses the tech look the same.
Regardless of which side of the argument you fall on, DLSS 5 is poised to be a game-changer for PC gaming and another step forward for game visuals when it launches later this year.
[96]
New Nvidia DLSS Tech Gives Characters AI Slop Faces - Kotaku
Finally, all your video games can look terrible and artificial. Today, GPU maker and tech giant Nvidia revealed DLSS 5, a new version of its existing upscaling technology that, based on the first images shared by the company, will slap a nice coat of AI slop onto in-game faces. On March 16, Nvidia announced the next evolution of DLSS, calling DLSS 5 the company's "most significant breakthrough in computer graphics since the debut of real-time ray tracing in 2018." Nvidia CEO Jensen Huang called the new tech the "GPT moment" for video game graphics and said that it blends "hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression." And uh, I have some doubts about that last point. Or really everything. In the blog post announcing DLSS 5, Nvidia showed off numerous examples, and all of the DLSS 5-enhanced screenshots of Grace in Resident Evil Requiem look like the kind of crap that gets made by angry fans online when they think a woman has too big a chin. According to Nvidia, here's how DLSS 5 works and also why it looks so bad: "DLSS 5 takes a game's color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame. DLSS 5 runs in real time at up to 4K resolution for smooth, interactive gameplay." Basically, as described by Nvidia, each frame of the game is being covered in real time with AI junk that is trying to mimic realism but ends up just creating a really off-putting image that also doesn't look at all like the original game. Previously, DLSS tech helped games run better by letting the PC render the game at a lower internal resolution and using AI to upscale it to something better looking. The results were often impressive. But this... this is something else, and looks like someone ran Resident Evil through some cheap photo filter.
[97]
"Most People Bashing DLSS 5 Are on the Peak of Ignorance" - Veteran Game Artist Shows Exactly How Much of a Difference Lighting Can Make
To say that the reveal of NVIDIA DLSS 5 at GTC 2026 sparked a controversy on social media would be the understatement of the year. Despite strong praise from veteran tech journalists who experienced the in-person demo, such as Digital Foundry and Ryan Shrout, and the reassurance from both NVIDIA and game studios like Bethesda that the (work-in-progress) tech is fully under developer control and optional anyway, the vocal anti-AI crowd hasn't stopped condemning NVIDIA's newest DLSS addition for being "just an AI filter" and "disrespectful of the developer's original design". However, DLSS 5 also has its proponents, including veteran game artist Georgian Avasilcutei, whose industry credits include triple-A games like Remember Me and Life is Strange at DONTNOD, Dishonored 2 and Dishonored: Death of the Outsider at Arkane, and Hogwarts Legacy at Avalanche Software. In an X post, Avasilcutei attacked most of the people who are bashing NVIDIA's new technology for being ignorant of what's actually going on behind the scenes with it. He noted that, unlike what some folks seem to believe, this is no mere hallucination-prone, prompt-based generator, but instead a model that reuses the exact same information from the game to augment its lighting and shading. To prove his point, he showed a comparison picture of a character model he personally worked on in two different lighting conditions: regular rasterization and ray traced lighting. The use of more accurate lighting, as well as hair and skin shaders, produces a result that, upon careful analysis, appears to have slightly different features from the original (the nose's ridge, for one). That happens without changing the actual facial shape at all. He also added that any artist would like to see their work in the best possible lighting, but that's usually impossible due to real-time rendering limitations that NVIDIA aims to supersede with DLSS 5. 
This is essentially the same argument made by Bethesda's Todd Howard, who, according to Digital Foundry, said this tech makes his original vision for Starfield and Oblivion Remastered a reality.

After this whole debate about DLSS 5, I came to the conclusion that most of the people talking about it are completely unaware of what they don't know...they're on the peak of ignorance and don't even grasp how little they understand. They just heard generative AI, and like Pavlov's dog, they just start drooling, thinking it's the same shit as unethical slop image generators...for the love of Christ...go and educate yourself before raging on the internet for no reason. DLSS 5 is not a prompt-based generator...it's not creating stuff based on someone else's images and hallucinating results. It uses the information from the raster to build up a final render frame with the same information but with better lighting and shading... I'll even give you an example on how much of an impact better shading and lighting has. This is a character I've worked on not long ago. On the left, you have a raster render with some bad shaders. On the right, you have a render with ray tracing on, a much better shader for both hair and skin. They don't even look like the same person...do they? This is what DLSS 5 is doing....getting a result like the one on the right (tbh a lot better) at a smaller cost than actually rendering it. Still the same geo, same textures, same light sources. Some of you will go and say the one on the left is better, and it's the artist's vision. It's not...it's just the artist's limitation due to shading and lighting constraints. Every single artist out there would love to get the right result in real time.

Avasilcutei also posted a modified Dunning-Kruger effect graph where most of the people talking (negatively) about DLSS 5 are situated on the peak of "Mount Stupid", having absolute confidence but also nearly zero competence in the subject.
The artist used the graph to underline, provocatively, that most DLSS 5 naysayers have failed to fundamentally understand the technology, stopping at a much more superficial level once they learned of AI involvement. Jabs aside, Avasilcutei's argument is one that most artists and photographers are already familiar with: a human face can look dramatically different based on the angle and lighting conditions, which is why portraits have historically been crafted with special care on both counts to bring out the best features of an individual while hiding the less conventionally attractive ones. It's no coincidence that classical portrait painters developed techniques like Rembrandt lighting specifically to sculpt facial features with shadow, or that Hollywood productions employ dedicated directors of photography to ensure actors are always shown in their most flattering light. DLSS 5, in Avasilcutei's opinion, is simply bringing that same level of lighting craft to real-time rendering with the power of AI. Undoubtedly, there will be more reactions on both sides of the DLSS 5 debate. We'll be here to collect the most interesting takes and report them ahead of the planned Fall 2026 launch. Stay tuned.
[98]
Nvidia calls DLSS 5 a "GPT moment" but the internet calls it a joke
On Monday evening, Nvidia unveiled its new DLSS 5 technology, with the inclusion of AI to enhance visuals, particularly facial details, drawing the most attention - a development the company believes is the next major leap forward in the gaming world. To illustrate this, a short video was also released showing how games can be enhanced with the technology, which you can watch below. Nvidia founder and CEO Jensen Huang comments on what he believes is a giant leap forward: "Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again. DLSS 5 is the GPT moment for graphics -- blending handcrafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression. Computer graphics comes to life, now what did we do? We fused controllable 3D graphics, the ground truth of virtual worlds, the structured data of virtual worlds, and the generated worlds. We combined 3D graphics with generative AI and probabilistic computing. One of them is completely predictive, the other one, probabilistic yet highly realistic. The content is beautiful as well as controllable. This concept of fusing structured information and generative AI will repeat itself in one industry after another. Structured data is the foundation of trustworthy AI." While the technology is undoubtedly impressive, it would be a stretch to say the internet was impressed. On the contrary, the criticism has been massive. Nvidia's "enhanced" faces rarely resemble what the game developer originally envisioned, and unfortunately bear a striking resemblance to the typically lackluster and unimaginative AI you get when you ask Grok or ChatGPT to enhance a photo or render a beautiful person. 
We've scrolled through countless posts in comment sections and on social media and can attest that virtually everyone who took the time to comment is thoroughly negative about what they consider to be automated AI-slop.
[99]
"First of All, They're Wrong" - Nvidia CEO Jensen Huang Responds to DLSS 5 Backlash - IGN
Nvidia just announced DLSS 5, an upcoming iteration of its AI software suite for graphics cards, at GTC 2026. However, there has been significant backlash due to the changes it seems to make to a game's aesthetic. But Nvidia CEO Jensen Huang disagrees. At a press Q&A event at the GPU Technology Conference, Huang took a question from Tom's Hardware's Paul Alcorn about the backlash to the technology, saying, "Well, first of all, they're completely wrong," while reiterating that control over the implementation remains with the game developers. "DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," Huang said. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level." And while that might be the case, much of the backlash seems to be over the aesthetic changes shown off in the trailer, rather than how the technology works. In that case, it's hard to suggest that what people see with their eyes is wrong, but according to Nvidia's CEO, well, it is. We still don't know what this technology will actually look like when it launches sometime this fall, but I suspect Nvidia will be looking for ways to show it off in a more flattering light as the year marches on. I've reached out to Nvidia for comment on this, and I'll update this story if and when I hear back. Until then, we'll just have to wait and see how DLSS 5 shapes up over the next few months.
[100]
Epic Games producer says people worried DLSS 5 is "detracting from art direction" are "absolutely insane," and "you guys would be going nuts" if it was not AI
"It was painful when candlemakers were put out of business by Edison" Epic Games lead producer Jean Pierre Kellams maintains that, had you not known NVIDIA's new rendering model DLSS 5 was powered by AI, you would collapse from the euphoria of seeing Resident Evil Requiem's Grace Ashcroft look like her skin was made out of other people's skin. "I know everyone is looking at faces, but look at the leather jacket. Look at the correct lighting on the neck," Kellams argues in a series of Twitter posts discussing NVIDIA's applying DLSS 5 to Grace, who I believe has suffered enough. "This is awesome," he says. Many other people do not think DLSS 5 is awesome, including me, since the AI rendering seems to inject characters with pork fat in order to simulate what a "beautiful human woman" looks like. And I'm afraid of someone doing that to me in my sleep. But Kellams says in another Twitter post, "All you guys roasting DLSS 5 like it doesn't look better/is detracting from art direction are absolutely insane." "The lighting and shading improvements are bonkers," he continues. "If that was shown as a next-gen hardware reveal and not 'AI' you guys would be going nuts." The producer compares some people's disappointment with DLSS 5 to how "it was painful when candlemakers were put out of business by Edison." Kellams could have a point there. If I really don't think about it, it's true that Grace's "chin is slightly lighter," and her "eye socket is now accurately casting a shadow," as Kellams points out, and that these facts are equivalent to the advent of electricity. Wait, no they're not. Sorry, I think I was just operated on by DLSS 5.
[101]
NVIDIA CEO Says Gamers Are "Completely Wrong" About DLSS 5 Being 'AI Slop', Insisting Developers Have Full Artistic Control
NVIDIA's DLSS 5 showcase has been criticized by several gamers who oppose generative AI, but Jensen says those individuals are 'completely wrong'. The unveiling of DLSS 5 at this year's GTC was a complete shocker, given the event's enterprise focus, but it appears Jensen decided to throw in the surprise. Leveraging neural rendering, DLSS 5 enhances a title's visuals by running them through an 'AI layer,' aiming to create hyper-realistic scenarios. While many gamers were surprised by the quality improvements with DLSS 5, a decent chunk of the gaming community was against NVIDIA's efforts to produce what they call 'AI slop' with the new upscaling technology, but Jensen has an answer for them.

Well, first of all, they're completely wrong. The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI. It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level. - NVIDIA's CEO Jensen Huang

The NVIDIA CEO's answer targets the perception that DLSS 5 is merely a filter on visual quality, and instead argues that the technology is integrated into the 3D engine's pipeline, enabling a form of control he calls "generative control". At the same time, DLSS 5 also ensures that the AI layer involved doesn't do 'guess work' with visual enhancements; it takes structured data, such as the 3D skeleton (geometry), the way the character moves (motion vectors), and the depth of the scene, and then churns out textures that resemble realistic scenarios, instead of the 'AI slop' narrative out there. Another interesting emphasis by Jensen when responding to DLSS 5 criticism was that developers are in control of how eagerly they integrate the upscaling technology into their titles, which is why he calls it "content-control generative AI".
NVIDIA's CEO previously reiterated that DLSS 5 is the 'ChatGPT moment' of upscaling, where the input prompt comes from developers feeding structured data to generate a realistic output. There's no doubt that DLSS 5 is a significant advancement in rendering, but the primary source of criticism is the way the originality of textures changes significantly once AI is introduced. It is up to the developers to ensure that the 'artistic' intent remains with DLSS 5, and the way to do that is to feed in structured content that best suits it.
[102]
Nvidia's DLSS 5 is a Slap in the Face to the Art of Video Game Design - IGN
So, Nvidia just revealed DLSS 5, its new AI graphics tech that uses generative systems to "enhance" video games with more photo-realistic effects, and I'm not going to worry about mincing my words here: I think it looks shit. Yes, we've barely seen a minute of it in action, but if what's teased is where technology giants think the future of graphics tech in games is going, then I'm afraid I'm out. The first shot of Nvidia's DLSS 5 announcement trailer gives us a good look at the impact the technology has on Capcom's latest, Resident Evil Requiem. It's already a stunning-looking video game, so I can't say I ever felt it was in need of enhancements, but lo and behold, as that green bar sweeps across the screen, a yassified Grace Ashcroft is left staring back at us, devoid of any discernible character, as if the light behind her eyes has been switched off by the technology. It's the sort of smoothed-over face and unrealistic lighting that we've become accustomed to seeing in the corners of the app store, or on the advertising banners of websites you'd only look at in incognito mode. It takes a character so carefully crafted by the art team at Capcom, and says "no, we can do that better," adding a layer of sheen that makes Grace stand out in Requiem's world, rather than feel a part of it. Not once on my playthrough of Resident Evil Requiem did I think that it didn't look photo-realistic enough or that either of its two protagonists was in need of a glow-up, and I have definitely never played a match of EA FC 26 where I wished that Virgil Van Dijk looked less like his real-life counterpart. I play games to experience a crafted piece of artistry, whether the developers are aiming to take me to far-flung fantasy worlds, or recreate real-life with as much fidelity as possible. But DLSS 5 offers none of this to me, instead replacing the paintbrush held in a human hand with an AI slopping a big vat of oil over the canvas. What are we doing here? 
AI has no artistic or authorial intent. What it does is read an image as if it were purely zeros and ones and overwrite it according to its training data, theoretically in an attempt to make it look "better". In Nvidia's accompanying blog post, the company explains that the model is trained to "end-to-end understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast - all by analyzing a single frame." The idea, in theory, is to improve the look of characters while also keeping them grounded in the scene they already stand in. The results are just off-putting to my eye, though. Every one of the Hogwarts Legacy characters in the trailer looks like they're now spotlit from behind the camera in a way that by no means looks natural. Yes, we now live in a world where game environments are largely dynamically lit, but the developers and technical artists behind those systems still have ultimate control over how they look. They can decide the mood they're trying to set and will spend much of their time making sure it fits the game's vision, but Nvidia and the tech behind this AI filter obviously think they know better. Art direction is such a huge part of video game design. The worlds and characters that these developers spend years handcrafting are what root us in the experience. I very recently started a replay of Uncharted 4, and it still strikes me in this nearly decade-old game how incredibly nuanced Nathan Drake's face is during its cutscenes. There are little wrinkles, bashes, and bruises that come and go throughout its story that reflect his place in the world and the struggles he's going through.
I couldn't imagine ever wanting an AI filter layered on top, that would no doubt smooth over his wrinkles and remove his blemishes, recalibrating Naughty Dog's flawed hero to better reflect the "perfect" men that are promoted by society and thus flood its training data. But cuts, scuffs, and genetic "imperfections" are the small details that make us connect to characters, and are the intent of the artist who made them. The technology behind DLSS 5 threatens not only to make games visibly distracting but also completely alter the emotions of a story if it is embraced by the corporations that employ these artists. I can only imagine the collective sigh let out by the majority of video game developers when this trailer was released, but fear that the people in charge of the money may have let out a little smile instead. This feels like the dawn of a new era, and a saga that will stretch on far beyond Nvidia's announcement this week. Already we're seeing pushback from fans slamming DLSS 5 as "AI-generated slop," and Bethesda has quickly committed to "further adjusting the lighting and final effect" of Starfield's implementation of the tech after it showed multiple characters suffering from the same smooth-faced fate in its space RPG. "This will all be under our artists' control, and totally optional for players", Bethesda Game Studios added on social media. It may well be completely optional for now on existing games, but I fear for what happens when studios start having to use this tech more in step with the development process. If we allow this sort of technology to thrive, are we giving the go-ahead for companies to place less importance on curated art direction and instead do the bare minimum and let AI fill in the gaps? I don't know about you, but I like my art to be made by humans. I want to know if someone decided to light a scene in a certain way or if the small details on a character's face were sculpted with intention. 
So I'll continue to say that visual "upgrades" like this look like shit -- it's not like the tech behind it has any feelings to hurt anyway.
[103]
There's upscaling, and then there's straight-up changing a game's art direction, and your gaming hardware should only do one of those things
"Game developers have full, detailed artistic control over DLSS 5's effects" One of my favorite games of all time is Dishonored. Ignoring its amazing sandbox gameplay, exceptional level design, and barrels of charm that make it a joy to play more than 10 years after its release, its art style is one of the things that make it feel truly unique. With the announcement from GTC that DLSS 5 is going to be coming to some of the best graphics cards this fall, I'm very afraid of what it means for games like Dishonored. DLSS 5 is being met with criticism at every turn so far. It looks like it's going five steps beyond the upscaling we've seen from gaming PC hardware until now, and not in a good way. In fact, it looks like an undeniable generative AI do-over that fundamentally changes the way in-game characters look. Until now, AI upscaling has been about providing gamers with higher frame rates, but while making games smoother is one thing, trying to make them look "better" is something new entirely. If anything, toggling DLSS on makes your games look worse, but that's the price we've all accepted to unlock better frame rates in an era where game optimization isn't exactly at its peak. Nvidia's been quick to try and get out ahead of my biggest concern with DLSS 5 - the destruction of art direction: "Important to note with this technology advance - game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic," Nvidia commented on its own YouTube reveal trailer. "The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied. It's not a filter - DLSS 5 inputs the game's color and motion vectors for each frame into the model, anchoring the output in the source 3D content." But to be honest, I'm left asking why they'd want to implement any of the changes shown in this trailer. 
I don't know about you, but there hasn't been a moment in the last decade of gaming when I've thought, "Man, I wish all my video games looked more photorealistic." Art direction is, in my opinion, the way the soul of a game comes across. Like its sound design and distinct-feeling gameplay, it's what really makes a game feel unique. I love observing the subtleties between the character models in The Last of Us compared to Marvel's Spider-Man. Both have completely different approaches to character animation, but both are capable of telling an excellent story with their chosen direction. You can look at franchises that have been around for ages. Resident Evil, Mario, Zelda, and even Grand Theft Auto - they've all found ways of modernizing their art styles without losing the soul of how their games looked back in the day. With the explosion of the indie game scene, I've loved that we celebrate any and all art styles today. "Game of the Year" isn't automatically the one with the most "realistic" graphics; that award can go to anything because it's about more than just crafting a lifelike look. And I think trying to apply a blanket statement to all of these art styles by saying DLSS can now make them look "better" doesn't work. All of those unique bits of visual identity, the way faces look, the way lighting is directed at certain points of the environment, and the way the aesthetics come together have all been put there for a reason. They are, for want of a less blatant term, artistic choices. Even if game developers have control over how it's implemented, DLSS 5 seems to disrupt a lot of that direction. DLSS 5 seems to be pushing for all games to look photorealistic. Starfield, Resident Evil Requiem, EA Sports UFC - all the games shown in the trailer end up looking exactly the same, their subtle art style differences removed in the name of... something?
Ignoring completely that it looks like AI-generated slop, all of the hard work of character and environment artists, all the technical feats of animating facial meshes, all of the detail brought over from motion capture - with DLSS 5, it all fades into the background as Nvidia's new form of upscaling takes over. There's been so much controversy around any and all use of generative AI in game development lately, but with DLSS support being such a widespread tool in so many games today, I do worry about how this is all going to be implemented. Nvidia says that game devs will have full, detailed artistic control over it, but to what extent? Is DLSS 5 a special opt-in, or will allowing support for DLSS 4.5 for performance-boost reasons also mean they have to submit to DLSS 5's visual substitutes? For the record, this sort of question mark is why I've been wishing that there was less of an emphasis on AI upscaling within modern graphics cards. In my eyes, hardware shouldn't be so reliant on software to work to its full capabilities. DLSS isn't something you own and therefore control when you purchase a bit of hardware, so how much do you really own and control your graphics card if it's reliant on upscaling that can change after the purchase has been made? If DLSS 5 causes enough controversy, will game devs stop using it? Will support for it within games then become less widespread? If so, your RTX 50 Series GPU might not be the future-proofed purchase we thought it was. I'm getting a bit carried away with a potential future, but it's because I'm not sure I like the direction Nvidia is going in right now. It was only a few weeks ago that the brand's CEO said his company "created the modern video game industry". With DLSS 5, I hate to contradict Nvidia's comment on its own video, but it looks like it now wants to recreate the industry in its own image.
[104]
"If DLSS 5 Was Shown as a Next-Gen Hardware Reveal and not AI, You Guys Would Be Going Nuts" - Game Dev Hits Back at Anti-AI Crowd
Predictably, despite the lavish praise from Digital Foundry experts and NVIDIA's reassurance that DLSS 5 would be tunable by game developers to respect their artistic intent, the explosive announcement of the new technology during CEO Jensen Huang's GTC 2026 keynote has already attracted a vocal anti-AI crowd that has since bashed DLSS 5 heavily on social media for being "an AI filter", "slop", and for going against the work of human artists. However, the tech also has plenty of supporters. One of them is JP (Jean Pierre) Kellams, a veteran developer who started his career working on localization and game writing at CAPCOM on titles like Devil May Cry 4, GODHAND, and Bionic Commando. He then joined PlatinumGames at its founding and worked as a writer and music coordinator on titles such as Bayonetta, MadWorld, Vanquish, Anarchy Reigns, Metal Gear Rising: Revengeance, The Wonderful 101, and Bayonetta 2 before being promoted to lead producer on the ill-fated Scalebound. Following Scalebound's cancellation, he returned to the US to work on EA's Madden team in Orlando from 2017 to 2021, serving as development director and producer. For the last five years, he's been a lead producer on the Harmonix team (which was acquired by Epic Games in late 2021) to help "develop musical journeys and gameplay for Fortnite, starting with Fortnite Festival." Kellams didn't mince words in his tweeted response to the anti-DLSS 5 folks: All you guys roasting DLSS 5 like it doesn't look better/is detracting from art direction are absolutely insane. The lighting and shading improvements are bonkers. If that was shown as a next-gen hardware reveal and not "AI" you guys would be going nuts like the Watch Dogs demo. I get that some very vocal people don't like AI. But guess what. Technology doesn't care if you like it. It is a tool. AI isn't coming. It is here. Just this morning, my oncologist was telling me all the ways it is helping cancer treatment and research. 
Kellams then highlighted that AI-based technologies could enable marginalized creators to make their dreams a reality. He admitted that the transition is likely to be painful for some people, but likened it to similar technological transitions in the past, like going from candles to electricity, from landlines to cellphones, or from mail to e-mail. Kellams is not the only one. There's another renowned voice in the tech industry who has spoken in favor of DLSS 5 on X with a nuanced take: tech journalist Ryan Shrout, formerly the founder and president of PC Perspective, then the senior director of gaming at Intel, and, more recently, the president of Signal65, a technical marketing and competitive analysis company affiliated with Six Five Media and the Futurum Group. Shrout, like Digital Foundry, watched the DLSS 5 demo in-person and pointed out that while some people's first reaction to the "new" faces may be understandable due to a psychological effect, the improvements are much broader than that. The visual improvements are significant. Not incremental. Significant. But if you've been scrolling social media, you'd think NVIDIA just shipped an Instagram beauty filter for video games. And I get why that's the first reaction. But it misses the true picture by a wide margin. [...] I've probably seen ten different "floating head" tech demos over the course of my career. That's not an exaggeration. They're always a single head with no hair, no body, no environment, because rendering a photorealistic face at that level of quality is so expensive that it can only be done in isolation. You never see it inside an actual game, because the performance budget won't allow it. DLSS 5 closes that gap in a pretty dramatic way. And because that's the area where the delta between "before" and "after" is most visible, that's what everyone is reacting to. The NVIDIA team put it well during my demo. It's a psychological effect. You've seen environments rendered really well before. 
When you suddenly see a character rendered at that same photorealistic level, your brain flags it immediately. It stands out. Fair enough. But focusing only on the faces is wrong. In Starfield, there's a countertop scene with a coffee machine, some paper towels, a cup, napkin holders. Standard environmental clutter. With DLSS 5 off, everything looks flat. The coffee maker fades into the background. Toggle it on, and suddenly the objects have shape. The lighting wraps around them naturally. The same thing played out across every title. In Oblivion Remastered, the water went from good video game water to something that could pass for real, with the kind of light interaction and shimmer you'd expect from an offline render. In Assassin's Creed Shadows, the trees and distant foliage gained dramatically better depth and separation in how light moved from the canopy down through the branches. Shrout stressed that it's not a filter but a much more complex unified model capable of scanning a frame, recognizing the game's assets, and processing them based on how light should behave when interacting with them. He also noted the granular level of control developers will be able to exercise to ensure the game looks the way they want it to, thanks to spatial masking, color grading controls, etc. Ultimately, he closed his article with this statement: The early social media reaction is predictable. New technology that changes how games look will always generate strong opinions, especially when AI is involved. But the knee-jerk "it's just a face filter" take doesn't hold up once you've actually seen the full scope of what DLSS 5 is doing across an entire scene, across multiple games, in real time. Go look at a coffee maker. Go look at stone textures. Go look at the way light passes through a leaf. That's where the real story is. No doubt, there will be many more reactions from both sides of the argument to the reveal of this groundbreaking technology.
Stay tuned on Wccftech for more as we report it, and share your own opinions in the comments below.
[105]
'This Will All Be Under Our Artists' Control': Bethesda Commits to 'Further Adjusting' DLSS 5 Use in Starfield Following 'AI Slop' Backlash - IGN
Bethesda has committed to "further adjusting the lighting and final effect" of Nvidia's controversial DLSS 5 visuals on Starfield, after fans strongly criticized the game's AI-like faces when the new technology was applied. Nvidia announced DLSS 5 yesterday and dubbed it "the GPT moment for graphics." (For more about DLSS and why it matters to gaming, check out IGN's handy guide). As part of the announcement, Nvidia acknowledged the challenges that AI video models have faced in the past, but claimed its solution was to tie the model to the color and motion vector data taken from the game engine - just like with Frame Generation - to keep the output grounded in the original scene. The model is then trained to "end-to-end understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast - all by analyzing a single frame," then use that information to generate images. However, despite comments from Bethesda boss Todd Howard hailing DLSS 5's impact as "amazing," the results have failed to impress some fans, who have hit back at the photorealistic lighting and facial details, forcing Bethesda to seemingly walk back some of that excitement soon afterwards. "Appreciate your excitement and analysis of the new DLSS 5 lighting here. This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game," the company said. It also stressed that "this will all be under our artists' control, and totally optional for players." The latter comment stems from complaints from players that DLSS 5 essentially overlays AI-generated graphics over the original, with some calling it "an insult to your own artists" and pleading, "please don't subject your art teams to this."
DLSS debuted back in 2018 with the RTX 2080, and it was initially just the Deep Learning Super Sampling that it's named after. The idea was to take a game, render it at a lower resolution, and then upscale it using AI back to your native resolution. DLSS has evolved a lot in the years since, adding features like Frame Generation and Reflex, and now an AI model that injects new lighting and materials to make the scene more "photorealistic." Opinion on the use of AI in games continues to divide studios and their fans, with some vehemently against its use, while others claim it's an inevitable part of the future. Rockstar co-founder and former Grand Theft Auto writer Dan Houser recently likened AI to mad cow disease, but the CEO of Genvid -- the company behind choose-your-own-adventure interactive series like Silent Hill Ascension -- has claimed "consumers generally do not care" about generative AI in games, and stated that "Gen Z loves AI slop."
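The super-sampling idea described above (render at a lower resolution, then enlarge back to native) can be sketched in a few lines. This toy uses nearest-neighbour pixel duplication as a stand-in for the neural network; every name here is illustrative, and the whole point of the real technique is that a trained model reconstructs plausible native-resolution detail instead of just copying pixels.

```python
def upscale(frame, scale):
    """Nearest-neighbour upscale of a frame stored as rows of pixel values.

    Stand-in for the neural step: real DLSS feeds the low-res frame
    (plus motion vectors from the engine) through a trained model
    rather than a fixed duplication kernel.
    """
    h, w = len(frame), len(frame[0])
    return [[frame[y // scale][x // scale]
             for x in range(w * scale)]
            for y in range(h * scale)]

# "Render" a cheap 2x2 frame, then upscale it back to a 4x4 "native" frame.
low_res = [[10, 20],
           [30, 40]]
native = upscale(low_res, 2)
# Each source pixel now covers a 2x2 block of the output.
```

The performance win comes from the first step: the GPU only shades a quarter of the pixels, and the (much cheaper) reconstruction pass fills in the rest.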
[106]
"We need to push back harder": Nvidia's DLSS 5 "AI slop" filter is being torn apart by industry veterans from Baldur's Gate 3, Palworld, and many more
DLSS 5 from Nvidia was revealed yesterday, in a video demonstrating several recent releases getting upscaled by the "real-time neural rendering model," including Resident Evil: Requiem and Starfield. It's essentially an AI upscaler, with the results being rife with that weird, uncanny feel generative content has, and some developers are voicing their concerns. "This DLSS 5 AI dogshit is actually depressing man," Dave Oshry, co-founder and CEO of New Blood Interactive, posted on Twitter. "Even worse is that a whole generation is growing up who won't even know this looks 'bad' or 'wrong' because to them it'll be normal." Besides making his opinion clear, he adds a rallying call: "We need to push back harder against it." Amid the onslaught of memes clowning on the software's 'realism', other members of the industry are in agreement. "Watching my daughter grow up where AI slop is the norm and her not knowing quality of quantity really bums me out," Arman Nouri, senior environment artist at Epic Games, tells Oshry in his replies. John Buckley, the head of publishing and communications at Pocketpair, posted, "One step closer to DLSS 6," with a popular - and crass - clip from Star Trek spoof Star Trash, involving a particular use of ultra-realistic holograms. We create the future. Meanwhile, Michael Douse, director of publishing at Larian Studios, now believes the latest Resi has been ruined by the anticipation of this upgrade. "Playing some Resident Evil Requiem this morning," he jokes on Twitter. "The game is great but I just wish it infused pixels with photoreal lighting and materials, bridging the gap between rendering and reality alas unplayable for now." Please pour one out for his current playing experience. In all seriousness, Requiem's already a game that looks incredibly impressive on contemporary hardware, and there's a strong argument for the idea DLSS 5 is just detracting from the art direction in favor of whatever it's trying to accomplish. 
Given the pushback and division, we'll see if Nvidia decides to pivot any. I wouldn't hold my breath.
[107]
"I Don't Think I've Seen a Demo Quite as Astonishing as DLSS 5 for Quite Some Time" Says Digital Foundry Founder
NVIDIA dropped a massive bomb at GTC 2026: the announcement of DLSS 5, which left almost everyone surprised, except for the folks at Digital Foundry, who were already able to check out this new technology. Richard Leadbetter and Oliver McKenzie discussed it in a first-look hands-on preview and, by and large, lavished praise on it. Yet even their highly positive video is already filled with commenters suggesting that DLSS 5 is "just an AI filter" and that it is "disrupting the original art style". Now, as Richard points out in the video, the only major aesthetic difference is in one shot of Grace Ashcroft from Resident Evil Requiem, where her face looks, admittedly, quite different from the original game. In all other cases shown so far, from Starfield to EA Sports FC and even another shot of Grace from CAPCOM's game (as seen in the comparison picture below), the art style appears to be largely preserved, except that it's received a massive, generational boost in detail. Indeed, Digital Foundry confirms that DLSS 5 retains the game's original geometry in its entirety and simply passes it through its fine-tuned AI model. That said, with several months still left before the technology's planned Fall 2026 debut, there's room for improvement, as NVIDIA itself admitted, calling this demo a "snapshot of current development for the model" with plenty of tuning left to do. Furthermore, DLSS 5 is directly integrated into the NVIDIA Streamline SDK, allowing developers to use the detailed controls for intensity, colour grading, and masking to determine where and how enhancements are applied, helping them to maintain each game's unique aesthetic.
Indeed, many developers are already on board with it, including the following, who have provided official statements:

Todd Howard, studio head and executive producer at Bethesda Game Studios: "Bethesda has such a rich history pushing graphics with NVIDIA, going all the way back to Morrowind, with that incredible water. When NVIDIA showed us DLSS 5 and we got it running in Starfield, it was amazing how it brought it to life. We've played it. We can't wait for all of you to do so as well."

Jun Takeuchi, executive producer and executive corporate officer at CAPCOM: "At CAPCOM, we strive to create experiences that feel cinematic, compelling and deeply believable -- where every shadow, texture and ray of light is crafted with intention to enhance atmosphere and emotional impact. DLSS 5 represents another important step in pushing visual fidelity forward, helping players become even more immersed in the world of Resident Evil."

Charlie Guillemot, co-CEO of Vantage Studios: "Immersion is about making the world feel real. DLSS 5 is a real step towards that goal. The way it renders lighting, materials and characters changes what we can promise to players. On Assassin's Creed Shadows, it's letting us build the kind of worlds we've always wanted to."

We'll definitely cover a lot more about the new DLSS as we get closer to its launch. Stay tuned!
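The intensity and masking controls described above amount to a per-pixel blend between the artist's original frame and the enhanced one. Here is a minimal sketch of that idea in plain Python; the function name, the linear blend, and the 0-to-1 mask convention are assumptions for illustration, not the Streamline SDK's actual interface.

```python
def apply_with_mask(original, enhanced, mask, intensity=1.0):
    """Per-pixel blend of original and enhanced pixel values.

    mask = 0.0 keeps the artist's original pixel untouched,
    mask = 1.0 allows the fully enhanced pixel, and `intensity`
    scales the effect globally. Illustrative only.
    """
    return [o + (e - o) * m * intensity
            for o, e, m in zip(original, enhanced, mask)]

original = [0.20, 0.50, 0.80]   # artist-authored pixel values
enhanced = [0.90, 0.90, 0.90]   # hypothetical model output
mask     = [0.0, 1.0, 0.5]      # e.g. exclude faces, allow scenery
blended  = apply_with_mask(original, enhanced, mask)
# blended[0] keeps the original value; blended[1] is fully enhanced.
```

Setting `intensity=0.0` would return the original frame everywhere, which is one simple way a developer-facing "off" switch could fall out of the same blend.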
[108]
Bethesda says Nvidia's controversial new DLSS 5 AI filter "will all be under our artists' control, and totally optional for players"
"This is a very early look, and our art teams will be further adjusting the lighting and final effect" Bethesda Game Studios has posted official comment on Nvidia's controversial announcement of DLSS 5, a new lighting filter that seems to apply a blatantly AI-generated filter over games. Seemingly in attempt to cool down the temperature following DLSS 5's reveal, Bethesda issued a response to Digital Foundry's analysis, which was published the day of the announcement and largely praises the new tech. "Appreciate your excitement and analysis of the new DLSS 5 lighting here," reads a reply from the official Bethesda Game Studios Twitter (X) account. While Digital Foundry seems by and large impressed by DLSS 5, Bethesda appears to be aware that the vast majority of comments on the media company's video are not positive. "This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game," reads Bethesda's reply. "This will all be under our artists' control, and totally optional for players." Make no mistake, Bethesda is doing damage control here. Studio head Todd Howard was among several high-profile gaming executives gassing up the tech this morning, but it seems that no matter how many industry leaders tell the gaming community that AI-generated filters are a good thing, people still like games to stay faithful to artistic intent. It seems Bethesda's official comment is aimed squarely at that sticking point, although the exact involvement of studio artists, character designers, and other developers in DLSS 5 remains unclear. The controversy here stems particularly from the use of these gen-AI filters over characters' faces. The announcement video showcases the effects of DLSS 5's "real-time neural rendering model" on games including Resident Evil Requiem, Starfield, and Hogwarts Legacy, with the very first shot revealing a distinctly AI-looking Grace Ashcroft. 
While other character models, coincidentally including Grace's Resident Evil Requiem co-lead Leon Kennedy, aren't as conspicuous, I suspect DLSS 5 will continue to divide players so long as it threatens to change the look of characters so dramatically.
[109]
Nvidia CEO Jensen Huang on DLSS 5: I don't love AI slop myself
A couple of days back, Nvidia's DLSS 5 reveal trailer got the entire gaming community talking and received mixed reactions. While some people expressed their excitement for the new technology, others plainly called it an "AI slop" filter being put on their favourite games. Now, Nvidia CEO Jensen Huang is defending DLSS 5, saying that he himself hates AI slop and that DLSS 5 is not that. Read on to know more. Huang was speaking on the Lex Fridman podcast about AI when he made the remark. During the conversation, the topic briefly shifted to gaming, specifically around DLSS 5. As mentioned already, some gamers and developers feel that using AI in graphics could harm creativity and make games look too similar. And Huang actually agreed with part of that concern. 'I think their perspective makes sense. And I could see where they're coming from, because I don't like AI slop myself,' he said, adding, 'You know, all of the AI-generated content increasingly looks similar and they're all beautiful, so I'm empathetic towards what they're thinking.' But does that mean Huang is indicating that Nvidia will roll back the tech? Not really. He then clarified that DLSS 5 is designed to work with the game's existing data. It uses 3D information from the game world, like geometry and lighting, to improve visuals. In simple words, it enhances what is already there instead of changing how the game looks. He also said developers will have control. Since the model is open, studios can tweak or train it to match their artistic style. Huang also stressed that game developers who don't like DLSS 5 can simply choose not to use it. DLSS 5 is not coming anytime soon though. It is expected to be launched later this year. And by then, the final product might be even more refined. For the unversed, DLSS 5 was introduced last week with a short video shared on Nvidia's official YouTube handle.
The video showed stills from popular titles like Resident Evil Requiem, Hogwarts Legacy, and Starfield, comparing what each game would look like with DLSS 5 on. It received mixed responses and left the internet divided.
[110]
Nvidia DLSS 5 announced, claims to make your games look more realistic
Have you ever wondered how companies improve in-game graphics? What do they actually do to make virtual worlds look closer to reality? The answer usually comes down to a mix of better hardware and smarter rendering techniques like ray tracing. But even then, there has always been a visible gap between real-time game graphics and movie-quality visuals. Nvidia now wants to shrink that gap further. At the recently held GTC 2026, the company announced DLSS 5, a new AI-driven technology that focuses on making games look more realistic, not just run faster. And according to the videos and photos that Nvidia has shared, it promises to make your games look insane. Earlier this year, Nvidia announced the arrival of DLSS 4.5, which promised to give players higher frame rates. And now, the company seems focused on making your games look better as well. With DLSS 5, Nvidia is taking a different approach to how frames are created. Instead of simply rendering every detail using traditional techniques, the system uses a neural rendering model that enhances each frame in real time. It takes inputs like colour and motion data from the game and then uses AI to improve lighting, materials and overall scene quality. The result, at least on paper, is visuals that look far more lifelike. Nvidia says the model understands complex elements like skin, hair and fabric, along with lighting conditions such as backlit or overcast scenes. It then applies enhancements that stay consistent across frames, which is critical for games where visual stability matters. This also allows effects like realistic skin translucency and accurate light interaction with different surfaces, something that usually needs heavy rendering. It is also important to note that all of this runs in real time at up to 4K resolution.
That is significant because achieving similar levels of detail in film production can take minutes or even hours per frame. Nvidia is essentially trying to compress that level of visual quality into milliseconds. The company is also giving developers control over how these enhancements are applied, so games can maintain their artistic style instead of looking overly processed. Now, you might already know that DLSS started out in 2018 as a way to improve performance by upscaling images using AI. Over time, it evolved to include features like frame generation. And with DLSS 4.5, Nvidia pushed this further by allowing AI to generate most of the pixels seen on screen, which led to a boost in frame rates in demanding games. DLSS 5 builds on that foundation but shifts the focus towards visual quality. Nvidia is calling it the biggest leap in graphics since real-time ray tracing in 2018. Nvidia CEO Jensen Huang described it as a "GPT moment" for graphics, highlighting how it blends traditional rendering with generative AI while still keeping control in the hands of developers. "Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again. DLSS 5 is the GPT moment for graphics -- blending handcrafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," Huang said according to a press release. The company has also confirmed support from major publishers and studios including Bethesda, Capcom, Ubisoft and Warner Bros. Early reactions from developers suggest strong interest. Bethesda says DLSS 5 made Starfield feel more alive, while Capcom believes it will help deliver more cinematic experiences in titles like Resident Evil. Ubisoft's teams have also hinted that it allows them to build more realistic worlds than before. DLSS 5 is set to arrive this fall, per Nvidia.
Moreover, it will be supported in some recent titles such as Assassin's Creed Shadows, Resident Evil Requiem, and more. If adoption picks up the way DLSS has in the past, this could be one of the more important shifts in how games look and feel in the coming years.
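The per-frame flow the article describes (take colour and motion data, enhance each frame, and keep results consistent across frames) can be caricatured in a few lines. Everything here is illustrative: `enhance` stands in for the neural model, and a fixed blend with the previous output stands in for motion-vector-based temporal accumulation.

```python
def enhance_frame(color, prev_enhanced, enhance, blend=0.8):
    """Enhance one frame, then blend with the previous enhanced frame.

    `enhance` is a placeholder for the neural model. A real pipeline
    would first reproject prev_enhanced using the game's motion
    vectors; a constant blend is the simplest possible stand-in.
    """
    current = [enhance(p) for p in color]
    if prev_enhanced is None:
        return current
    return [blend * c + (1 - blend) * p
            for c, p in zip(current, prev_enhanced)]

# Three identical input frames should yield stable output pixels,
# the cross-frame consistency the article calls critical for games.
brighten = lambda p: min(1.0, p * 1.5)
prev = None
for frame in [[0.2, 0.4]] * 3:
    prev = enhance_frame(frame, prev, brighten)
```

The temporal blend is what separates this class of technique from a naive per-frame filter: without history, each frame's enhancement could flicker independently.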
Nvidia unveiled DLSS 5, its latest AI upscaling technology using generative AI to enhance game graphics with photorealistic lighting and textures. But the demonstration sparked immediate backlash from gamers and developers who criticized the uncanny valley effect, comparing it to AI slop and Instagram filters that undermine artistic vision.
Nvidia's reveal of DLSS 5 at its GTC conference this week has ignited a firestorm of criticism from both gamers and developers, marking a dramatic departure from the generally positive reception of previous versions. The company's CEO Jensen Huang described the technology as "the company's most significant breakthrough in computer graphics since the debut of real-time ray tracing in 2018," but the gaming community saw something else entirely: AI slop that strips away artistic intent and creates an uncanny valley effect [1].

Unlike earlier iterations that focused on improving frame rates through upscaling, DLSS 5 represents what Nvidia calls a "real-time neural rendering model" that uses generative AI to deliver photorealism previously only achieved in Hollywood visual effects [2](https://arstechnica.com/gaming/2026/03/gamers-react-with-overwhelming-disgust-to-dlss-5s-generative-ai-glow-ups/). The AI upscaling technology analyzes a game's internal color and motion vectors to understand complex scene semantics including characters, hair, fabric, and environmental lighting conditions, then applies what Nvidia describes as photoreal lighting and materials [2].

The negative reaction from gamers was swift and overwhelming. Demonstrations showing Capcom's Resident Evil Requiem, Ubisoft's Assassin's Creed, and Bethesda's Starfield revealed faces transformed into overly detailed, unnaturally smooth versions with altered features: larger eyes, fuller lips, and completely different noses [3]. Critics compared the effect to motion smoothing for video games, but worse, with characters looking "yassified" or sporting "porn faces" reminiscent of Instagram and Snapchat glamour filters [5].

Source: TweakTown

Developers joined the chorus of criticism. Thomas Was Alone creator Mike Bithell said the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience" [2]. Gunfire Games Senior Concept Artist Jeff Talbot stated that "in every shot the art direction was taken away for the senseless addition of 'details'" [2]. James Brady, who worked on Call of Duty: Modern Warfare 3, argued it "devalues an artist's creativity and intent on a basic level" [3].

In a nearly two-hour interview with the Lex Fridman Podcast published Monday, Jensen Huang attempted damage control, acknowledging he could "see where they're coming from, because I don't love AI slop myself" [1].

Source: Wccftech

Huang argued that DLSS 5 differs fundamentally because it "is 3D conditioned, 3D guided," with artists creating the ground truth structure that the system enhances without changing [1]. Huang emphasized that DLSS 5 "is integrated with the artist" and gives them "the tool of generative AI" with the ability to train models for specific looks or prompt the system with descriptions like "I want it to be a toon shader" [1]. In a Q&A session, Huang called gamers "completely wrong" and underlined that the technology "doesn't change the artistic control" [4].

Beyond aesthetic concerns, the demonstration revealed technical problems. Artifacts appeared in real-time rendering, including a scene from a FIFA game where a soccer ball displayed pieces of the net on its surface before actually entering the goal [3].

Source: Wccftech

The demo currently requires two RTX 5090s, with one dedicated entirely to running DLSS 5 [2].

Gamers worry that if this technology becomes standard, video game graphics will lose their unique visual identity and converge toward a single homogenized look. New Blood Interactive founder Dave Oshry warned that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal" [2]. The concern centers on AI-generated faces as amalgamations of countless images that produce unnaturally smooth skin, uniform features, and synthetic-looking hair: telltale signs that have become synonymous with AI-generated content across social media [5].

Despite the backlash, Nvidia announced partnerships with major publishers including Bethesda, Capcom, NetEase, NCSoft, Tencent, Ubisoft, and Warner Bros. Games [1]. Bethesda responded on social media clarifying that demonstrations represent "a very early look" and that art teams will adjust lighting and effects, with everything remaining "under our artists' control, and totally optional for players" [4].

The technology has already become a meme format, with "DLSS 5 On" serving as visual shorthand for "overly cleaned up" or "mangled beyond recognition" [2]. Kevin Bates, CEO of Arduboy, acknowledged that "from a technical standpoint, it's quite an achievement" and expressed surprise that Nvidia expects to distill the capability to run on a single graphics card by the fall launch [3]. With months remaining until DLSS 5's autumn debut, Nvidia faces the challenge of rebuilding trust with a skeptical gaming public that has made its position clear through overwhelming social media criticism [1].

Summarized by Navi