2 Sources
[1]
Generative AI in Gaming Is Here, but Facing Pushback from Gamers -- and Developers
This past week, Nvidia unveiled its new graphics upscaling technology, DLSS 5, with a new feature that gave in-game character models AI makeovers. Their drastically different appearances, which made them look like the "yassified" style popular in cheap mobile games, drew a public backlash -- not just out of revulsion at their appearance, but because the feature would change work that game developers labored over without their input.

Gamers are rebelling against the use of generative AI in the games they play, especially when it isn't disclosed. That makes the technology tricky to use, whether to whip up code and art during development or in player-facing features like generating nonplayer character dialogue in real time in response to player choices.

Back in January, planners for the Game Developers Conference released their annual state of the games industry report for 2026, in which 52% of respondents reported that generative AI was used at their company, though only 36% said they're using it as part of their jobs, with some saying it's optional, at least for now. They mostly use the technology for research and brainstorming (81% of respondents), writing emails and scheduling (47%) or code assistance (47%), among other tasks. But developers themselves are increasingly skeptical of generative AI, with 52% responding that it's bad for the industry -- up from 30% last year.

By the time the show rolled around in mid-March, uncertainty around generative AI permeated GDC, held in San Francisco. Like most years, the professional convention was a nexus for members of the games industry to share lessons, make deals and forecast the next year in gaming. But as I walked the halls of GDC 2026, I saw a stark juxtaposition: a handful of smaller games proudly using generative AI, and relative silence from the rest of the industry.
It's still very early days for the use of gen AI in video games. At prior GDCs, I'd seen primitive gen AI-powered NPCs running on Nvidia tech and Microsoft explaining how its Copilot tech would provide in-game tips and advice, but neither of these player-facing applications has really arrived in any big game in 2026, or even debuted at the margins. If there were a killer gen AI application that made its use essential in production or in gameplay, we'd probably have heard about it. Or as Chris Hays, lead services programmer at id Software, put it, gen AI isn't nearly as transformative as the true paradigm-shifting tech we've seen before. "People weren't begging people to use the web when it came out. If [generative AI] was really as revolutionary as the web, people would be using it," Hays said.

I sat down with Hays, who is also a lead organizer at id Software's Big Friendly Union, and Sherveen Uduwana, treasurer at the United Videogame Workers union (which made its public debut at GDC 2025), to chat about the state of the games industry, including how much generative AI is being used by developers. Between the two, the consensus was: not much. In the cases they'd heard of where it is used in development, humans had to step in and fix AI-created errors. "I'm skeptical, even for the studios that say, 'We're implementing AI into the process.' We're not seeing the number of revisions that are happening after these AI-generated content, where essentially a worker is going in and fixing all these mistakes to the point that it possibly could have been done without the AI in the first place," Uduwana said. Amusingly, Hays said, freelancers he's talked to have loved the AI push -- as they're hired to come in and fix AI's mistakes.

I'd chatted with Hays and Uduwana at the Communication Workers of America booth on the GDC show floor beneath the Moscone Center's North Hall.
(Disclosure: One of the CWA's member unions, the NewsGuild, represents editorial workers at CNET. Until recently, I was a member.)

A few hundred feet away, I walked into the Google booth, where the tech giant was showing off ways that its Gemini gen AI-powered assistant could be used in games -- including some that were set to launch. Google's sizable space held a handful of internally built demos showcasing how one could use Gemini in a game. They were pretty rudimentary. In one example of gen AI-powered NPC conversations, a Google employee demonstrated how players could talk their way, ChatGPT-style, through a village and order a drink at a tavern. I got hands-on with another demo, walking around a server farm and shooting robots while an assistant kept up a constant flow of commentary on my performance, Zelda fairy-style -- even healing me if I took too much damage, like a reactive easy mode.

But next to these were actual games purportedly coming out soon. One, a strategy game for phones called Colony by Parallel Studios, is aiming for a release in the next three months and lets players oversee and defend a settlement on a distant world. As game director Andrew Veen told me, Colony uses Gemini-powered large language models in two ways. First, to let players solve in-game challenges with suggestions that the AI judges -- for instance, to thaw a frozen power core, players have tried bombs, flamethrowers, napalm and even pickaxes (all of which have worked). Second, Colony uses a Gemini workflow that starts with Nano Banana to generate 2D images of objects, then puts them through the Google-owned Atlas tech to convert them into 3D models within the game. Currently, players can create helmets for their characters this way, but the plan is to eventually expand into armor, furniture and vehicles -- like Animal Crossing in the far-flung future meets Fallout Shelter, Veen explained.
Converting an image to a 3D item you can equip on a character takes about two and a half minutes through Gemini's servers, but since Colony is an "idle" mobile game where base-building progress happens in real time, that delay is built into the mechanics.

Veen added that Gemini has also sped up Parallel Studios developers' workflows, helping them code and giving feedback on designs. The studio started work on Colony nearly a year ago and went it alone for the first eight months, but partnering with Google and using its AI tech enabled the team to do more work in the most recent three months than in the eight without it. Combined with Atlas, it's shown Veen and his team that "we can build a game that we otherwise wouldn't be able to." "I don't think we get here without Gemini," Veen said.

It's worth noting that, absent the Google branding, no other major companies were showing off gen AI integrations -- not even Microsoft, which was trumpeting its Copilot for Gaming initiative at last year's GDC. Despite a block of sponsored Xbox panels, the company's big news was that developer kits for its next console, codenamed Project Helix, would start going out in 2027.

To be fair, GDC's main draw is looking backward, with most of its programming consisting of panels where developers discuss lessons learned in the last year of game development. The highest-profile of these covered major games released in 2025, like Clair Obscur: Expedition 33, Silent Hill f and ARC Raiders. Most are smaller discussions split across disciplines such as audio, graphics or narrative. They're also a mix of official programs vetted by GDC parent company GSMA and sponsored ones that individual companies paid to host. The vast majority of AI-related panels were the latter, reinforcing that gen AI hadn't landed in last year's games in any big way.
But there were some illuminating panels I sat in on, featuring developers from smaller studios that have been experimenting with gen AI in their pipelines. In one, Irena Pereira, product development specialist and founder of Unleashed Games, explained how gen AI can help with the "blank page problem": generating, say, 500 crappy ideas so you can build the lone promising one out into a proper quest, item, character or story beat. "[You're] creating those compelling stories that really should be coming from you, but they can start in a more automated popcorn kind of zone that is brought to you by AI, and then at the end of the day, you finish it as a human," Pereira said. What's clear is the restraint: gen AI may be used in preproduction or organization, but nothing it produces ends up in the final product, Pereira said.

That's probably wise considering the hair trigger gamers are on for anything AI-related. They even pounced on Baldur's Gate 3 creator Larian last December after CEO Swen Vincke brought up using gen AI in ideating its next Divinity game -- to the point that the studio confirmed in January it would abandon some generative tools to ensure it can trace the provenance of the art ending up in the final game. But in the same Reddit AMA in which Larian answered public questions, it also acknowledged experimenting with other machine learning tools to "reduce the 'mechanical legwork'" and speed up game development.

That and other incidents have led to backlash from players when they hear about any AI use in making games. For developers, it's more nuanced. David "Rez" Graham, AI programmer and lead developer of The Sims 4, who hosts AI roundtables for developers to share ideas at GDC, explained to me over email that the industry is against generated assets like art ending up in games, but that engineers have started to try out code assistance and generation tools like Claude Code or Codex.
The difference, which Graham talked about in his Human Cost of Generative AI panel at GDC this year, is the split in intent between these tools: art generators like Midjourney are designed to replace artists, while most current code-generating tools are intended to assist and accelerate engineers' work, he said. Claude Code and Codex are useless unless you know what you're doing. To wit, Graham had Claude audit one of his projects, around 2,000 lines of code, to look for bugs; of the twelve it reported, only two were true issues, while the other ten were false positives. If he'd let the gen AI tool apply the suggested fixes for those false positives, it would have created ten new bugs, he said. "You still need significant programmer oversight, so the tools act more like an accelerator," Graham said. "As long as that remains true, I think engineering will continue to embrace them."

In previous years, the halls of Moscone Center were draped in advertisements for gaming companies riding the latest wave of tech grifts, from blockchain to NFTs to Web3. Now it's generative AI, and though the ads for it were less garishly slathered over the convention center this year, it's hard to shake the association with those trendy waves of yesteryear. Yet generative AI applications seem to have more potential than those earlier technologies did, even if they're nowhere close to widespread. And unlike the others, gen AI is being treated cautiously.

Another panel I sat in on, sponsored by AI audio acting company Lingotion, was titled "How to Build or Use Generative AI That Is Legally Compliant, Safe, and Ethically Sustainable." Though obviously pitching the company's services, the presenter carefully explained that the only way to ethically clone an actor's voice to generate lines for a game is to license all data from them directly, be clear that its purpose is generative AI -- and then share revenue with them.

Gen AI applications in gaming are still piecemeal.
As in years past, I visited Nvidia's hotel room demo to get a peek at its tech behind closed doors, though my visit happened a week before the company's controversial reveal of DLSS 5 (which wasn't present). What I saw were less radical progressions, like last year's more modest DLSS 4.5, which reduced screen-menu issues when upscaling graphics and offered better ray tracing in new games like Resident Evil Requiem.

There was also a demonstration of Nvidia Ace, the company's suite of gen AI developer tools, specifically using the tech to power an advisor who would help players in Creative Assembly's Total War: Pharaoh. While Total War games have traditionally offered generic strategy advice, this advisor makes recommendations based on the player's situation. In the demo, an Nvidia employee typed questions that the gen AI-powered in-game assistant answered, like why a nearby province rebelled, but it wouldn't share information outside the player's knowledge, like intel beyond the fog of war.

Whether in-game or in development, gen AI tools aren't mainstream in gaming, at least not yet. We're starting to see some use cases at the outskirts of game development, but they're still far from being embraced by the biggest game companies in the world. Despite years of holding AI roundtables at GDC and working directly on AI applications in gaming, Graham is hesitant to make predictions about the future -- things are moving too fast, and the gaming industry doesn't know how to tackle the big issues with gen AI: the use of stolen work for training data, the environmental impact, the economic impact (like the RAM shortage), the labor impact and more. Considering the intense investment in the technology with little financial return, Graham compared this moment to the dot-com boom and bust, and he expects a similar wipeout of AI companies -- and when the dust settles, we'll see the final form of AI in gaming.
Perhaps then, the US, the EU and other countries will set AI regulations, Graham theorized. But in the short term, he expects more companies to try to integrate gen AI into their experiences. He pointed out that more games have been released with the technology, specifically citing Whispers From the Star, released last August, which is extremely up-front about using AI to power dynamic conversations between the main character, a female astronaut, and the player, who talks her through surviving a crash landing on an alien planet. "It has a 'Very Positive' rating on Steam, so it's clear that players aren't against gen AI as a whole, just when it's used in place of art," Graham said.

For union leaders and game developers Hays and Uduwana, the reasons gen AI is still used only in smaller games and not by the biggest names in gaming are mainly twofold: The tools aren't refined enough yet, and developers like themselves resist technologies that would threaten their fellow workers' employment. "I know anytime we even have any discussions about AI, it's like, it should never do something that you couldn't do yourself," Hays said. "If it's not an accelerant for you, then you're not using a tool. You're just having something that's replacing someone's job."

Hays acknowledged that Microsoft, which owns his studio id Software, is a big backer of AI, but so far, the tech giant has only said it wants gen AI tools that accelerate work productivity. The Big Friendly Union is taking Microsoft at its word. "We're not against movement forward. We're against things that are immoral, that take jobs, that are bad for the environment, that are bad for people," Hays said. "And if there are wins, then it would be OK. But there haven't been, which is why there's not a lot of movement."

Gen AI's inability to rival what developers can make is a testament to their competence and skill, Uduwana said.
He hopes this makes clear that the thousands of hours of labor that go into making games deliver an attention to detail that players notice -- one they find compelling and that creates an emotional response, he said. It's not hard to see how that positive reaction to conventionally made games is linked to players' negative reactions whenever they discover that undisclosed gen AI was used in the creation of new games. Sometimes the truth comes out when gamers realize that crudely made visual assets or text were generated by AI. Even if it turns out that the materials were minimal or left in by accident, as with last year's game The Alters, players still feel betrayed and grow mistrustful of other parts of the game.

"I do think that people who play games are smart about what they're consuming, and that they see the impact that generative AI has, and how it's leading to less quality control in the franchises that they are really excited about," Uduwana said. "And I think that those anxieties are things that there's common ground between the workers and the people playing the game."

To these seasoned game developers, there's a pretty simple truth: It's not being used in big games yet for good reason. "There are plenty of studios that are pushing AI. They're not the ones that are doing well," Hays said. "Everyone sees it, and the players are rejecting it. So long as we want to be successful, we're not going to be using [AI] tools."
[2]
AI was everywhere at gaming's big developer conference -- except the games
AI was everywhere at the GDC Festival of Gaming this year. Vendors at the event pitched generative AI tools for things like making AI-driven NPCs and even entire games from a chat box. On the show floor, I spent 10 minutes playing a demo of a pixel-art fantasy world generated by Tencent's AI tools. In a briefing with Razer, I watched an AI assistant for QA automatically log issues in a shooter game. And there were many talks about AI, including a standing-room-only presentation by Google DeepMind researchers about playable AI-generated spaces.

But there was one key place where AI was missing: the games themselves. Of the many developers I spoke to at the conference, nearly every one was against the idea of using AI in their projects. "I feel like the human mind is so beautiful," The Melty Way developer Gabriel Paquette told me. "Why not use it?"

It was a common refrain. Those I spoke to, most of whom were indie developers, disavowed AI, and many said they would never use the technology because it detracts from the human element of development. That's perhaps not surprising, given that a recent GDC survey found that 52 percent of respondents think "generative AI is having a negative impact on the game industry," up from 30 percent in 2025 and 18 percent in 2024. Some indie developers already go out of their way to show that their games are "AI free." And the largely negative reaction to Nvidia's DLSS 5, which, in the publicly shown examples, added AI slop-like faces to recognizable game characters, almost certainly won't make smaller developers more interested in the technology.

The general pitch for generative AI in gaming is that it might benefit both developers and players. In the most optimistic view of the technology, developers could use AI to help with tasks like debugging, QA, and idea generation, while players could use AI to help tailor games for themselves.
Google Cloud executive Jack Buser, who helped launch Google Stadia and worked on PlayStation Now and PlayStation Home at Sony, says that generative AI is "the largest transformation in the games industry I have ever witnessed in my nearly 30-year career." But for many of those actually making games, the conversation is different.

For instance, Adam Saltsman and Rebekah Saltsman, cofounders of the "collaborative" studio and publisher Finji, known for indie hits like Tunic and Chicory: A Colorful Tale, note that their works are defined in part by "a specific person or persons' fingerprints" -- in other words, a handmade, human quality, one that can include an element of surprise. "You can show people what it is, but you are going to break all of their expectations when they go and play it," Rebekah adds. That philosophy runs counter to the idea of using generative AI in development. When I asked the Saltsmans if they would consider using generative AI for any of Finji's games, it was a hard no. "Absolutely not," Adam says.

Many developers told me that, in their view, AI-made games don't look or feel like human-made games, at least right now. Audiences "don't connect" with generative AI, according to Abby Howard of Slay the Princess developer Black Tabby Games, who adds, "I think it's generic, I think it makes it feel cheap." Rebekah is more blunt, saying that generative AI "just looks like crap." For Matthew Jackson, who is working on the comedy game My Arms Are Longer Now, there's another practical issue: "AI is so not funny."

There are also legal problems that would complicate actually selling a game made with generative AI. Putting aside issues like the environmental impact of AI or concerns about the data AI is trained on, the Saltsmans tell The Verge they don't think there is a legal framework for actually selling generative AI output. (This issue is exacerbated by the fact that AI-generated art can't be copyrighted.)
Finji isn't the only publisher that isn't accepting games made with generative AI. Panic, the publisher of Untitled Goose Game and creator of the Playdate, does not "have any interest in generative AI-created products," cofounder Cabel Sasser tells The Verge. BigMode, the publishing company started by Jason Gastrow, aka videogamedunkey, requires developers to check a box with their application that says "I confirm that my game is human-made and does not include any use of generative AI." Even Hasbro, which is now developing its own video games, isn't using AI in its development pipelines, CEO Chris Cocks recently said on Decoder.

But perhaps what came up most often in my conversations at GDC is that using generative AI removes the craft from making video games. "The only way to get better at things is through the intense concentration of a career of applied craft," Black Tabby Games' Tony Howard-Arias says. Adam talked about how writing code can be "one of those things, like visual art, that pushes on your game design." He points out that good programming is also good for players: "Things that are really hard to program are often really hard for a player to understand, too." Alex Schleifer, cofounder of Ballgame developer Human Computer, says that the process of making games is just fun -- and that from that process, "you're also going to come to better ideas."

There are concerns that AI tools might take jobs away from humans, which would both shrink the pool of available positions in an industry already riddled with layoffs and give new developers fewer ways to get their foot in the door. And despite the promised cost and efficiency savings -- assuming an AI tool can even compare to what a human can do -- this too would have problems. If you replace humans with AI, "where do you get new talent in the future?" Tony says. For now, the developers I spoke with believe crafting games by hand creates a more human connection.
"We tell human stories," Rebekah says. When you launch a game, there is a person that "you'll never meet in your whole life that is playing a thing that you've spent thousands upon thousands of hours considering and working on." Caring about their experience and that connection is "why we do this." Some indie developers I spoke with are open to the potential that generative AI in games could be useful for development or widely adopted down the line. The film and TV industry, for example, is seeing the rise of companies that build bespoke AI models to help with production, which could be a possible future for AI tools for game development. Maybe, at some point, AI will be more accepted, Paquette says. But for now, he prefers to do "100 percent" handcrafted work. "That's something dear to me."
At the Game Developers Conference 2026, a stark divide emerged between tech companies promoting generative AI tools and game developers who overwhelmingly reject the technology. While 52% of developers now view AI negatively for the gaming industry, major publishers like Panic and BigMode refuse AI-generated games, citing concerns about human creativity, legal issues, and craft.
At the Game Developers Conference (GDC) 2026, held in San Francisco this March, a striking contradiction emerged. While tech companies like Google and Nvidia showcased generative AI tools across the show floor, game developers themselves expressed growing skepticism about the technology's role in their craft. According to GDC's annual state of the games industry report released in January, 52% of respondents now believe generative AI is bad for the industry -- a dramatic increase from 30% last year and 18% in 2024 [1][2].
The pushback against generative AI intensified after Nvidia unveiled DLSS 5, its new graphics upscaling technology featuring AI-generated character makeovers. The drastically altered appearances resembled "yassified" styles common in cheap mobile games, triggering public backlash from gamers concerned that developers' original work was being changed without their input [1].

While 52% of survey respondents reported that generative AI was used at their company, only 36% said they personally use it in their jobs, with some noting it remains optional. Among those using AI tools for game development, the primary applications include research and brainstorming (81% of respondents), writing emails and scheduling (47%), and code assistance (47%) [1].

Chris Hays, lead services programmer at id Software and lead organizer at the studio's Big Friendly Union, challenged the notion that generative AI represents transformative technology. "People weren't begging people to use the web when it came out. If [generative AI] was really as revolutionary as the web, people would be using it," Hays said [1].

Sherveen Uduwana, treasurer at the United Videogame Workers union, expressed skepticism about AI's efficiency in game development. "We're not seeing the number of revisions that are happening after these AI-generated content, where essentially a worker is going in and fixing all these mistakes to the point that it possibly could have been done without the AI in the first place," Uduwana noted. Hays added that freelancers have benefited from the AI push, as they're hired to fix AI's mistakes [1].

Indie developers at GDC emphasized the human creativity that defines their work. Gabriel Paquette, developer of The Melty Way, captured the sentiment: "I feel like the human mind is so beautiful. Why not use it?" [2]

Adam and Rebekah Saltsman, cofounders of Finji, the studio behind indie hits like Tunic and Chicory: A Colorful Tale, explained that their games are defined by "a specific person or persons' fingerprints," a handmade quality that includes elements of surprise. When asked whether they would consider using generative AI for any of Finji's games, Adam responded with a hard "Absolutely not" [2].

Abby Howard from Black Tabby Games, developer of Slay the Princess, observed that audiences "don't connect" with generative AI, adding, "I think it's generic, I think it makes it feel cheap." Rebekah Saltsman was more direct, stating that generative AI "just looks like crap" [2].

Major publishers are establishing clear policies rejecting AI-generated content. Panic, publisher of Untitled Goose Game and creator of the Playdate, confirmed it has no interest in generative AI-created products. BigMode, the publishing company started by Jason Gastrow (videogamedunkey), requires developers to check a box confirming "I confirm that my game is human-made and does not include any use of generative AI." Even Hasbro CEO Chris Cocks has stated the company isn't using AI in its development pipeline [2].

The Saltsmans highlighted legal complications that would prevent them from selling games made with generative AI, noting there isn't a proper legal framework for selling such output. Copyright issues further complicate matters, as AI-generated art cannot be copyrighted [2].

The slow adoption of AI reflects a fundamental tension between technological capability and creative values. Tony Howard-Arias from Black Tabby Games emphasized that craft matters: "The only way to get better at things is through the intense concentration of a career of applied craft" [2].

Despite Google Cloud executive Jack Buser calling generative AI "the largest transformation in the games industry I have ever witnessed in my nearly 30-year career," the reality at GDC told a different story. Vendors pitched AI tools for NPCs and quality assurance, including a demo of a pixel-art fantasy world generated by Tencent's tools and a Razer AI assistant that automatically logs QA issues. Yet these tools remained largely absent from the actual games on display [2].

Developers' reluctance stems from practical concerns about workflow integration, player backlash over altered artistic vision, and the belief that removing human craft diminishes what makes games compelling. As the labor impact continues to unfold and machine learning capabilities expand, the gaming industry faces ongoing questions about balancing technological advancement with the creative integrity that defines memorable gaming experiences.
Summarized by
Navi
19 Feb 2026 • Entertainment and Society
