Curated by THEOUTPOST
On Wed, 16 Oct, 12:06 AM UTC
3 Sources
[1]
Meet the developers integrating generative AI into a new video game
As artificial intelligence moves into every corner of modern life, we examine the ways AI enhances how we have fun and seek connection.

While generative AI is still in its infancy, it's set to play a major role in video game development going forward. Imagine if NPCs in your favorite game had an infinite number of responses and could understand anything you said to them. Or imagine if a world could change completely based on how you played the game, with each change unique to your specific scenario.

While we're perhaps at least a year or two away from new AAA games leveraging generative AI to enhance gameplay, plenty of studios and developers are working on smaller-scale games that use the technology for a more interesting and immersive experience. One of the first is Jam & Tea Studios, a new studio building its first game with generative AI as a core component of gameplay.

The game is called Retail Mage, and it's essentially a role-playing game in which players take on the role of a wizard working at a magical furniture store. The goal is to help customers with their requests as they come in, but how you fulfill those requests can vary widely depending on how you approach the game. When you talk to an NPC, you can type text into a dialogue box describing how you want to respond to customers. You'll then be given four dialogue options to choose from -- so while your typed text isn't exactly what you'll say to the NPC, it guides what you say and what NPCs respond to.

While Retail Mage is the first game from Jam & Tea Studios, the talent behind the studio is no stranger to the video game industry: it was founded by longtime developers from the likes of Riot Games and Wizards of the Coast. The game isn't widely released just yet, but it's in a public beta that users can sign up to playtest.

I was given a walkthrough of the game and found it incredibly compelling. Again, we're not quite at the point where players can simply speak whatever they want into a game and have NPCs respond as naturally as a human would, but it's not hard to see how we could get there in the near future. Being able to create your own responses to NPCs, instead of always being forced to choose from a few canned replies, makes a big difference in the immersiveness of the game -- and means that no two playthroughs could possibly be the same.

Beyond leveraging AI for NPC interactions, the game uses it for object interactions too. Players can approach any object in the game and tell the game what they'd like to do with that object, and the game reacts appropriately. It's not always able to visually represent changes to objects, but it will note those changes in text form.

Retail Mage is likely only the first of many new games that take advantage of generative AI. To get a better understanding of how Jam & Tea approaches developing a game with AI, we had a chance to chat with M. Yichao, co-founder and chief creative officer at Jam & Tea, and a narrative designer who worked on Guild Wars 2 and League of Legends.

M. Yichao: There are many folks seeking to answer the question "what can [generative] AI bring to video games." Some focus on the developer side, with [a] strong conviction that new tools for content generation will speed up and enhance production pipelines.
While we expect various tool innovations to emerge, Jam & Tea sees the true innovation and platform shift on the player side, in the new experiences and game types that [generative] AI could enable at runtime. That said, I fundamentally don't believe AI-powered NPCs are better than traditionally scripted ones. I do believe AI-powered NPCs can be more improvisational and reactive to player actions. To draw an analogy to another form of media: there are scripted sketch shows like "I Think You Should Leave" and improv shows like "Whose Line Is It Anyway." Both are vibrant modes of entertainment, and neither is better than the other. Similarly, a [generative] AI-powered game (where players can steer the story) will be a different experience than a scripted, linear story (with carefully crafted and scripted lines).

On the technical side, we've found promising results through a combination of known methodologies of fine-tuning and training bespoke models, as well as a proprietary approach to our game architecture. This, coupled with curated prompts that make specific requests and calls, helps to keep output within bounds.

On the design and content side, we've crafted various systems that maintain output relevance while gently easing the creative burden on players. Rather than having players type or say the exact dialogue, players can express their intent (e.g., "I tell a joke"), and the system provides dialogue options based on their suggestion. This way, if they're roleplaying a loquacious bard, they don't have to be a brilliant writer for their characters to fulfill that fantasy in the game. Similarly, our characters are designed with guardrails and directions on how to react to various scenarios, how much to omit "real world" references, and other instructions -- much as actors in immersive theater or escape rooms are trained to interact with curious, playful audiences.

One area where we've made significant advancements -- putting us ahead of many larger teams in this space -- is our ability to have our characters not only converse (AI chatbot-level functionality), but also plan and execute actions and behaviors in the game world. We actually dialed this back a little for Retail Mage, as customers finding all the items they're looking for on their own, or even helping other customers before players can, isn't ideal when players should be the (retail) heroes!

On the AI R&D front, we've made exciting progress through proprietary methodologies. And as we refine our foundational technology, we also leverage our creative design and gameplay craft to build in moments where strangeness becomes an intentional part of the charm, rather than feeling like errors or bugs. I've seen an NPC who "hallucinated" about a non-existent area of the store get corrected by other NPCs, while some NPCs started a conspiratorial rumor about a "hidden back room" as a result!

Fun fact: we'll have worked on Retail Mage for just six months from start to ship! Before focusing on this smaller game, our studio was immersed in R&D, building and testing many [generative] AI features for our larger RPG title, codenamed Project Emily. In that work, we developed many exciting mechanics, combining traditional procedural generation techniques to create a more responsive fantasy world for players to explore. However, we know the best way to test and validate is by shipping games to players.
So we selected a few of our most exciting mechanics to build Retail Mage -- a smaller, more indie-scale experience that will give players their first taste of what's possible in an improvisational game world.

In Retail Mage, [generative] AI is woven into everything from character creation to NPC logic to our item systems and beyond. The two main mechanics players will experience most directly are dialogue with NPCs and item interactions. In a traditional game, object interactions -- say, with a chair -- are preprogrammed: sit, pick up, destroy... In our game, in addition to these common interactions, players can freely state what they want to do with any object: disassemble the chair for parts, paint it blue, carve a customer's name into it. The game system will run checks to see if the player succeeds based on their character traits (are you strong enough to lift the chair?), environmental factors (do you have tools to take the chair apart?), previous events (did you get the chair from a fire-damaged corner of the store?), and more. The goal is to encourage players to get creative in how they interact with the world, and to honor and say yes to their outside-the-box ideas and playful interactions. Every time I watch a playtest, I'm delighted by the surprising ways players craft, invent, conjure, and create the items customers ask for.

We fundamentally believe that as [generative] AI use proliferates, the need for human creators will increase, not decrease. [Generative] AI makes it easier than ever to generate content. But as I said in the AP article: infinite content without meaning is just infinite noise. Put another way: if no person bothered to create the experience, why should we expect any person to bother to consume it, much less enjoy it? Developers, voice actors, artists, and all creatives at our studio are crucial to crafting meaningful experiences that will delight and bring folks together.

We have friends at other startups and at large companies exploring this space and its possibilities. And there are other companies building tools they hope developers will use in creating AI-powered games. However, we're confident that our vision for the new kinds of games this tech enables will set us apart. Our founders' backgrounds combine improv storytelling, new tech integration, and multi-platform game launches into a small but mighty team. Right now, we're just a team of eight. We've had much larger companies reach out to us and put their vote of confidence behind our tech and work because we're thinking differently about the space, with a core mission of "how can this technology enable people to connect with each other more," rather than "what problems could this technology solve?"

We believe the best way to validate tech is to create products that engage and inspire players. Retail Mage is just the start. We're already fielding inquiries about our tools and platform, and as a company, we're excited to not only continue building great games but also to empower others through licensing our technology, as well as other models.
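The check system Yichao describes above -- parse a freeform intent, then gate the outcome on character traits, environmental factors, and previous events -- is easier to picture in code. Below is a minimal, hypothetical Python sketch of that pattern; the class names, stats, and rules are all illustrative assumptions, not Jam & Tea's actual implementation, and a real system would route unrecognized intents through a generative model rather than a canned text fallback.

```python
# Hypothetical sketch of a freeform object-interaction check.
# Names, stats, and rules are illustrative, not Jam & Tea's code.
from dataclasses import dataclass, field

@dataclass
class Character:
    strength: int
    inventory: set = field(default_factory=set)

@dataclass
class GameObject:
    name: str
    weight: int
    tags: set = field(default_factory=set)  # e.g. {"fire_damaged"}

def resolve_action(actor: Character, obj: GameObject, intent: str) -> str:
    """Check a freeform intent against traits, environment, and history."""
    intent = intent.lower()
    if "lift" in intent or "carry" in intent:
        # Character trait check: are you strong enough to lift it?
        if actor.strength * 10 >= obj.weight:
            return f"You hoist the {obj.name} with ease."
        return f"The {obj.name} won't budge."
    if "disassemble" in intent:
        # Environmental check: do you have tools to take it apart?
        if "toolkit" not in actor.inventory:
            return "You have nothing to pry it apart with."
        # History check: did it come from a fire-damaged corner?
        if "fire_damaged" in obj.tags:
            return f"The charred {obj.name} crumbles apart in your hands."
        return f"You salvage sturdy parts from the {obj.name}."
    # A real system would hand anything unrecognized to a generative
    # model; here we just note the attempt in text form.
    return f"You try to {intent}. The world log records the change."

wizard = Character(strength=4, inventory={"toolkit"})
chair = GameObject(name="chair", weight=30, tags={"fire_damaged"})
print(resolve_action(wizard, chair, "disassemble the chair for parts"))
```

The interesting design question is the last branch: hand-written rules cover the common verbs cheaply, and the generative model is reserved for the long tail of outside-the-box ideas the rules never anticipated.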
[2]
No, generative AI isn't going to take over your PC games | Digital Trends
Surprise -- the internet is upset. This time, it's about a recent article from PC Gamer on the future of generative AI in video games. It's a topic I've written about previously, and something game companies have been experimenting with for more than a year, but this particular story struck a nerve. Redditors used strong language like "pro-AI puff piece," PC Gamer itself issued an apology, and the character designer for BioShock Infinite's Elizabeth called the featured image showing the character reimagined with AI a "half-assed cosplay." The original intent of the article was to glimpse a future of what games could look like with generative AI, but it did so without tact or a clear acknowledgment of how this shift affects people's jobs and their creative works.

But don't worry. The generative AI future the internet dreamt up out of this story isn't coming any time soon. And rather than operate in a binary of pro-AI or anti-AI sentiment, I want to look at how AI is being used in games today, and how it could be used in the future, to offer better performance and visuals and to allow developers to push the envelope on what they're able to deliver.

It ain't so simple

[Video: Half-Life with ultra-realistic graphics -- Gen-3 video-to-video, Runway ML]

Before getting to AI in PC games today, we need to define some terms, because that's the whole crux of this fiasco. In the original article, the author looked at several videos that reimagine old games with AI through the Runway ML tool. The model is fed a final frame from the game, and then it generates a realistic-looking video based on that input. It looks terrible, as you might suspect, but it's not hard to watch a video like this and imagine a future where this kind of tech looks much more realistic. And make no mistake -- at some point, we will have generative AI that's much more convincing.

This is generative AI. You give the model an input and it spits out an output based on its training data, in a content-agnostic manner. It doesn't have a set of rules or algorithms -- it attempts to understand the input based on its training and produce an output, no matter how flawed that output may be. Predictive AI is slightly different. It works by training on a series of data and predicting the most likely outcome for future data. Generative AI is ChatGPT; predictive AI is a Netflix recommendation.

Where does that leave tools like Nvidia's DLSS? There really isn't a clean line. Is it actually generating new frames, as Nvidia suggests? Or is it just applying better prediction mechanisms to long-standing frame interpolation algorithms? There are people far more qualified than I am to argue the semantics, but it doesn't take an AI scientist to see that what Runway ML and DALL-E are doing is not the same as what DLSS Frame Generation is doing. Not even close.

There's certainly some future where Nvidia could apply a Freestyle filter over an existing game to offer more realistic visuals, as the original article suggests, but that's not going to be how the game is meant to be played. The game still needs to be rendered -- the filter, in this case, is nothing more than a filter. And going backward by only rendering primitive objects and relying on AI to fill in the texture, lighting, and shadow details gives the AI model less information to work with, not more.
If this is the future we're heading toward, it's a long way off. The best evidence that this style of generative AI isn't going to take over your PC games any time soon, however, is Nvidia. The company's CEO may wax poetic with press and analysts about how all pixels will be generated in the future, not rendered. But actions speak louder than words, and Nvidia's investments in tools like RTX Remix and the Half-Life 2 RTX project paint a much different picture. AI has a ton of applications in game development, but like most tech, the best approaches target pain points in development and gameplay.

Oh yeah, the rendering pipeline

[Video: Elder Scrolls 3: Morrowind -- auto-enhanced with Nvidia RTX Remix AI]

The problem with these videos of "remastered" games using Runway ML is that they get how games are actually rendered completely wrong. They use the final frame of a game as an input, ignoring the lengthy rendering pipeline that actually produces that image. Even AI tools like DLSS that run before the final frame is presented sit very far down the rendering chain, after most of the work is already done. The exciting developments in generative AI are in how you can apply the tech to different parts of rendering and game development.

That brings us back to RTX Remix, a project Nvidia has pushed heavily over the last couple of years with its AI features. It can up-res textures, convert materials for use with ray-traced lighting, and much more. It's a remarkable tool bolstered by AI, and you don't have to take my word for it. Just download Portal with RTX on Steam and see RTX Remix in action. We're not talking about some AI-driven future where every game is robbed of creative liberty. We're talking about tools that use AI to enable more creativity.

According to AMD's Chris Hall, the exciting applications of AI in games come through in the most mundane places. "If you look at Epic Games and the Unreal Engine, they were showing off their ML [machine learning] cloth simulation technology just a couple of months ago," Hall said. "Seems like a very mundane, uninteresting use case, but you're really saving a lot of compute by using a machine learning model."

I published an in-depth interview with Hall a few months ago that goes into detail about these applications of AI in games, but we're seeing them everywhere already. Just last month, we saw the debut of GenMotion.AI, which promises to deliver high-quality animation from text prompts using AI. Nvidia already offers its Ray Reconstruction feature, which applies AI to one of the most troublesome areas of ray-traced games -- denoising. True to Hall's word, if AI is being used in game development or rendering, "it really needs to solve a problem that exists."

The exciting developments in AI games aren't about throwing an old game at an AI model to see whatever jank it spits out -- regardless of how many clicks that drums up. They're about targeting AI at problem areas. Maybe it provides better physics simulations; maybe it cleans up the lighting effects delivered by ray tracing. Maybe it improves performance by shortcutting traditional rendering techniques. For both developers and gamers, these are tools to get excited about.
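To make the pipeline point concrete, here is a deliberately toy Python sketch -- my illustration, not any engine's real API -- of where the different kinds of AI plug in. A DLSS-style stage operates on engine buffers such as depth and motion vectors near the end of the chain, while a Runway-style video-to-video model starts from the final presented frame, after all of that information has been thrown away.

```python
# Toy frame "pipeline" showing *where* AI stages plug in.
# Stage names and data are illustrative assumptions, not a real engine API.

def rasterize(scene: dict) -> dict:
    """Produce engine buffers: color plus the metadata AI stages can use."""
    return {"color": scene["albedo"], "depth": scene["depth"], "motion": scene["motion"]}

def ai_denoise(buffers: dict) -> dict:
    """Stand-in for a Ray Reconstruction-style denoiser: it sees engine buffers."""
    buffers["color"] = [round(c, 1) for c in buffers["color"]]  # toy smoothing
    return buffers

def ai_upscale(buffers: dict, factor: int) -> list:
    """Stand-in for DLSS-style upscaling: it uses depth/motion, not just pixels."""
    assert buffers["motion"] is not None  # the upscaler relies on engine data
    return [c for c in buffers["color"] for _ in range(factor)]

def present(frame: list) -> list:
    """Only after all of the above does a 'final frame' exist."""
    return frame

def reimagine_final_frame(frame: list) -> list:
    """A video-to-video model starts here: pixels only, with the engine's
    depth, motion, and material data already discarded."""
    return [min(1.0, c * 1.2) for c in frame]  # toy 'generation'

scene = {"albedo": [0.21, 0.84, 0.55], "depth": [1, 2, 3], "motion": [0, 0, 0]}
frame = present(ai_upscale(ai_denoise(rasterize(scene)), factor=2))
print(frame, reimagine_final_frame(frame))
```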
Let's talk about people's jobs

I'm not oblivious to the reality here, nor to the narrative that greedy publishers will use generative AI to rob workers of their creativity and livelihood. It's hard not to be cynical when you see things like GameNGen, which shows a fully playable version of Doom running at 20 frames per second (fps) solely through generative AI. It's an AI-driven game engine. It's important to recognize that these are research projects, the work of AI scientists, not of executives at the head of game publishers.

Game publishers -- who are in the business of selling games -- will try to leverage generative AI to shortcut the process and squeeze out higher profits. They already have: recent layoffs at Activision Blizzard and Xbox hit 2D artists the hardest, and Activision Blizzard even reportedly sold a bundle of items for Call of Duty: Modern Warfare 3 featuring AI-generated images. The AI infiltration is happening, and it will continue to happen.

"Ever wanted to play Counter-Strike in a neural network? These videos show people playing (with keyboard & mouse) in DIAMOND's diffusion world model, trained to simulate the game Counter-Strike: Global Offensive. Download and play it yourself: https://t.co/vLmGsPlaJp" -- Eloi Alonso (@EloiAlonso1), October 11, 2024

Still, it's important to recognize where we are today with AI in games, and the future tools like Runway ML paint. We're probably talking about decades, not years, before a fully AI-generated game is possible, even accounting for the rapid pace of AI development. Even at that point, is an AI-generated game practical? Or, more importantly for game publishers, is it profitable? And if it is, will we have safeguards in place to distinguish AI content and protect the rights of workers?

You can't throw the baby out with the bathwater here. AI is already in PC games, and it's only going to become more prominent. As Hall put it, AI is "inevitable." There's a middle ground where you can recognize the great things AI is doing in games while also advocating for the rights of workers displaced by haphazard use of the tech. In the non-digital corner of gaming, the use of AI-generated images in Magic: The Gathering marketing material prompted such fierce backlash that Wizards of the Coast (which makes Magic) removed the images and doubled down on its policy of art being fully made by humans.

Striking that middle ground is not only aligned with reality, but it also helps push this evolving technology in the right direction. Going to the extremes serves no one. Dreaming up a future where AI is magically spitting out games is just as harmful as burying your head in the sand about the active harm generative AI is doing in game development.

In Wired's investigation of Activision Blizzard and Xbox -- which revealed the above details about 2D artists being laid off -- a veteran AAA developer going by the pseudonym Violet said the following: "[AI] is bad when the end goal is to maximize profits. AI can be extremely helpful to solve complex problems in the world, or do things no one wants to do -- things that are not taking away somebody's job."

A veteran AAA developer can recognize the nuance because they live it every day. We should be able to recognize that nuance, too. Maybe then the game industry can move on to solving the real problems with generative AI in game development: the use of copyrighted works, the use of human-authored works to train AI models, and the rights of workers displaced by AI. That's certainly a more productive discussion about the future of AI in games than separating into pro- and anti-AI camps.
[3]
Blizzard co-founder 'blown away' at the progress of generative AI
Like it or not, AI is here to stay... and it could help transform some aspects of the entertainment industry, especially video games.

Artificial intelligence is a hot-button topic for many reasons. Generative AI is seen as a potential threat to jobs at a time when the games industry is already culling tens of thousands of workers in an effort to save money, reduce costs, and improve margins for the illusion of perpetual growth. But executives and high-level creatives also see AI as an opportunity. EA, for example, is using generative AI to completely revolutionize user-generated content. Xbox is also using generative AI to help with the writing and dialogue in its games. Other teams have been quieter about their AI usage, but it's fair to say that a good portion of the biggest companies in gaming are using it in some way -- except for Nintendo.

In a recent interview with Bloomberg's Jason Schreier, Blizzard co-founder and ex-president Mike Morhaime shared his thoughts on AI, generative and otherwise. Morhaime is currently the CEO of Dreamhaven, a studio he created after leaving Blizzard in 2019.

"We've been blown away at the progress of generative AI. We aren't yet sure how to leverage that for actual game development, or gameplay. I think you can imagine it could have a huge impact.

"I think AI is going to have an impact on all sorts of aspects of our lives and how we interact with each other. We're using it for... you know, ChatGPT is a great sounding board for a lot of things, so we're using it for that. Obviously, note-taking and meetings and things like that."

Morhaime, whose team has been developing the turn-based tactical RPG Sunderfolk for years, says the games he's making are too far along in development to benefit much from generative AI. And then there are the legal implications of AI -- the tech is moving so quickly that the law hasn't been able to keep up, and it may take years before things are ironed out completely.

"These games that we're developing are far enough along that there's really not a big opportunity, but it's something we're paying a lot of attention to.

"I think that we're also aware that there are big challenges as well in our industry and in other creative industries, because we have to figure out the copyright, legal, and ethical issues. I'm hoping these will be solved over time."

The real benefit of AI, though, could be efficiency and streamlining work processes. AI is meant to be a tool that empowers creators and developers, not a replacement, as Morhaime explains:

"I think in terms of making creators more efficient, we're seeing some amazing things that Copilot is able to do with software engineering, so I do think that there are opportunities there. There's some amazing art tools as well, but I think you really need to be able to put these tools in the hands of talented individuals to really get the benefit."

The Dreamhaven CEO isn't the only one who thinks this. Fallout co-creator Tim Cain also had some interesting things to say about AI, affirming that it could present big opportunities for smaller teams: "I think we're going to see bigger, more complex games made by smaller teams. Because AI will let them do some things that previously required large teams or specialist people on those teams."

Sunderfolk, the new game from Morhaime's Dreamhaven, is due out sometime in 2025.
As generative AI makes its way into video game development, industry leaders and developers share their perspectives on its potential impact, benefits, and challenges for the future of gaming.
As generative AI continues to make waves across various industries, the video game sector is experiencing its own surge of interest and experimentation. Developers and industry veterans are exploring the potential of this technology to revolutionize game development and enhance player experiences [1][2][3].
Jam & Tea Studios, founded by former developers from Riot Games and Wizards of the Coast, is at the forefront of this trend with their upcoming game, Retail Mage. This role-playing game incorporates generative AI as a core component of gameplay, allowing players to interact with NPCs using text inputs that guide dialogue options [1].
M. Yichao, co-founder and chief creative officer at Jam & Tea, explains their approach:
"We've crafted various systems that maintain output relevance while gently easing the creative burden on players. Rather than having players type or say the exact dialogue, players can express their intent, and the system provides dialogue options based on their suggestion." [1]
While some developers are enthusiastic about the potential of generative AI, others urge caution. Mike Morhaime, Blizzard co-founder and current CEO of Dreamhaven, expressed both excitement and concern:
"We've been blown away at the progress of generative AI. We aren't yet sure how to leverage that for actual game development, or gameplay, I think you can imagine it could have a huge impact." [3]
Morhaime also highlighted the need to address copyright, legal, and ethical issues surrounding AI use in creative industries [3].
Generative AI is already being utilized in various aspects of game development, from NPC dialogue and item interactions to rendering and animation tools such as upscaling, denoising, and cloth simulation [1][2].
Tim Cain, co-creator of Fallout, predicts that AI will enable smaller teams to create bigger, more complex games by handling tasks that previously required large teams or specialists [3].
Despite the excitement, there are significant challenges to overcome, including unresolved copyright, legal, and ethical questions, as well as the impact of AI on developers' jobs [2][3].
As the industry continues to explore the potential of generative AI, it's clear that its role in gaming will evolve. While it may not completely replace traditional development methods in the near future, AI is likely to become an increasingly valuable tool for developers, potentially streamlining processes and enabling new forms of player interaction [1][2][3].