Curated by THEOUTPOST
On Sun, 6 Apr, 12:01 AM UTC
21 Sources
[1]
Microsoft releases AI-generated Quake II demo, but admits 'limitations' | TechCrunch
Microsoft has released a browser-based, playable level of the classic video game Quake II. This functions as a tech demo for the gaming capabilities of Microsoft's Copilot AI platform -- though by the company's own admission, the experience isn't quite the same as playing a well-made game. You can try it out for yourself, using your keyboard to navigate a single level of Quake II for a couple of minutes before you hit the time limit.

In a blog post describing their work, Microsoft researchers said their Muse family of AI models for video games allows users to "interact with the model through keyboard/controller actions and see the effects of your actions immediately, essentially allowing you to play inside the model." To show off these capabilities, the researchers trained their model on a Quake II level (which Microsoft owns through its acquisition of ZeniMax). "Much to our initial delight we were able to play inside the world that the model was simulating," they wrote. "We could wander around, move the camera, jump, crouch, shoot, and even blow-up barrels similar to the original game."

At the same time, the researchers emphasized that this is meant to be "a research exploration" and should be thought of as "playing the model as opposed to playing the game." More specifically, they acknowledged "shortcomings and limitations," like the fact that enemies are fuzzy, the damage and health counters can be inaccurate, and most strikingly, the model struggles with object permanence, forgetting about things that are out of view for 0.9 seconds or longer. In the researchers' view, this can "also be a source of fun, whereby you can defeat or spawn enemies by looking at the floor for a second and then looking back up," or even "teleport around the map by looking up at the sky and then back down."

Writer and game designer Austin Walker was less impressed by this approach, posting a gameplay video in which he spent most of his time trapped in a dark room.
(This also happened to me both times I tried to play the demo, though I'll admit I'm extremely bad at video games, especially shooters.) Referring to Microsoft Gaming CEO Phil Spencer's recent statements that AI models could help with game preservation by making classic games "portable to any platform," Walker argued this reveals "a fundamental misunderstanding of not only this tech but how games WORK." "The internal workings of games like Quake -- code, design, 3d art, audio -- produce specific cases of play, including surprising edge cases," Walker wrote. "That is a big part of what makes games good. If you aren't actually able to rebuild the key inner workings, then you lose access to those unpredictable edge cases."
[2]
Microsoft has created an AI-generated version of Quake
Tom Warren is a senior editor and author of Notepad, who has been covering all things Microsoft, PC, and tech for over 20 years.

Microsoft unveiled its Xbox AI era earlier this year with a new Muse AI model that can generate gameplay. While it looked like Muse was still an early Microsoft Research project, the Xbox maker is now allowing Copilot users to try out Muse through an AI-generated version of Quake II. The tech demo is part of Microsoft's Copilot for Gaming push, and features an AI-generated replica of Quake II that is playable in a browser. The Quake II level is very basic and includes blurry enemies and interactions, and Microsoft is limiting the amount of time you can even play this tech demo.

While Microsoft originally demonstrated its Muse AI model at 10fps and a 300 x 180 resolution, this latest demo runs at a playable frame rate and at a slightly higher resolution of 640 x 360. It's still a very limited experience, though, and more of a hint at what might be possible in the future. Microsoft is still positioning Muse as an AI model that can help game developers prototype games.

When Muse was unveiled in February, Microsoft also mentioned it was exploring how this AI model could help improve classic games like Quake II and bring them to modern hardware. "You could imagine a world where from gameplay data and video that a model could learn old games and really make them portable to any platform where these models could run," said Microsoft Gaming CEO Phil Spencer in February. "We've talked about game preservation as an activity for us, and these models and their ability to learn completely how a game plays without the necessity of the original engine running on the original hardware opens up a ton of opportunity."

It's clear that Microsoft is now training Muse on more games than just Bleeding Edge, and it's likely we'll see more short interactive AI game experiences in Copilot Labs soon.
Microsoft is also working on turning Copilot into a coach for games, allowing the AI assistant to see what you're playing and help with tips and guides. Part of that experience will be available to Windows Insiders through Copilot Vision soon.
[3]
You can now play a real-time AI-rendered Quake II in your browser -- Microsoft's WHAMM offers generative AI for games
Yesterday, Microsoft unveiled WHAMM, a generative AI model for real-time gaming, demonstrated with the 28-year-old classic Quake II. The interactive demo responds to user inputs via controller or keyboard, though the frame rate barely hangs in the low to mid-teens. Before you grab your pitchforks, Microsoft emphasizes that the focus should be on analyzing the model's quirks and not judging it as a gaming experience.

WHAMM, which stands for World and Human Action MaskGIT Model, is an update to the original WHAM-1.6B model launched in February. It serves as a real-time playable extension with faster visual output. WHAM uses an autoregressive model where each token is predicted sequentially, much like LLMs. To make the experience real-time and seamless, Microsoft transitioned to a MaskGIT-style setup where all tokens for the image can be generated in parallel, decreasing the dependency between tokens and the number of forward passes required.

WHAMM was trained on just over a week of Quake II gameplay data, a dramatic reduction from the seven years required for WHAM-1.6B. Likewise, the resolution has been bumped up from a heavily pixelated 300 x 180 to a slightly less pixelated 640 x 360. You can try out the demo yourself at Copilot Labs.

The model's ability to keep track of the existing environment, apart from the occasional graphical anomaly, while simultaneously adapting to user inputs is impressive, regardless of the atrociously bad input lag. You can shoot, move, jump, crouch, look around, and even shoot enemies, but ultimately, it's no more than a fancy showcase and can never substitute for the original experience. As expected, the model isn't perfect. Enemy interactions are described as fuzzy, the context length is limited, the model tracks vital stats like health and damage inaccurately, and the demo is confined to a single level. This announcement follows OpenAI's latest Ghibli trend, which has garnered a lot of negative attention.
While I'm no artist, there's a certain human element to every piece of creative work that AI cannot truly recreate. Yet, at AI's current rate of development, fully AI-generated games and movies could become a reality within the next few years, and that's where things are heading. The sweet spot lies in AI enhancing, not replacing, creative works, like Nvidia's ACE technology, which can power lifelike NPCs. Parts of this technology are already integrated into the life simulation game inZOI. From a technological point of view, WHAMM still represents a step up from previous attempts, which were often chaotic, incoherent, and teeming with hallucinations.
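The autoregressive-versus-MaskGIT distinction described above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not Microsoft's actual WHAMM code: the "predictor" here is a random stand-in for the network, and the vocabulary size and token count are made up. The point is the pass count: sequential decoding needs one forward pass per token, while MaskGIT-style decoding predicts every masked position at once, commits the most confident guesses each round, and finishes a whole frame in a handful of passes.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, N_TOKENS = 16, 64  # hypothetical vocabulary size and tokens per frame

def fake_predictor():
    """Stand-in for the network: a guess and a confidence score for
    every token position. A real model would condition on previous
    frames and the player's inputs."""
    return rng.integers(0, VOCAB, size=N_TOKENS), rng.random(N_TOKENS)

def autoregressive_decode():
    """WHAM-style: one committed token per forward pass."""
    tokens = np.full(N_TOKENS, -1)
    passes = 0
    for i in range(N_TOKENS):
        guesses, _ = fake_predictor()
        tokens[i] = guesses[i]
        passes += 1
    return tokens, passes

def maskgit_decode(steps=4):
    """MaskGIT-style: predict all masked positions in parallel, commit
    the most confident half each step, and refine the rest."""
    tokens = np.full(N_TOKENS, -1)
    for step in range(steps):
        masked = tokens < 0
        guesses, conf = fake_predictor()
        if step == steps - 1:
            n_commit = int(masked.sum())             # last step: commit everything
        else:
            n_commit = max(1, int(masked.sum()) // 2)
        # sort masked positions first, in order of descending confidence
        order = np.argsort(np.where(masked, -conf, np.inf))
        tokens[order[:n_commit]] = guesses[order[:n_commit]]
    return tokens, steps

ar_tokens, ar_passes = autoregressive_decode()
mg_tokens, mg_passes = maskgit_decode()
print(ar_passes, mg_passes)  # 64 forward passes vs. 4
```

With 64 tokens per frame, the sequential loop needs 64 network calls while the parallel version needs 4, which is the kind of reduction that turns roughly one frame per second into a playable frame rate.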
[4]
Microsoft Releases Generative AI-Produced Version of '90s Classic Quake II
The demo is built on Microsoft Muse, a generative AI model it rolled out earlier in 2025 that can copy existing games based on training data.

Microsoft has rolled out a free-to-play, completely AI-generated version of the '90s first-person shooter classic Quake II. Rather than using preset locations and enemies that are the same every time you play, like in a traditional video game, the new game's content is generated on the go by a Microsoft AI model, based on the content and style of the original Quake II. The new in-browser game is powered by Muse AI, a generative AI model designed by Microsoft to assist video game developers. Muse, developed in partnership with UK-based game studio Ninja Theory, was trained on data collected from real humans playing the 2020 Xbox game Bleeding Edge, including "more than 1 billion images and controller actions." Given time, Muse can slowly replicate the visuals and gameplay of an existing video game, according to Microsoft's research (provided it works as planned).

But if you're interested in giving the in-browser game a whirl, maybe temper your expectations for a modern gaming experience. The resolution is capped, the controls seem slow and laggy by modern standards, and Microsoft also appears to have capped the length of time you can play.

Microsoft has been vocal about the potential of the new Muse model to help preserve classic games, which run the risk of becoming lost media or simply languishing in obscurity. In a video announcing the model posted in February, Microsoft Gaming CEO Phil Spencer discussed how the new models open up "a ton of opportunity" because of their ability to "learn completely how a game plays without the necessity of the original engine running on the original hardware." Despite the potential for generative AI to assist developers, it's unknown how the game industry will react to these types of innovations in the long term.
We've seen creatives in other fields, from music to visual art to fiction, speak out about the potential of AI to deprive them of deserved revenue. Meanwhile, the video game industry was hit by some serious layoffs over the past year, including at Microsoft's own gaming division. Regardless, Microsoft certainly isn't the only tech giant making investments in generative AI for gaming. Last year Google showcased an AI-generated simulation of the iconic '90s shooter Doom, which, like Microsoft's recent effort, didn't rely on code from the original game. Meanwhile, start-up Virtual Protocols demonstrated a text-to-video-powered version of 1985's Super Mario back in September 2024.
[5]
Playing this AI-rendered version of Quake II feels like a fever dream in the worst way
Summary: Microsoft created a playable AI-generated Quake II demo using its WHAMM model. Gameplay experiences with the AI-generated Quake II are bizarre, with morphing enemies and distorted visuals. While the AI demo is impressive, it falls short in comparison to the actual Quake II gaming experience.

Developers have been prodding at concepts for using generative AI in gaming. One of these concepts involves having the AI generate the entire game, which is just as weird as it sounds. We've seen people do their own take on this concept by feeding an AI tons of visual data based on a game and then allowing players to play in a generated world. Microsoft has released its own take using Quake 2, and while it's surprisingly playable, it still feels like something I'd dream up while running a nasty fever.

AI-generated Quake II is really, really weird

As spotted by Tom's Hardware, Microsoft has made a playable version of Quake II that's entirely AI-generated. It's a tech demo for a model called WHAMM (World and Human Action MaskGIT Model) which Microsoft hopes will help developers create games. You can give this demo a spin for yourself over on the Copilot Gaming Experience, but if you'd rather not, I've given it a go myself.

The first time I played, enemies spawned normally; however, whenever they fired a gun, the muzzle flash would "stick" on them and they would slowly distort until they were a mess of colors and shapes. You can watch my second playthrough in the GIF above. This time, the enemies didn't even spawn; they were dead the moment I entered the room. Trying to look at them caused the AI to morph and change what the bodies looked like, changing from whole corpses to individual chunks, flashes of limbs in the corner of my eye that vanished when I looked at them.
Finally, I ran into a dark corner to see if the AI could "remember" where I was when I re-emerged, only to be dumped into a dark, hellish place with a fan at the end. I'll admit, it was a good way to experience a fever dream without the actual illness. Was it impressive for an AI? Sure. Would I play it over the actual Quake II? Not in a million years. Still, it was interesting to see how the AI "saw" the world of Quake, even if it killed all the bad guys before I could even fight them. Maybe a little more time in the oven. Or LAN parties.
[6]
Quake II runs on WHAMM, Microsoft's experimental AI for real-time gaming
In a nutshell: Microsoft has demonstrated Quake II running on a generative AI model for real-time gaming called WHAMM. While the game has full controller support, it predictably runs at very low frame rates. Microsoft says the demo showcases the model's potential rather than presenting a finished gaming product.

Microsoft's World and Human Action MaskGIT Model, or WHAMM, builds on its earlier WHAM-1.6B version launched in February. Unlike its predecessor, this iteration introduces faster visual output using a MaskGIT-style architecture that generates image tokens in parallel. Moving away from the autoregressive method, which predicted tokens sequentially, WHAMM reduces latency and enables real-time image generation - an essential step toward smoother gameplay interactions.

The model's training process also reflects substantial advancements. While WHAM-1.6B required seven years of gameplay data for training, developers trained WHAMM on only one week of curated Quake II gameplay. They achieved this efficiency by using data from professional game testers focusing on a single level. The model's visual output resolution also got a boost, going from 300 x 180 pixels to 640 x 360 pixels, resulting in improved image quality without significant changes to the underlying encoder-decoder architecture.

Despite these technological strides, WHAMM is far from perfect and remains more of a research experiment than a fully realized gaming solution. The model demonstrates an impressive ability to adapt to user input. Unfortunately, it struggles with lag and graphical anomalies. Players can perform basic actions such as shooting, jumping, crouching, and interacting with enemies. However, enemy interaction is notably flawed. Characters often appear fuzzy, and combat mechanics are inconsistent, with health-tracking and damage stat errors. The limitations extend beyond combat mechanics. The model has a limited context length.
The model forgets objects that leave the player's view for longer than nine-tenths of a second. This drawback creates unusual gameplay quirks like teleportation or randomly spawning enemies when changing camera angles. Additionally, the scope of WHAMM's simulation is confined to a single level of Quake II. Attempting to progress beyond this point freezes image generation due to the lack of recorded data. Latency issues further detract from the experience when scaled for public use. While engaging with WHAMM may be enjoyable as a novelty, Microsoft did not intend for it to replicate the original Quake II experience. Its AI developers were merely exploring machine-learning techniques they could use to create interactive media. Microsoft's team explored WHAMM's possibilities amid broader discussions about AI's role in creative industries. OpenAI recently faced backlash over its Ghibli-inspired AI creations, highlighting skepticism about whether AI can replicate human artistry. Redmond has positioned WHAMM as an example of AI augmenting rather than replacing human creativity - a philosophy echoed by Nvidia's ACE technology, which enhances lifelike NPCs in games like inZOI. While fully AI-generated games and movies remain elusive, innovations like WHAMM signal they could be right around the corner. Looking ahead, Microsoft envisions new forms of interactive media enabled by generative models like WHAMM. The company hopes future iterations will address shortcomings while empowering game developers to craft immersive narratives enriched by AI-driven tools.
[7]
Copilot Is Hallucinating a Playable Version of Quake II
Microsoft is demonstrating its new WHAMM model with a real-time generative version of Quake II. You can play the AI-generated game in your browser, although it's really just a proof of concept -- it's not very fun.

Real-time generative gaming will destroy or uplift the gaming industry, depending on who you ask. In any case, development of generative gaming models is accelerating at a rapid pace. Researchers have figured out how to run Doom within Google's GameNGen neural learning model, OASIS AI lets you play an insane generative version of Minecraft in the browser, and in February of 2025, Microsoft introduced its unique WHAM generative gaming system.

The WHAM-1.6B model that Microsoft showed off six weeks ago was impressive but impractical. It generated a single frame per second at 300 x 180 resolution, and it required seven years' worth of training data to create a "playable" game. Microsoft began work on the upgraded WHAMM (World and Human Action MaskGIT Model) shortly after debuting WHAM-1.6B, and the results are kind of shocking. Not only does WHAMM work at a 640 x 360 video resolution (roughly twice that of WHAM-1.6B), but it outputs images at a minimum of 10 FPS and managed to copy Quake II with just one week of training data.

A refined architecture is responsible for these improvements. Instead of using a typical autoregressive "LLM-like" system where the AI model generates one token at a time, WHAMM's MaskGIT setup can "generate all of the tokens for an image in as many generations as we want." In other words, the new model utilizes parallel processing to boost output speed, image quality, and predictive accuracy.

Games generated by WHAMM are, from a practical standpoint, not very fun to play. It looks blurry, smudgy, and crusty, the frame rate isn't ideal (though it's not all that different from what gamers experienced in 1997), and in-game enemies are practically unrecognizable.
The demo is excruciatingly laggy, too, though Microsoft blames the "noticeable latency" on its web player, rather than the model itself. WHAMM also suffers from the "short-term memory" problem that we see in other generative gaming models. As a predictive model, WHAMM generates new frames by looking at previous frames -- it's bad at keeping track of health and ammo, enemies may disappear if you look away from them (or randomly appear for no reason), and if you push your character against a wall or stare at the floor, you may be teleported to a different location on the map.

However, in my testing, WHAMM seems to have fewer "short-term memory" problems than some other models. Its 0.9-second context length is just good enough to prevent the brain-bending, trippy craziness that I experienced when playing with generative Minecraft, although context length is clearly a huge challenge that Microsoft will need to overcome.

I should also note that WHAMM was only trained on the first level of Quake II. If you get on the elevator at the end of the level, the model freezes. So, Microsoft's assertion that WHAMM can be trained on a week's worth of video data is kind of misleading -- the model requires less training data than WHAM-1.6B, but the amount of data required to generate an interactive game will vary based on content length, game complexity, and other factors.

As for how this technology will be used in the future -- well, Microsoft knows that real-time generative AI can produce "new kinds of interactive media," but it's still exploring what that media should be. You can play the real-time generated version of Quake II at the Copilot Labs website. Games are timed and will reset when the timer runs out. Again, this game is just a proof of concept, so don't expect too much from it. Source: Microsoft
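The 0.9-second context window behaves like a sliding buffer over the last nine generated frames. A minimal sketch, assuming a 10fps output rate and treating each frame as nothing more than the set of objects visible in it (the class and names below are illustrative, not part of WHAMM):

```python
from collections import deque

FPS = 10
CONTEXT_SECONDS = 0.9
CONTEXT_FRAMES = int(FPS * CONTEXT_SECONDS)  # 9 frames of "memory"

class FrameContext:
    """Toy stand-in for the model's conditioning window: the model can
    only draw on what appeared in the last CONTEXT_FRAMES frames."""
    def __init__(self):
        self.frames = deque(maxlen=CONTEXT_FRAMES)  # old frames fall off

    def step(self, visible_objects):
        self.frames.append(set(visible_objects))

    def remembers(self, obj):
        return any(obj in frame for frame in self.frames)

ctx = FrameContext()
ctx.step({"enemy", "barrel"})    # an enemy is on screen
for _ in range(CONTEXT_FRAMES):  # stare at the floor for 0.9 seconds
    ctx.step({"floor"})
print(ctx.remembers("enemy"))    # False: the enemy fell out of the window
```

This is why players can "defeat" enemies by looking at the floor: once every frame containing the enemy has left the window, the model has no evidence the enemy ever existed and simply generates a plausible empty room.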
[8]
Microsoft Copilot just generated an AI version of one of the most iconic shooters of all time, and you can play it for free
Created by Microsoft's Muse AI, the tech demo is playable in browser Have you ever played an AI-generated video game? If not, now's your chance, as Microsoft has created a playable AI version of Quake II using Copilot. Quake II, one of the best shooters of all time, was released in 1997 and was developed by id Software, creators of Doom. The AI-generated version of Quake is running on Microsoft's Muse AI model which can generate gameplay from prompts and is fully playable in a web browser. While the gameplay is pretty laggy and not going to be anywhere near as good as playing Quake II on original hardware, it's insanely impressive to see AI's game development capabilities. The game runs at 640 x 360 and I'd estimate the frame rate is at around 15fps. I did find that the frame rate fluctuated depending on each playthrough, and while one attempt was very laggy, another was almost passable. I did notice, however, that enemies were incredibly blurry, causing the horror element of the game to largely lose its intended effect. Microsoft is limiting the amount of time you can play Quake II via Muse AI, so don't expect to finish the whole game with this tech demo. To play Quake II in your web browser, simply head to the Copilot Gaming Experience website. From there, you'll need to confirm you're over the age of 18, and then Quake II will load. I've tested the game in Firefox and Chrome and found the results to be of a similar nature. Controller input is shown to the left of the game, and when you reach your allotted time limit for gameplay, you can simply start a new game. While I don't think anyone is going to get a great gaming experience out of this tech demo, it's so cool and nostalgic that it's worth testing out. This isn't the first time we've seen AI recreate iconic video games. Last year, Google's AI gaming engine, GameNGen, recreated DOOM in real time. I wonder what's next for AI game development. 
Maybe we'll see the recreation of Pokemon Red or Blue next, although I doubt Nintendo would be happy about that...
[9]
Microsoft unveils AI-generated demo 'inspired' by Quake 2 that runs worse than Doom on a calculator, made me nauseous, and demanded untold dollars, energy, and research to make
Before the disaster that was Stadia, Google demoed its game streaming tech via a free version of Assassin's Creed: Odyssey you could play in your browser. My fiancée has fond memories of whiling away slow nights at work playing this massive triple-A game on a crummy library OptiPlex. Whatever came after with Stadia and game streaming in general, that demo felt like black magic. If a genuinely impressive tech demo can lead to a notorious industry flop, what about a distinctly unimpressive one? What are the ethics of expending massive amounts of capital, energy, and man hours on not even a worse version of a game from 30 years ago, but a vague impression of it? These are the questions I pondered after having gotten motion sickness playing a game for the second time in my life with Microsoft's Copilot AI research demo of Quake 2. "This bite-sized demo pulls you into an interactive space inspired by Quake II, where AI crafts immersive visuals and responsive action on the fly," reads Microsoft's Q&A page about the demo. "It's a groundbreaking glimpse at a brand new way of interacting with games, turning cutting-edge research into a quick and compelling playable demo." This demo is powered by a "World and Human Action Model" (WHAM), a generative AI model "that can dynamically create gameplay visuals and simulate player behavior in real time." Perusing Microsoft's Nature article on the tech, it appears to operate on similar principles to large language models and image generators, using recorded gameplay and inputs for training instead of static text and imagery. This demo is not running in the original game's id Tech 2 engine. However Microsoft produces this demo, it's some kind of bespoke engine with an output that resembles Quake 2 because the AI model behind it was trained on Quake 2. 
I'm reminded of those demakes of Doom for Texas Instruments calculators, but instead of marshalling limited resources to create an inferior impression of a pre-existing game, the Copilot Gaming Experience is the result of Microsoft's (and the entire tech industry's) herculean push for generative AI. I don't know what the discrete Copilot Gaming project costs, but Microsoft has invested billions of dollars into compute, research, and lobbying for this technology. On Bluesky, developer Sos Sosowski pointed out that Microsoft's Nature paper lists 22 authors, as opposed to the 13 developers behind Quake 2. Based on the paper, Sosowski also estimated that Microsoft's new model required more than three megawatts of power to begin producing consistent results. That's assuming use of an RTX 5090, which Microsoft likely did not have access to given the timing of the paper's publication, but it's still helpful to get an idea of the scope of this project's power draw. According to battery manufacturer Pkenergy, a single megawatt requires 3,000-4,000 solar panels to produce. Despite all of that investment, the demo is not good. The Copilot Gaming experience runs like a slideshow in a tiny window at the center of the browser, its jerkiness and muddled, goopy visuals -- familiar to anyone who's seen an AI-generated video -- gave me a rough case of motion sickness after bare minutes of play. The only other game to ever have set my belly a rumblin', EvilVEvil, did so closer to the hour mark. And while chatbots will tell you to eat rocks and drink piss, the Copilot Gaming Experience has its own fun "hallucinations" -- the surreal, unnervingly confident errors produced by generative AI models that massive amounts of money, compute power, and uninhibited access to copyrighted material can't seem to address. 
Looking at the floor or ceiling at any time in the Copilot Gaming Experience has about an 80% chance of completely transforming the room in front of you, almost like you teleported somewhere else in the level. Of course, there is no "level," goal, or victory condition: The Copilot Gaming Experience is just constantly generating a new Quake 2-like bit of environment in front of you whenever you turn the corner, with what came before seemingly disappearing as you go. One such warp moment sent me to the Shadow Realm, a pitch black void out of nowhere which took some finagling to get out of. There are "enemies," but when I killed one it just deformed into some kind of blob. Then I walked past it, turned around, and the hallway had completely changed, taking the blob with it. Like so much of generative AI or the blockchain boom before it, I can imagine the "Well, it's just a WIP, first step type of thing" defense of what I was subjected to, but I'm just not convinced. Whatever specific compelling use cases may exist for generative AI tools, that's not what we've been aggressively sold and marketed for the past two years and counting, this insistence of cramming it into everything. Google Gemini is now constantly asking if I want its help writing, like some kind of horrible, latter-day Clippy. Forced mass-adoption of this stuff by consumers is here, now, demanding our approval, attention, and precious time. A public tech demo exists to impress, and the Copilot Gaming Experience does not. Doom on a calculator, but we had to boil a lake or two to get it and are being told it's the future of games. I reject this future. Not only do I find it philosophically and ethically repugnant, it also made my tummy hurt.
[10]
Microsoft unveils Quake 2 "inspired" AI-created demo, but it's practically unplayable
Microsoft has created a playable demo "inspired" by Quake 2 using its new AI tool, but it's practically unplayable.

Back in February, Microsoft unveiled its Muse tool that uses generative AI to aid "gameplay ideation". Now we can see it in action with something that looks like Quake 2, in a tech demo to show how Microsoft's AI tools can "simulate interactive gameplay". Hosted in Copilot Labs and playable in a web browser, the demo "dynamically generates gameplay sequences inspired by the classic game Quake 2", according to a Q&A. "Every input you make triggers the next AI-generated moment in the game, almost as if you were playing the original Quake 2 running on a traditional game engine."

To be clear, then, this is not running the game using id's engine. Instead, Microsoft's World and Human Action Model (WHAM) uses generative AI to dynamically predict the next action in the game based on analysing player data.

The results, though, are pretty appalling. It may resemble Quake 2, but the visuals (and enemies in particular) are blurry, the controls lag, and the frame-by-frame gameplay is enough to give you a headache. Some of these limitations have been acknowledged by Microsoft, but worst of all is the context length of just 0.9 seconds of gameplay (which amounts to 9 frames at 10fps). Practically, that means the AI doesn't remember objects or level layouts, so if you look at the floor or ceiling and back again the view will have completely changed.

Is it interesting as an example of what AI can do now? Sure. Is it a playable demo? Barely.

Geoff Keighley shared a video of the demo on social media, which received plenty of ire in the replies. "You can play Quake on a calculator bro why are you doing this in the most resource intensive way possible," reads one response. "I don't know what this shit is but it ain't Quake," reads another.
Other responses have criticised Keighley himself for sharing the video, following his words at last year's The Game Awards responding to layoffs across the industry. "This feels extremely disconnected from what you said just four months ago about the insane and devastating developer layoffs," reads a reply. Last year, Google also unveiled a similar tool that can generate a playable 3D world. AI is clearly changing video game development, and the past year alone has brought a steady stream of these supposed advancements.
[11]
Microsoft's 100% AI-generated Quake 2 made us nauseous but John Carmack, the game's OG coder, loves it: 'What? This is impressive research work!'
The research, folks, the research. Absolutely not the frame rate. I hope.

Over the weekend, Microsoft released a technology demonstration from its AI Copilot research labs, showcasing a generative AI creating Quake 2 from scratch. Or something resembling it, at least, as the original game never made us feel as nauseous as this one does. Still, who are we to complain when John Carmack, the lead programmer behind id Software's seminal game, was genuinely impressed by it. To be fair, I don't think he was referring to the demo's graphics or performance, as he specifically said on X, "This is impressive research work!" in response to someone heavily criticizing it.

The research in question was published in the science journal Nature, and to someone like me, who got into 3D graphics programming on PCs because of the likes of Quake 2, it reads like some ancient alien script, carved into a mysterious substance. I'm certainly not anywhere near experienced enough in AI programming to judge the relative merits of the work of a large group of professionals. Arguably, Carmack is qualified, so if he says it's impressive, then I'm certainly in no position to disagree. Mind you, he has a vested interest in AI, having started an AGI (artificial general intelligence) company called Keen Technologies back in 2022.

But while the research surely is top-notch, I can't help but feel the end results very much aren't. Yes, this is a very early tech demo, and over recent years we've all seen generative AI go from churning out utter nonsense to producing startlingly realistic and accurate audio and video. But it's not the snail-like frame rate or ghostly rendering that I have an issue with. I'm not bothered by the fact that the demo struggles to maintain a comprehensive grasp of a level in Quake, a game that's almost 30 years old. For me, the problem is what it's taken to generate the very short but 'playable' demo.
From the research paper itself: "We extracted two datasets, 7 Maps and Skygarden, from the data provided to us by Ninja Theory. The 7 Maps dataset comprised 60,986 matches, yielding approximately 500,000 individual player trajectories, totalling 27.89 TiB on disk. This amounted to more than 7 years of gameplay." The original Quake 2 was created by a handful of people -- a few designers, programmers, and artists. They didn't need 28 TB worth of gaming data to do this, just their own ingenuity, creativity, and knowledge. They didn't need hugely expensive GPU servers, drawing many kilowatts of power, to render the graphics they generated. I have no problem with research just for the sake of research (as long as it's legal, and morally and ethically sound, of course) and at the end of the day, it's Microsoft that's spent its money on the project, not taxpayers. But if I were a shareholder, I'd be wondering if this is money well spent, especially compared to how much a semi-decent game development team costs to run for the period of time it would take to make a similar Quake 2-on-LSD game. I have no doubt that at some point in the near future, AI will be able to generate something far more impressive and playable, but it's certainly not going to be cheaper -- in terms of computing and electrical power required -- than a group of talented individuals sat in front of a few humble PCs. If that ever comes to pass, then I'll be very, very impressed. But also deeply concerned for the future of game development.
[12]
I Played the AI Version of 'Quake II,' and Here's How It Went
AI tools are suddenly everywhere, and you've probably tried using an AI program to generate text and images, or perhaps even audio and video. But AI isn't just coming for media you passively experience: Companies are also targeting interactive entertainment, including video games. We've seen examples of this over the past year or so. Google's Genie model, for example, aims to generate playable video game environments from user prompts. Earlier this year, Microsoft unveiled a similar AI model, called Muse, and that model's first mainstream experience is now here: The company is offering an experimental version of Quake II, a game originally released in 1997, that is powered by AI. I've now played the AI version of Quake, and it's...bizarre, to say the least. It feels like playing a video game in a dream. Not because it's good, but because everything about it feels unstable and ephemeral. Other than purely for the novelty, I'm really not sure why this thing exists. When you first load up the "Copilot Gaming Experience," you agree to a popup warning you the game is rated M and that you are indeed 18 years or older -- even though M-rated games are supposed to be 17 and up. Consider this your first warning that something is off. The game loads in a small window in the center of the screen. Immediately, you can tell it's not quite right. Sure, it's recognizably a video game: It's from a first-person perspective, with an animated hand holding a pistol in the bottom-right. You have a health bar, an ammo counter, and a weapon indicator. You can recognize that you are in a room, with clearly-defined (yet blocky) features. However, something about the entire experience just feels wrong, and the effect worsens as soon as you press a button. There is what feels like a full second of delay between you pressing a key on your keyboard and the action taking place on-screen.
Lag would be one thing, but that isn't the end of it: When you do move, the environment shifts ever so slightly, like nothing around you is actually stable. For example, there are no enemies when you first start, but they may appear as you move around a room. They don't spawn with any kind of animation or care -- you move a frame, and all of a sudden, an enemy is forged. (More on that instability in a moment.) The game plays a bit like Superhot, in that when you stop moving, nothing happens. Enemies don't attack, and in fact just freeze up entirely. It's only when you take an action that you'll notice things change on the screen: A monster may move, shape-shift (due to the AI, not a Quake gameplay mechanic) or attack, or perhaps the room itself will change entirely. You can look at the floor, spin around in a circle two or three times, then look up, and find yourself in another corner of the map. The game does try to stay consistent. You always start in the same room, and if you're lucky, you'll be able to maintain the integrity of the map as you run between areas -- hopefully without being shot at by a newly-generated blurry enemy. But as you move and look around, you can almost feel the ground shifting beneath you, as if the game world could morph into something else at a moment's notice. This, really, is what gives it the feeling of being in a dream. Sometimes, you run into a room and encounter an enemy, firing away. But if you run past it, then turn around, it'll be gone. You don't even need it to exit your field of vision: By just strafing from left to right to "dodge" its attacks, I watched one enemy "fall apart" onto the floor, as if I was using the moving ground to erase the monster. You don't have the same problem, as it's impossible to die. When you are attacked, your health counter appears to track downwards at a reasonable rate. However, you'll notice the numbers start to bounce around at random.
Even if your health hits zero, your character won't collapse, nor will you get a "Game Over" screen. The counter will simply bounce around single digits as you take damage forever, or until you kill the monster (or the AI decides it's time to disappear it entirely). The game can simply break at a moment's notice. On one run, I was about to enter a hallway, decided to do a 360-degree scan of the room before I did, and when I made it all the way around, the hallway had turned into an elevator. When I walked up to the elevator button, I triggered a loading screen that never finished loading. Another time, I looked down, spun around, and looked back up to find myself in another room entirely. As I stepped around, I was suddenly losing health, but there was no attacker in sight -- at least, not one I could see. Just as I was about to fire around the room at random, the game paused and never came back, just like the elevator incident. If the game doesn't freeze on its own, the site will time you out, forcing you to start a new game if you want to continue playing. The game is powered by Microsoft's WHAM model, which the company says was trained on human gameplay data to guess what should be displayed frame-by-frame, as well as what the player might do next. In its research paper, Microsoft enthusiastically explains what's so great about an AI model that can generate video game environments. I'm not so sure I agree. For one, this Quake experience isn't fun. There's no objective or challenge, other than to see how long you can go without breaking the experience. You can kill enemies, sure, but they can't kill you. In fact, the AI generation may get to the enemy before they can get to you, if you simply move the wrong way. With time, I'm sure Microsoft will present a more polished version of this experience.
Perhaps there will be more consistency as you move around the map; perhaps enemies will actually stand their ground, and be capable of ending your game. But even if we get to the point where AI spits out an entirely playable experience, why would we want that? Who wants to effectively click a randomizer button and play whatever amalgamation of real video games the computer program concocts for us? That's not to say there aren't potential, practical uses for AI in game development -- but we should be looking at those as tools to add to a game developer's workflow, and not something that makes that developer obsolete. If this is the current state of AI-generated video games, however, flesh-and-blood game developers probably aren't going anywhere anytime soon.
[13]
Microsoft Used AI to Recreate Quake II
On April 4, Microsoft unveiled an interactive gameplay demo in Copilot Labs. It is an AI rendition of Quake II powered by Muse, its generative AI model for video game visuals. Quake II is a first-person shooter (FPS) game published in 1997 by Activision, whose parent company, Activision Blizzard, is now owned by Microsoft. To make this possible, Microsoft introduced the World and Human Action MaskGIT Model (WHAMM), a real-time playable extension of its previous model. The process involved training WHAMM on approximately one week of curated gameplay data from Quake II, a significant reduction from the seven years of data used for its older model to generate similar results. This focused data collection, aided by professional game testers, allowed for training on a single level with intentional gameplay. The blog post explained the experience in brief, saying, "Much to our initial delight, we were able to play inside the world that the model was simulating. We could wander around, move the camera, jump, crouch, shoot, and even blow-up barrels similar to the original game." It added, "Additionally, since it features in our data, we can also discover some of the secrets hidden in this level of Quake II." Microsoft stated some of its limitations, including that enemy interactions are often fuzzy and combat can be inaccurate. The short context length (0.9 seconds) leads to the model "forgetting" objects out of view, causing inconsistencies. Counting, particularly health values, is unreliable. The experience is limited to a single part of one Quake II level. Finally, making WHAMM widely available introduces noticeable latency. Microsoft emphasises that this is a research exploration to understand the potential of AI in creating new interactive media.
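The "forgetting" behaviour described above follows directly from a fixed-length context: the model conditions only on the most recent frames, so anything older simply vanishes from its input. A minimal sketch of the idea (the function names and set-of-objects frame representation here are illustrative, not Microsoft's actual implementation):

```python
from collections import deque

# 0.9 s of context at 10 fps = 9 frames (per Microsoft's stated limits).
CONTEXT_FRAMES = 9

# Rolling window of recent frames; the oldest frame silently falls out.
context = deque(maxlen=CONTEXT_FRAMES)

def observe(frame):
    """Push a new frame into the model's context window."""
    context.append(frame)

def model_can_remember(obj):
    """The model can only condition on objects still inside the window."""
    return any(obj in frame for frame in context)

# An enemy last seen 10 frames (1.0 s) ago is outside the 0.9 s window,
# so the next prediction has no evidence it ever existed.
for i in range(10):
    observe({"enemy", "floor"} if i == 0 else {"floor"})

assert not model_can_remember("enemy")  # out of view too long: forgotten
assert model_can_remember("floor")      # still present in recent frames
```

This is why looking at the floor for a second and then back up can "despawn" an enemy: once its frames age out of the window, the model regenerates the scene without it.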
[14]
Play Quake II generated by AI: Microsoft's Copilot Gaming demo
Microsoft offers a playable, AI-generated tech demo of the classic game Quake II. This demonstration utilizes Microsoft's new Muse AI model, initially unveiled as part of the company's foray into the Xbox AI era earlier this year. While initially presented as a Microsoft Research project, the tech giant is now allowing users of its Copilot service to experience Muse firsthand through this unique gaming application. This intriguing tech demo is a key component of Microsoft's "Copilot for Gaming" initiative. It presents an AI-generated rendition of a Quake II level that is directly playable within a web browser. While the current iteration features a simplified environment with somewhat blurry visuals for enemies and interactions, and access is currently time-limited, it offers a fascinating glimpse into the potential of AI in game development and preservation. The performance of this AI-generated Quake II demo represents a significant step forward for the Muse model. Initially showcased at a modest 10 frames per second and a 300 x 180 resolution, this latest demonstration achieves a playable frame rate at a slightly improved 640 x 360 resolution. Despite its current limitations, it provides a tangible preview of future possibilities within the gaming landscape. Microsoft continues to position its Muse AI model as a valuable tool for game developers, particularly in the early stages of prototyping new games. During Muse's initial announcement in February, Microsoft also highlighted its potential in revitalizing classic games like Quake II and making them accessible on modern hardware. "You could imagine a world where from gameplay data and video that a model could learn old games and really make them portable to any platform where these models could run," stated Microsoft Gaming CEO Phil Spencer in February, emphasizing the potential for game preservation. 
"We've talked about game preservation as an activity for us, and these models and their ability to learn completely how a game plays without the necessity of the original engine running on the original hardware opens up a ton of opportunity." This Quake II demo indicates that Microsoft is actively training the Muse AI on a broader range of games beyond its initial demonstrations with Bleeding Edge. It is anticipated that Microsoft will release more of these short, interactive AI-powered game experiences through Copilot Labs in the near future. Furthermore, Microsoft is actively developing Copilot into a comprehensive in-game assistant, capable of providing tips and guides based on real-time gameplay analysis through Copilot Vision, which will soon be available to Windows Insiders.
[15]
Xbox generative AI creates playable version of Quake 2
TL;DR: Microsoft has released a playable version of Quake II created by its generative AI, Muse, which generates gameplay on-the-fly. Microsoft has released a playable version of Quake II that was created entirely by its generative AI technology. A little while ago, Microsoft announced Muse, generative AI technology that can create gameplay on-the-fly. Now the tech has been updated and is able to push 10 FPS thanks to training data from an old-school PC classic: Quake II. Anyone can play an AI-generated, very janky and fever dream-like version of Quake II via Copilot Labs right in their browser. Be forewarned: There's a time limit, and there's not a lot of consistency. Don't expect this to be like an emulator. For Quake II, Microsoft didn't collect data from players. Instead, it used professional QA game testers to help train the AI. Microsoft will continue iterating on its Muse AI technology over time, but developers still have a choice whether or not they want to use AI in their games. From Microsoft's own description: "A research demo at the intersection of gaming and AI. Welcome to an experimental, AI-powered gameplay experience hosted in Copilot Labs. Powered by the Muse World and Human Action Model (WHAM) built by Microsoft Research, this tech demo offers an early look at how generative AI can simulate interactive gameplay. In this real-time tech demo, Copilot dynamically generates gameplay sequences inspired by the classic game Quake II. Every input you make triggers the next AI-generated moment in the game, almost as if you were playing the original Quake II running on a traditional game engine. Enjoy the experience, share your thoughts, and help shape the future of AI-powered gameplay experiences. This was made possible by working with professional game testers to collect the data, and by focusing on a single level with intentional gameplay ensuring we collected enough high quality and diverse data." What games are available in the initial experience in Copilot Labs?
The initial experience on Copilot Labs is powered by a Copilot Gaming Experiences model trained on gameplay from Quake II.
[16]
Microsoft Is Letting You Play an AI-Generated Game Demo of Quake II
Microsoft released an interactive real-time gameplay experience of Quake II in Copilot Labs last week. To build the artificial intelligence (AI) gameplay, the Redmond-based tech giant used its recently released Muse AI models and a new approach dubbed the World and Human Action MaskGIT Model (WHAMM). The game demo is currently available as a research preview to everyone, and it comes with the world generation of the game and all the usual mechanics. Microsoft also listed several limitations in the gameplay of the AI-generated experience. In a blog post, Microsoft researchers detailed the AI-generated gameplay and how they were able to build it. AI-powered 2D and 3D game generation has been an active area of interest for researchers, as it tests the technology's capability to generate real-time world environments and adapt them to the different mechanics a human user employs. It is also said to be a good way to see whether AI models can be trained to take on real-world tasks by controlling robots as physical AI. Notably, Quake II is a 1997 first-person shooter published by Microsoft-owned Activision. It is a 3D level-based game with a diverse range of mechanics, including jumping, crouching, shooting, environment destruction, and camera movements. The game is available via Copilot Labs, and users can currently experience a single level for about two minutes using a controller or the keyboard. Coming to the development process, the researchers said they built on the Muse AI models and the World and Human Action Model (WHAM) to develop the new WHAMM approach. WHAMM is the successor to WHAM-1.6B, and can generate more than 10 frames per second, enabling real-time video generation. The gameplay's resolution output has been kept at 640×360 pixels. Microsoft says one of the key improvements in WHAMM's speed came from using the MaskGIT (Masked Generative Image Transformer) setup instead of WHAM-1.6B's, as frame rates went from one frame per second to 10+.
The MaskGIT setup allowed the researchers to generate all of the tokens for an image in a limited number of forward passes. With this, the AI model can predict every masked token of a frame in parallel, in real time, allowing for a smoother experience. While the core gameplay is quite similar to the original game, Microsoft also listed several limitations with the current demo. Since the game environment is generated using AI, it is merely an approximation of the original game and not an identical replication. Enemy interaction sometimes leads to fuzzy image generations, and the combat can be incorrect. WHAMM currently has a context window of 0.9 seconds (9 frames at 10fps). As a result, the model forgets about objects that go out of view for longer than this. Microsoft says this can give rise to scenarios where a user turns around and finds an entirely new area, or looks at the sky and back down to find themselves moved to a different part of the map. Further, the game also has significant latency because it has been made available to everyone.
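The parallel-decoding idea behind that speedup can be pictured with a toy schedule. This sketch (with random guesses standing in for the real transformer, and all names and constants hypothetical) only illustrates the MaskGIT-style loop: each pass proposes every masked token at once, commits the most confident fraction, and re-masks the rest, so a whole frame settles in a handful of passes instead of one pass per token:

```python
import random

def maskgit_decode(num_tokens=16, num_passes=4,
                   vocab=("wall", "floor", "enemy", "sky")):
    """Toy MaskGIT-style parallel decoding schedule (illustrative only)."""
    tokens = [None] * num_tokens  # None marks a masked token
    for p in range(1, num_passes + 1):
        masked = [i for i, t in enumerate(tokens) if t is None]
        if not masked:
            break
        # Stand-in for the model: one forward pass proposes a token and a
        # confidence score for *every* masked position simultaneously.
        guesses = {i: (random.choice(vocab), random.random()) for i in masked}
        # Commit only the most confident fraction; re-mask the rest for the
        # next pass (the final pass commits everything that remains).
        keep = max(1, len(masked) * p // num_passes)
        ranked = sorted(guesses.items(), key=lambda kv: kv[1][1], reverse=True)
        for i, (token, _confidence) in ranked[:keep]:
            tokens[i] = token
    return tokens

frame = maskgit_decode()
assert all(t is not None for t in frame)  # whole frame decoded in 4 passes
```

An autoregressive decoder would need one forward pass per token (hundreds per frame); here the same 16 tokens settle in 4 passes, which is roughly where the reported jump from one frame per second to 10+ comes from.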
[17]
Microsoft's AI-powered Quake 2 demo makes me sick, not just because it's wrong on every level, but because I literally felt queasy playing it
If this is "a glimpse into next-generation AI gaming experiences," I'd hate to see the whole thing. We all know generative AI in video games is bad. It displaces human jobs, represents the very antithesis of art as a means of creative self-expression, and is horrible for the environment. But even if you can get past the immense ethical problems, Microsoft's new AI-powered Quake 2 demo, part of its broader Copilot Gaming Experiences program, proves the technology just isn't there purely from a gameplay perspective. I don't know what I expected when I booted up this browser-based, uh, experience, which Microsoft says "offers an early look at how generative AI can simulate interactive gameplay," but what I didn't expect were gently amorphous environments, enemies that literally fell apart when I walked past them, and WASD controls as the only input option. I still don't know if it's the god-awful frame rate or the aforementioned shapeless environments, but I was almost instantly nauseated playing the thing. Yeah, I don't know what else to say here. This thing sucks. Not just from an ethical standpoint, from which it sucks majorly, but as a tech demo, as an experience, as a game. I don't want to ever play anything like that ever again. "By generating gameplay in real time, the underlying Muse shows how classic games like Quake II can be reimagined through modern AI techniques," Microsoft boasts, but if this indeed offers a peek behind the curtain at the potential for AI-generated video games, then that curtain needs to be boarded up with two-by-fours, locked up, and carefully guarded like it's shielding the world from a zombie outbreak. Just no, Microsoft. No on every level. Bad Microsoft.
[18]
You Can Now Play Microsoft's AI-Generated Version of Quake II in Your Browser
In February 2025, Microsoft unveiled an AI model called 'Muse', which can generate gaming visuals and controller actions as you continue to play the game. Microsoft stated that Muse is a World and Human Action Model (WHAM) that leverages the power of AI to generate entirely new sequences of gameplay. It uses a Transformer-based model. After scaling the Muse model over 1 million training updates, Muse learnt basic movements, geometry, correct interaction, flying mechanics, and more. Now, finally, Microsoft has launched Copilot Gaming Experiences as part of the Copilot Labs program, where the company demonstrates experimental AI demos and experiences. Microsoft has taken the classic Quake II game and used the Muse AI model to generate gameplay in real time. It dynamically generates the gameplay sequences, and after every user input, a new AI-generated moment is created instantly. Best of all, it's freely available for anyone to play the AI-generated version of Quake II in a web browser. I played Microsoft's AI version of Quake II, and it was decent as an early proof of concept. You can move around with WASD keyboard controls and perform actions as well. However, there is no mouse support, which is a bummer. The graphics quality is not great right now, but this is an early demo, and with future improvements, generative AI may deliver great results in gaming, as we have seen with Nvidia's multi-frame generation in DLSS 4. Having said that, there is a short time limit (about 2 minutes) that prevents users from fully exploring the AI-generated game. During my gameplay, I couldn't find any goals or objectives, and most interestingly, objects and enemies kept disappearing. So yes, there is no consistency in the AI version of Quake II. For what it's worth, Google DeepMind also demonstrated similar AI technology for gaming in 2024.
GameNGen is Google's game engine, which is powered by a diffusion-based AI neural model that allows real-time interaction in a complex environment. Google showed that GameNGen can simulate the classic DOOM game at 20 FPS on a single TPU. Next, Google's Genie 2 is a large foundation world model (also diffusion-based) that can generate diverse gaming environments and can respond to keyboard controls. It can also handle gravity, reflections, smoke, lighting -- nearly all physics concepts. Now, with the early advent of AI in gaming, it remains to be seen what the future holds for the gaming industry.
[19]
Fans mock Microsoft's unplayable AI Quake II remake in Copilot
The use of AI to make video games is already causing controversy, and Microsoft knows that. Minecraft, which it owns the rights to, was copied for the AI-generated game Oasis last year. But now Microsoft is joining in with an experiment of its own: starting with an AI remake of Quake II in Copilot. The 'Copilot Gaming Experience' is an AI-made imitation of Quake II made using Microsoft's World and Human Action MaskGIT Model (WHAMM). And despite the purported technical advances, it quickly falls apart if you try to play it. In its research blog, Microsoft says WHAMM can generate "consistent and diverse gameplay sequences and persist user modifications" while "every frame is created on the fly by an AI world model." But while the AI-generated 'gaming experience' may look like Quake II initially, the functionality isn't there. And as for that consistency... erm. Look down and then up again and you find you're in a completely different place. Quake II fans are not impressed. "Microsoft will literally do anything but develop real video games," one person wrote on X. "This is just a regurgitation of quake 2. A game that can be played on almost anything (far better). This isn't a new experience or preservation. It's just bastardization," another fan wrote. WHAMM is part of Microsoft's Muse family of world models for video games. To be fair, the AI Quake II is not intended as a final product but as a research project and a demonstration of the tech's potential. The company recognises that WHAMM cannot "fully replicate the actual experience of playing the original Quake II game" and that the project is "intended to be a research exploration of what we are able to build using current ML approaches".
It says it wants to highlight how "future models could be improved, enabling new kinds of interactive experiences and empowering game creators to bring to life the stories they wish to tell." But it's hardly selling the potential for all-out AI-generated games when the inconsistencies and hallucinations make them unplayable interpretations of video recordings (Microsoft doesn't clarify if it trained its model on real Quake II footage, but it looks likely). "You can play Quake on a calulator. Why are you doing this in the most resource intensive way possible?" one person commenting on X wonders. Meanwhile, the game studio Team Kill Media doesn't see AI as taking over from developers yet. It wrote on X: "AI tools can be a great thing to help people be more productive if used correctly and can even help with learning, but I do not believe this will ever work like this nor should it. The concept of producing a game solely with generative AI is terrible. Human touch, artistry, direction, imagination and vision needs to always be present, otherwise it will just be soulless imitation." You can 'play' the AI Quake II via Copilot. What do you think? A valid exploration of state-of-the-art AI or pointless waste of time? Let us know in the comments.
[20]
Microsoft's Quake 2 AI Prototype Sparks Debate Online - IGN
"I had a better experience literally just imagining the game in my head." Microsoft has created a playable "interactive space inspired" by Quake II using AI, and it's sparked a vociferous debate online. As spotted by PC Gamer, the demo is powered by Microsoft's recently announced Muse and the World and Human Action Model (WHAM) AI system, and "can dynamically create gameplay visuals and simulate player behavior in real-time," which means a semi-playable environment has been generated entirely through AI and without an in-game engine. "In this real-time tech demo, Copilot dynamically generates gameplay sequences inspired by the classic game Quake II," Microsoft explained. "Every input you make triggers the next AI-generated moment in the game, almost as if you were playing the original Quake II running on a traditional game engine. Enjoy the experience, share your thoughts, and help shape the future of AI-powered gameplay experiences. "This bite-sized demo pulls you into an interactive space inspired by Quake II, where AI crafts immersive visuals and responsive action on the fly. It's a groundbreaking glimpse at a brand new way of interacting with games, turning cutting-edge research into a quick and compelling playable demo." All that sounds pretty impressive, but the demo itself is... well, less so. After The Game Awards boss Geoff Keighley shared a brief video of the demo in action on X / Twitter, hundreds of people responded, with few having anything positive to say. "Man, I don't want the future of games to be AI-generated slop," said one Redditor. "There will be a point where it will be easier to use AI, and then all the greedy studios will do it exclusively. The human element will be removed. "And the worst part is gamers will buy it. They buy skins for 100 dollars. They will buy whatever you sell them." 
"Microsoft's boast that they want 'to build a whole catalog of games that use this new AI model,' despite it not being clear if the current technique will ever even be capable of letting you turn around without moving to a random point on the map, let alone come up with an original game, really typifies what's wrong with AI and the tech industry," added another. "I don't know why everything has to be doom and gloom," said a more cheery respondent. "It's a demo for a reason. It shows the future possibilities. Having an AI that is able to create a coherent and consistent world is crazy. But this cannot be used to create a full game or anything enjoyable. You cannot play this. Seems like a tool for early concept/pitching phase. This can also bring improvement in other fields in AI as what it is doing is impressive. This is not even a product yet but a demo showing how much they've improved from just a few months ago." Epic Games boss Tim Sweeney had a rather different response. Generative AI is one of the hottest topics within the video game and entertainment industries, which have both suffered massive layoffs in recent years. It has drawn criticism from players and creators due to a mix of ethical issues, rights issues, and AI's struggles to produce content audiences actually enjoy. For instance, Keywords Studios attempted to create an experimental game internally using AI alone. The game failed, with Keywords telling investors that AI was "unable to replace talent." Still, that hasn't put off a number of video game companies from using generative AI in the development of their products. Activision recently disclosed the use of generative AI for some Call of Duty: Black Ops 6 assets as part of new requirements on Steam, amid a backlash to an "AI slop" zombie Santa loading screen. And last month, Horizon actor Ashly Burch addressed a controversial AI Aloy video that leaked online, using it to call attention to the demands of striking voice actors.
[21]
Microsoft presents an AI capable of generating Quake II scenarios in real time and it is terrible - Softonic
You need a top-of-the-line computer to play a worse version of a game that runs on your microwave. Microsoft has presented an interactive demo inspired by Quake II, which uses a generative artificial intelligence model to create visuals and actions in real time. Under the name of Copilot Gaming Experience, the demonstration aims to offer a new way to interact with video games. However, the reality of the execution leaves much to be desired. Despite the billions of dollars invested in its research and development, the visual quality is low and there are performance issues, raising doubts about its viability. The technology, known as the World and Human Action Model, aims to simulate player behaviors and create dynamic environments. Without defined levels or goals, the game environment is in constant change, which can disorient players. Several users have experienced physical discomfort, such as dizziness, in this demo, raising concerns about player well-being when using this type of technology. Critics also point out that the exorbitant spending on the development of these kinds of projects raises ethical questions about the direction the video game industry is taking. Despite the boldness involved in integrating artificial intelligence into video games, it does not seem to have met the quality expectations that modern players demand. Comparing the current situation with Google's failed streaming platform, Stadia, the question arises whether technological demonstrations, no matter how impressive, really provide anything to the consumer. With the current wave of AI, some are questioning whether the industry is rushing to adopt technologies that are not yet up to the promises made, leaving players dissatisfied in the process. Undoubtedly, the Copilot Gaming Experience has left a bitter taste, prompting reflection on the future of video games and the role of technology in their evolution.
Microsoft releases a browser-based, AI-generated version of Quake II as a tech demo for its Muse AI model, showcasing potential applications in game development and preservation while highlighting current limitations.
Microsoft has unveiled a groundbreaking tech demo featuring an AI-generated version of the classic first-person shooter Quake II, playable directly in web browsers. This demonstration showcases the capabilities of Microsoft's Muse AI model, part of their Copilot for Gaming initiative 1.
The demo is powered by WHAMM (World and Human Action MaskGIT Model), an update to Microsoft's earlier WHAM-1.6B model. WHAMM uses a MaskGIT-style setup that allows for parallel token generation, significantly reducing the time required for visual output 3.
The AI-generated Quake II runs at a resolution of 640 x 360, an improvement from the earlier 300 x 180 resolution. While the frame rate is still limited, hovering in the low to mid-teens, it represents a significant advancement in real-time AI-generated gaming 2.
Players can interact with the AI-generated world using keyboard controls, performing actions such as moving, jumping, crouching, and shooting. However, Microsoft researchers emphasize that this should be considered "playing the model" rather than playing the game itself 1.
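The "playing the model" loop can be sketched conceptually as follows. The class name, frame placeholders, and the frame-rate assumption behind the context size are all illustrative assumptions, not Microsoft's actual interface: instead of a game engine updating true world state, a generative model predicts each next frame from the player's input plus a short window of recent frames, and that short window is what causes the model to forget objects that stay out of view.

```python
from collections import deque

class WorldModelSim:
    """Conceptual sketch of 'playing the model': a generative model
    predicts the next frame from the player's action plus a short
    window of recent frames. The small context window mimics why such
    models forget off-screen objects after a fraction of a second."""

    def __init__(self, context_frames=9):   # ~0.9 s of memory at an assumed 10 fps
        self.history = deque(maxlen=context_frames)

    def step(self, action):
        # real system: encode history + action, sample next-frame tokens
        frame = f"frame_after_{action}"     # placeholder for generated pixels
        self.history.append(frame)          # older frames fall out of context
        return frame

sim = WorldModelSim()
last = None
for action in ["forward", "jump", "crouch", "shoot"]:
    last = sim.step(action)
```

Anything that leaves the rolling `history` window simply stops existing for the model, which is the mechanism behind the object-permanence quirks players have reported.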
The demo exhibits several limitations, including fuzzy enemy visuals, inaccurate damage and health counters, and a lack of object permanence: the model forgets objects that stay out of view for roughly 0.9 seconds or longer 1.
Microsoft Gaming CEO Phil Spencer has suggested that AI models like Muse could aid in game preservation, potentially making classic games portable to any platform without requiring the original engine or hardware 4.
However, some industry experts, like writer and game designer Austin Walker, argue that this approach may misunderstand the essence of what makes games engaging, particularly the unpredictable edge cases that arise from specific code and design choices 1.
While the current demo is limited in scope and quality, it represents a significant step in AI-generated gaming. Microsoft is likely to continue training Muse on more games and expanding its capabilities 2.
Other tech giants and startups are also exploring AI in gaming, with Google showcasing an AI-generated Doom simulation and Virtual Protocols demonstrating a text-to-video-powered version of Super Mario 4.
As AI continues to evolve, it may find its sweet spot in enhancing rather than replacing creative works, as seen in technologies like Nvidia's ACE for powering lifelike NPCs 3.
© 2025 TheOutpost.AI All rights reserved