23 Sources
[1]
Google Project Genie lets you create interactive worlds from a photo or prompt
Last year, Google showed off Genie 3, an updated version of its AI world model with impressive long-term memory that allowed it to create interactive worlds from a simple text prompt. At the time, Google only provided Genie to a small group of trusted testers. Now, it's available more widely as Project Genie, but only for those paying for Google's most expensive AI subscription. World models are exactly what they sound like -- an AI that generates a dynamic environment on the fly. They're not technically 3D worlds, though. World models like Genie 3 create a video that responds to your control inputs, allowing you to explore the simulation as if it were a real virtual world. Genie 3 was a breakthrough in world models because it could remember details of the world it was creating for a much longer time. But in this context, a "long time" is a couple of minutes. Project Genie is essentially a cleaned-up version of Genie 3, which plugs into updated AI models like Nano Banana Pro and Gemini 3. Google has a number of pre-built worlds available in Project Genie, but it's the ability to create new things that makes it interesting. You can provide an image for reference or simply tell Genie what you want from the environment and the character. The system first generates a still image, and from that you can generate the world. This is what Google calls "world sketching." If you don't like the reference image created by Nano Banana Pro, you can make changes before handing it off to Genie. The resulting video is 720p, rendering at around 24 frames per second. As you move your character around with WASD, Genie renders the path ahead in something approaching real time. If that 60-second jaunt into the AI world isn't enough, you can just run the prompt again. Because this is generative AI, the results will be a little different each time. Google also lets you "remix" its pre-built worlds with new characters and visual styles. 
The video generated of your exploration is available for download as well.

Still an experiment

Google stresses that Project Genie is still just a research prototype, and there are, therefore, some notable limitations. As anyone who has used Google Veo or OpenAI Sora to create AI videos will know, it takes a few seconds to create even a short clip. So, it's impressive that Genie can make it feel interactive at all. However, there will be some input lag, and you can only explore each world for 60 seconds. In addition, the promptable events feature previously demoed for Genie 3, which allows inserting new elements into a running simulation, is not available yet. While Google has talked up Genie's ability to accurately model physics, the company notes that testers will probably see examples of worlds that don't look or behave quite right. Testers may also see changing restrictions on content. The Verge was able to test Project Genie, and initially, it was happy to generate knock-offs of Nintendo games like Super Mario and The Legend of Zelda. By the end of the test, The Verge reports that some of those prompts were being blocked due to "interests of third-party content providers." Project Genie is only accessible from a dedicated web app -- it won't be plugged into the Gemini app or website. You can only access this tool for the time being with an AI Ultra subscription, which runs $250 per month. Generating all this AI video is expensive, so it makes sense to start with the higher tier. Google says its goal is to open up access to Project Genie over time.
[2]
I built marshmallow castles in Google's new AI world generator
Google DeepMind is opening up access to Project Genie, its AI tool for creating interactive game worlds from text prompts or images. Starting Thursday, Google AI Ultra subscribers in the U.S. can play around with the experimental research prototype, which is powered by a combination of Google's latest world model Genie 3, its image generation model Nano Banana Pro, and Gemini. Coming five months after Genie 3's research preview, the move is part of a broader push to gather user feedback and training data as DeepMind races to develop more capable world models. World models are AI systems that generate an internal representation of an environment, and can be used to predict future outcomes and plan actions. Many AI leaders, including those at DeepMind, believe world models are a crucial step to achieving artificial general intelligence (AGI). But in the nearer term, labs like DeepMind envision a go-to-market plan that starts with video games and other forms of entertainment and branches out into training embodied agents (aka robots) in simulation. DeepMind's release of Project Genie comes as the world model race is beginning to heat up. Fei-Fei Li's World Labs late last year released its first commercial product called Marble. Runway, the AI video generation startup, has also launched a world model recently. And former Meta chief scientist Yann LeCun's startup AMI Labs will also focus on developing world models. "I think it's exciting to be in a place where we can have more people access it and give us feedback," Shlomi Fruchter, a research director at DeepMind, told TechCrunch via video interview, smiling ear-to-ear in clear excitement over Project Genie's release. DeepMind researchers that TechCrunch spoke to were upfront about the tool's experimental nature. It can be inconsistent, sometimes impressively generating playable worlds, other times producing baffling results that miss the mark. Here's how it works. 
You start with a "world sketch" by providing text prompts for both the environment and a main character, whom you'll later be able to maneuver through the world in either first- or third-person view. Nano Banana Pro creates an image based on the prompts that you can, in theory, modify before Genie uses the image as a jumping-off point for an interactive world. The modifications mostly worked, but the model occasionally stumbled and would give you purple hair when you asked for green. You can also use real-life photos as a baseline for the model to build a world on, which, again, was hit or miss. (More on that later.) Once you're satisfied with the image, it takes a few seconds for Project Genie to create an explorable world. You can also remix existing worlds into new interpretations by building on top of their prompts, or explore curated worlds in the gallery or via the randomizer tool for inspiration. You can then download videos of the world you just explored. DeepMind is only granting 60 seconds of world generation and navigation at the moment, in part due to budget and compute constraints. Because Genie 3 is an auto-regressive model, it takes a lot of dedicated compute, which puts a tight ceiling on how much DeepMind is able to provide to users. "The reason we limit it to 60 seconds is because we wanted to bring it to more users," Fruchter said. "Basically when you're using it, there's a chip somewhere that's only yours and it's being dedicated to your session." He added that extending it beyond 60 seconds would diminish the incremental value of the testing. "The environments are interesting, but at some point, because of their level of interaction, the dynamism of the environment is somewhat limited. Still, we see that as a limitation we hope to improve on."

Whimsy works, realism doesn't

When I used the model, the safety guardrails were already up and running.
I couldn't generate anything resembling nudity, nor could I generate worlds that even remotely sniffed of Disney or other copyrighted material. (In December, Disney hit Google with a cease-and-desist, accusing the firm's AI models of copyright infringement by training on Disney's characters and IP and generating unauthorized content, among other things.) I couldn't even get Genie to generate worlds of mermaids exploring underwater fantasy lands or ice queens in their wintery castles. Still, the demo was deeply impressive. The first world I built was an attempt to live out a small childhood fantasy, in which I could explore a castle in the clouds made up of marshmallows with a chocolate sauce river and trees made of candy. (Yes, I was a chubby kid.) I asked the model to do it in claymation style, and it delivered a whimsical world that childhood me would have eaten up, the castle's pastel-and-white colored spires and turrets looking puffy and tasty enough to rip off a chunk and dunk it into the chocolate moat. (Video above.) That said, Project Genie still has some kinks to work out. The models excelled at creating worlds based on artistic prompts, like using watercolors, anime style or classic cartoon aesthetics. But it tended to fail when it came to photorealistic or cinematic worlds, often coming out looking like a video game rather than real people in a real setting. It also didn't always respond well when given real photos to work with. When I gave it a photo of my office and asked it to create a world based on the photo exactly as it was, it gave me a world that had some of the same furnishings of my office - a wooden desk, plants, a grey couch - laid out differently. And it looked sterile, digital, not lifelike. When I fed it a photo of my desk with a stuffed toy, Project Genie animated the toy navigating the space, and even had other objects occasionally react as it moved past them. That interactivity is something DeepMind is working on improving. 
There were several occasions when my characters walked right through walls or other solid objects. When DeepMind released Genie 3 initially, researchers highlighted how the model's auto-regressive architecture meant that it could remember what it had generated, so I wanted to test that by returning to parts of the environment it generated already to see if it would be the same. For the most part, the model succeeded. In one case, I generated a cat exploring yet another desk, and only once when I turned back to the right side of the desk did the model generate a second mug. The part I found most frustrating was the way you navigated the space using the arrows to look around, the spacebar to jump or ascend, and the W-A-S-D keys to move. I'm not a gamer, so this didn't come naturally to me, but the keys were often non-responsive, or they sent you in the wrong direction. Trying to walk from one side of the room to a doorway on the other side often became a chaotic zigzagging exercise, like trying to steer a shopping cart with a broken wheel. Fruchter assured me that his team was aware of these shortcomings, reminding me again that Project Genie is an experimental prototype. In the future, he said, the team hopes to enhance the realism and improve interaction capabilities, including giving users more control over actions and environments. "We don't think about [Project Genie] as an end-to-end product that people can go back to everyday, but we think there is already a glimpse of something that's interesting and unique and can't be done in another way," he said.
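The auto-regressive behavior described above, where the model remembers what it already generated because each new frame is predicted from the full history of frames and user actions, can be sketched in a few lines. This is a hedged illustration only: the class and method names below are hypothetical, and the real Genie 3 conditions a large neural network on pixels and actions, not strings.

```python
# Toy sketch of an auto-regressive world-model loop (hypothetical names,
# not a Google API). Each "frame" depends on everything generated so far
# plus the latest user action, which is why backtracking stays consistent
# and why each session needs dedicated compute.

from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Stand-in model: 'predicts' the next frame from history + action."""
    history: list = field(default_factory=list)  # frames generated so far

    def predict_frame(self, action: str) -> str:
        # A real model runs a neural net conditioned on all prior frames
        # and actions; here we just record the trajectory to show the shape.
        frame = f"frame_{len(self.history)}_after_{action}"
        self.history.append(frame)
        return frame

model = WorldModel()
fps, seconds = 24, 60           # 720p at ~24 fps, capped at 60 s per session
actions = ["W", "A", "S", "D"]  # user key presses steer the generation

frames = [model.predict_frame(actions[i % 4]) for i in range(fps * seconds)]
print(len(frames))  # 1440 frames for a full 60-second session
```

The loop also makes the compute ceiling concrete: a 60-second session at 24 fps is roughly 1,440 sequential model calls, each conditioned on a growing history, which is why DeepMind dedicates a chip per session.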
[3]
Google's AI helped me make bad Nintendo knockoffs
It was all possible thanks to Project Genie, an experimental research prototype that Google gave me access to this week, though I don't think I'm using it in exactly the way Google intended. Google DeepMind has been putting a lot of effort into building its AI "world" models that can generate virtual interactive spaces with text or images as prompts. The company announced its impressive-looking Genie 3 model last year, but it was only available as "a limited research preview" at the time. Project Genie, which will be rolling out to Google AI Ultra subscribers in the US starting today, will be the first opportunity for more people to actually try out what Genie 3 is capable of.
[4]
Google's New AI Tool Lets You Build Virtual Worlds for Training or Just Fun
Google DeepMind's new Project Genie AI tool is something a little different. Instead of answering chatbot prompts more (or even less) effectively than the last model, this one lets you craft an entire 3D world to explore with just a short text prompt. It's designed to make it easier to train AI agents in 3D navigable environments, but for now, the coolest way to use it seems to be making Nintendo knock-offs, as The Verge reports. We first heard about the Genie 3 AI model behind this latest project in August, when Google talked up its new 720p resolution and improved visual consistency. It can hold a world together for a few minutes at a time, allowing you to quickly whip up just about any 3D environment you can imagine. It's not the same as a designed and rendered level from game designers, artists, and programmers, but it offered a window into how we might make video games in the future. While the backlash to even AI concept art in the game design process suggests most people aren't up for playing Slop Simulator-2026, Project Genie is an impressive piece of technology. It's an important step toward more generally intelligent AI and better-functioning physical AI, like robots, which can leverage world models to learn more about the 3D worlds they interact with. In its announcement, Google also showcases how to build virtual worlds from source images for inspiration. Snap a picture of a cardboard cutout on an engineering table, and it becomes a little animated cardboard pal wandering around and exploring that same world, only virtually. The same goes for pets, robots, or anything else you can think of. You can see how this could be useful for anyone developing robotics: quickly build virtual versions of your test environment to train the robots. Once they've managed that, you can create a more challenging environment for them to try without risking damage to your hardware.
If you'd like to try Project Genie, you'll need to be an adult (18+) and have a Google AI Ultra subscription, which costs $250 per month. That's not pocket change for an experimental tool. As AI hardware efficiency improves, perhaps the price will come down in turn, though until anyone is actually making money with AI, I wouldn't hold my breath.
[5]
Google's Project Genie turns prompts into interactive worlds
A Labs prototype turns prompts into short, explorable 3D worlds

Google has put the video gaming industry on notice with the rollout of Project Genie, an experimental AI world-model prototype that lets AI Ultra subscribers generate and explore interactive 3D environments from text or image prompts. Google's Genie AI is nothing new (we've been reporting on its existence since 2024), but its appearance in Project Genie, now available to Google AI Ultra subscribers in the US, is. Like many of Google's other experimental efforts, Project Genie is rolling out through Google Labs, where users can generate and explore short interactive environments from text or image prompts. Built on DeepMind's Genie 3 world-model research, the prototype lets users move through AI-generated scenes in real time and regenerate variations using revised prompts, rather than serving as a full game engine or production tool. Demos on Google's Project Genie website show various examples, like a cat exploring a living room from the back of a Roomba, a car exploring the surface of a rocky moon, and a person in a wingsuit flying down the side of a mountain. All of the worlds can be navigated in real time, and while they're generated as characters move through them, the worlds are consistent, so backtracking won't result in new areas being generated on top of old ones. Any changes an agent makes to the world will remain, at least for as long as the hardware rendering the world retains space in its memory. "Genie 3 environments are ... 'auto-regressive' - created frame by frame based on the world description and user actions," Google explains on Genie's website. "The environments remain largely consistent for several minutes, with memory recalling changes from specific interactions for up to a minute." After those few minutes, however, things get a bit tricky.
"The model can support a few minutes of continuous interaction, rather than extended hours," Google said, noting elsewhere that generation is currently limited to 60 seconds, though it's not clear what happens after time is up. A Google spokesperson told The Register that Genie can create renderings for longer than 60 seconds, but the company "found 60 seconds provides a high quality and consistent world, and it gives people enough time to explore and experience the environment." Google told us that world consistency will last through a whole session. It's not clear if Google will extend session time later. Regardless, that's not Genie's only current limitation. Agents interacting with the generated worlds can only perform a limited range of actions for now, and multiple agents in the same world have trouble interacting. Genie also has trouble rendering legible text, can't really simulate real-world locations accurately (it's not like Google has a massive collection of 3D maps of notable locations, after all), and Google says that agents sometimes lag and don't respond properly to controls. In addition, "A few of the Genie 3 model capabilities we announced in August, such as promptable events that change the world as you explore it, are not yet included in this prototype," Google added. Still, the company said it expects Genie to offer a path toward artificial general intelligence, which it says needs to be able to interact with evolving worlds in order to properly reason. "A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them," the company said of Genie. "While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world." Before it helps Google develop AGI, the company also sees uses for Genie in the gaming industry, something that ought to concern the already-struggling developers in that space.
While Genie "is not a game engine and can't create a full game experience," a Google spokesperson told The Register, "we are excited to see the potential to augment the creative process, enhancing ideation, and speeding up prototyping." According to a report from Informa's Game Developers Conference published this week, 33 percent of surveyed US game developers, and 28 percent globally, reported being subject to at least one layoff in the past two years. Half of these game devs also said that their current or most recent employer had conducted layoffs in the past 12 months. In other words, the gaming space is suffering, and many are worried AI will only add to that. Of the game industry professionals surveyed by GDC, 52 percent said they think AI is having a negative impact on the games industry. That's a sharp increase from last year, when 30 percent of respondents said the same. The year before that, just 18 percent expressed negative feelings about generative AI in the games industry. Professionals in visual and technical art, game design and narrative, and programming roles held the most unfavorable views of AI. In the words of one machine learning ops employee in the gaming space, however, an AI like Genie is coming for the gaming industry. "We are intentionally working on a platform that will put all game devs out of work and allow kids to prompt and direct their own content," the GDC study quotes the respondent as saying. It might have its rough edges and limitations now, but given the speed of AI product improvement, Genie could soon make that prediction come true.
[6]
Google's Project Genie lets you generate your own interactive worlds
This past summer, Google DeepMind debuted Genie 3. It's what's known as a world model, an AI system capable of generating images and reacting as the user moves through the environment the software is simulating. At the time, DeepMind positioned Genie 3 as a tool for training AI agents. Now, it's making the model available to people outside of Google to try with Project Genie. To start, you'll need Google's $250 per month AI Ultra plan to check out Project Genie. You'll also need to live in the US and be 18 years or older. At launch, Project Genie offers three different modes of interaction: world sketching, exploration and remixing. The first sees Google's Nano Banana Pro model generating the source image Genie 3 will use to create the world you will later explore. At this stage, you can describe your character, define the camera perspective -- be it first-person, third-person or isometric -- and how you want to explore the world Genie 3 is about to generate. Before you can jump into the model's creation, Nano Banana Pro will "sketch" what you're about to see so you can make tweaks. It's also possible to write your own prompts for worlds others have used Genie to generate. One thing to keep in mind is that Genie 3 is not a game engine. While its outputs can look game-like, and it can simulate physical interactions, there aren't traditional game mechanics here. Generations are also limited to 60 seconds, as is the presentation, which is capped at 24 frames per second and 720p. Still, if you're an AI Ultra subscriber, this is a cool opportunity to see the bleeding edge of what DeepMind has been working on over the past couple of years.
[7]
Google's Project Genie Is Not for You
Google has a whole new world for people to play in, but only for a minute. This week, the company released Project Genie, which it calls a "general-purpose world model" capable of generating interactive environments. First unveiled to a small group of invite-only testers back in August of last year, the model, known as Genie 3, is now rolling out to Google AI Ultra subscribers in the US, so you can get your hands on it for the low, low price of $250 per month. The fact that Google is showing off a world model is interesting on its own. Unlike large language models (LLMs) -- the underlying technology that powers most consumer-facing AI tools, including Google's own Gemini, which use the vast amount of training data they are given to predict the most likely next part of a sequence -- world models are trained on the dynamics of the real world, including physics and spatial properties, to create a simulation of how physical environments operate. World models are the approach to AI favored by Yann LeCun, the former chief scientist of Meta AI. LeCun believes (probably correctly) that LLMs will never be able to achieve artificial general intelligence, the point at which AI is able to match or exceed human capabilities across all domains. Instead, he believes world models can chart a path to that end goal, and he's recently joined a startup that is going all in on that bet. It's an oversimplification, but the idea is essentially that LLMs can only recognize patterns, whereas world models would allow AI to run tons of simulations to understand how the world works and extrapolate new conclusions. Google playing in this world certainly provides some legitimacy to the idea that world models offer something that LLMs can't, and there is no denying that the preview videos that have come out of Project Genie's early days are quite visually impressive, albeit short.
Google is capping users at generating 60 seconds' worth of their world, which the company also says "might not look completely true-to-life or always adhere closely to prompts or images, or real-world physics," which is to say, it might not work. Outputs are currently 720p videos rendered at 24 frames per second, per Ars Technica, and users have complained at times that it's quite laggy in practice. That's fine for something in beta, though it does speak to the limitations of the company's model, suggesting the world might be smaller than you'd imagine. While users have been hyping up the feature as if it's about to put video game developers out of business, it's probably worth pumping the brakes on that concern for the time being. Google's Genie 3 also takes a different approach to world models than what LeCun has imagined. The model, available through Project Genie, essentially creates a continuous video-based world. Users can navigate that like a video game, but in theory, AI agents could also endlessly run through those worlds to understand how things work. LeCun's idea when he was at Meta was to create a Joint Embedding Predictive Architecture (JEPA), which embeds a model of the outside world in an AI agent. But again, the fact that Google is pushing a world model says something. Yes, the company is going to run into all of the same issues that have come from the release of other image and video generation models like OpenAI's Sora 2, which was used to commit all sorts of likely copyright infringement. Early Project Genie outputs are reliably replicating Nintendo worlds, for instance, and that's probably going to cause some issues. But it also suggests that even the biggest players in this AI space recognize that LLMs may eventually hit a wall. That said, there's a reason Google has put a hard cap on Project Genie for the time being.
If you think it costs a lot to train and operate a text-based model, just imagine what creating a fully generated simulation of the world requires. It needs tons of high-dimensional data to understand everything from how a world looks to how physics works, and requires lots of processing power to run. That's why, for now, the worlds might look vast but are being kept quite small in practice.
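The distinction the piece draws between LLMs and world models comes down to the function each one learns: next-token prediction from a sequence alone, versus next-state prediction from a state plus an action. A toy sketch (all names are hypothetical illustrations, not any real API) makes the difference in interface concrete:

```python
# Hedged illustration of the two prediction interfaces the article contrasts.
# Both functions are simplified stand-ins: real models are neural networks,
# not lookup logic.

def llm_step(tokens: list[str]) -> str:
    """LLM-style: the next item depends only on the preceding sequence."""
    return f"token_{len(tokens)}"

def world_model_step(state: dict, action: str) -> dict:
    """World-model-style: the next state depends on the current state AND
    an agent's action, which is what lets agents plan in simulation."""
    x, y = state["pos"]
    moves = {"W": (0, 1), "S": (0, -1), "A": (-1, 0), "D": (1, 0)}
    dx, dy = moves.get(action, (0, 0))
    return {**state, "pos": (x + dx, y + dy)}

# An agent can roll the simulation forward to test a plan before acting.
state = {"pos": (0, 0)}
for a in ["W", "W", "D"]:
    state = world_model_step(state, a)
print(state["pos"])  # (1, 2)
```

The action argument is the whole story: because outcomes are conditioned on what the agent does, a world model can be queried "what happens if I do X?" in a way a pure sequence predictor cannot, which is why the processing cost scales with every simulated rollout.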
[8]
Google AI lets you explore any virtual world you can dream up instantly
You can not only navigate through immersive worlds, but also interact with objects in them - like how this ball is leaving a trail of paint behind everything it rolls over

Google's just begun opening up access to an AI model I can actually get behind. This one lets you generate a virtual world of any kind and travel through it with a vehicle or character like in a video game - all with text prompts or images you upload. That could be a spaceship flying over an alien planet, a blimp over a European city set in the 1950s drawing from a photo, or a tapir running through the far reaches of the Amazon rainforest. It's all in a web app called Project Genie, which you can access with a Google AI Ultra account in the US provided you're above age 18. This app is based on the Genie 3 model that Google showed off back in August 2025, when it was only available to a limited set of testers. It uses the company's Nano Banana Pro image generation model as well as Gemini to turn your text prompts into immersive experiences. The idea is for you to traverse freely through whatever environment you can dream up, interact with objects in it, and even observe reactive phenomena, like the map on a GPS navigation device updating as you turn in different directions. These models generate frames for your virtual world on the fly, based on how you make your character move around and adjust the camera. It's neat that you can upload your own images - whether that's a character you've drawn, or a photo of an object in the real world - to use in your experience, and even dictate how some elements will interact with each other. In the promo video above, you can see a compelling example where a blue ball 'paints' everything it rolls over as it makes its way through a field of white grass. There's also a library of worlds you can remix to get started quickly.
YouTuber Bilawal Sidhu interviewed Jack Parker-Holder and Diego Rivas from the team behind Project Genie at Google DeepMind, and noted in some live examples that there were occasional bugs, and that the restriction to just 60 seconds of walkthrough time is a major limitation at the moment. That said, there's plenty to be excited about here. The team hasn't yet defined exactly what use cases this will be best suited for, but believes it'll likely come in handy for quickly prototyping video game concepts, visualizing scenes and set pieces for filmmaking, and bringing ideas to life in the classroom. One compelling example in the education space that Parker-Holder and Rivas described was allowing students to get a sense of what working in different professions might be like, such as assisting in disaster recovery. The fact that these applications are all immediately accessible to people without the need for specialized training is quite something. The developers say the Genie model can evolve to allow for more control over generated environments from users' inputs, including actions that your character can take as they explore. They're also looking into enabling worlds to persist for more than 60 seconds, and they'll continue to listen to user feedback to understand which capabilities to invest effort into building next. I haven't gotten a chance to try it from my current base in India yet, but I'm excited to give this a go when it becomes more broadly available, especially to see what elements and interactions it enables on its own when I enter prompts based on real-world spaces. I'm also curious to learn how Google builds guardrails to prevent Project Genie from being misused to generate harmful and inappropriate content, and what systems it'll have in place to prevent copyright infringement. If you're in the US and have a Google AI Ultra subscription, you can check out Project Genie right away.
[9]
Make a video game where you play as your own cat in Google's new virtual world creator
While Project Genie lets you create and explore worlds, it's currently limited to AI Ultra subscribers. There's no denying that generative AI can produce some impressive results, and while it's easy to see that success and imagine some even grander possibilities, actually distilling all that promise down into an easy-to-use, reliable tool can prove particularly challenging. Last year, Google showed off its Genie 3 world-building engine -- give it a prompt, and it could create realistic scenes, ready to explore. And now it's finally your chance to try this magic out for yourself, as Google launches Project Genie.
[10]
Google lets you conjure entire interactive worlds and step into them
We've seen Google do some pretty wild things over the past couple of years when leveraging AI. But its newly announced Project Genie has to be up there on that list, giving users a look at the future (via 9to5Google). For the time being, this is just a small experiment, as Google puts it, allowing users to "create, explore and remix their own interactive worlds." And judging from the demo video attached, this is quite an impressive experiment, to say the very least.

You're going to want to try this

While I'd love to give this a try myself, it's currently open only to Google AI Ultra subscribers in the United States. Just in case you weren't aware, that's Google's top-tier plan that costs $250/month. So, as you can imagine, access to this feature will be a bit limited for now. But judging from the video, it seems pretty simple to use. By just entering a prompt, you can create your own interactive world. You can navigate within these worlds, and take things further by uploading your own images to really customize the experience. What makes this all the more impressive is that it's all happening in real time. Google breaks down Project Genie into three parts: world sketching, exploration, and remixing. The interesting part of all of this is that you can fine-tune the world with new prompts.

Google is being responsible

Like many of its other AI projects, Google is being as responsible as it can when it comes to the tools available. After all, the brand has pledged to "build AI responsibly to benefit humanity." With that said, Google does share that Project Genie has room for improvement.
Perhaps the biggest one is that the experience is currently limited to just 60 seconds. Google also shares that its generated worlds may not look true to life, and that the controls might not be as responsive as they should be. Then there's the response of characters and objects in the generated worlds, as they might not "adhere to real-world physics." For now, this is pretty good Again, watching the demo video will leave most people speechless. It's an impressive piece of tech that's available to users right now. Naturally, as time passes, we expect this or something similar to be available to all Google AI subscribers. But for now, you'll need to be an Ultra member in order to experiment with Google's latest tool. And, as stated before, you will need to be in the US, and Google also has an age limit for this feature, so you must be 18+ to access it.
[11]
Google's Project Genie lets you create your own 3D interactive worlds
This past summer, Google DeepMind debuted Genie 3. It's what's known as a world model, an AI system capable of generating images and reacting as the user moves through the environment the software is simulating. At the time, DeepMind positioned Genie 3 as a tool for training AI agents. Now, it's making the model available to people outside of Google to try with Project Genie. To start, you'll need Google's $250 per month AI Ultra plan to check out Project Genie. You'll also need to live in the US and be 18 years or older. At launch, Project Genie offers three different modes of interaction: World Sketching, exploration and remixing. The first sees Google's Nano Banana Pro model generating the source image Genie 3 will use to create the world you will later explore. At this stage, you can describe your character, define the camera perspective -- be it first-person, third-person or isometric -- and how you want to explore the world Genie 3 is about to generate. Before you can jump into the model's creation, Nano Banana Pro will "sketch" what you're about to see so you can make tweaks. It's also possible to write your own prompts for worlds others have used Genie to generate. One thing to keep in mind is that Genie 3 is not a game engine. While its outputs can look game-like, and it can simulate physical interactions, there aren't traditional game mechanics here. Generations are also limited to 60 seconds, as is the presentation, which is capped at 24 frames per second and 720p. Still, if you're an AI Ultra subscriber, this is a cool opportunity to see the bleeding edge of what DeepMind has been working on over the past couple of years.
[12]
Google rolling out 'Project Genie' to generate playable worlds
Genie 3 is a "general-purpose world model capable of generating diverse, interactive environments." Google is now letting AI Ultra subscribers in the US access it with Project Genie. A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them. While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world. This experimental research prototype has you describe your environment ("What does your world look like?"), including how you want to explore it -- walking, riding, flying, driving, etc. -- and a first- or third-person point of view. After specifying your character ("Is it a person, animal, object, or something else?"), Project Genie creates a preview image, or World Sketch, using Nano Banana Pro. This lets you preview what "your world will look like and modify your image to fine tune your world prior to jumping in." You then "Create world," with users limited to 60-second sessions. The photorealistic worlds are in 720p resolution, while you get interaction rates of 20-24 frames per second. When you move, Genie 3 (first previewed in August) "generates the path ahead in real time based on the actions you take." Google is simulating the physics and interactions with "breakthrough consistency." You can adjust the camera as you interact with the world, with the ability to download videos of your walkthrough. Another feature lets you remix existing worlds into new interpretations by building on top of their prompts. You can also explore curated worlds in the gallery or use the randomizer for inspiration, and build on top of them. Besides the 60-second limitation, Google also warns that worlds may not always look true to life, closely adhere to prompts, or follow real-world physics. Google is working to improve Project Genie with "promptable events that change the world as you explore it."
This demo will allow Google to "better understand how people will use world models in many areas of both AI research and generative media." Access begins "rolling out today to Google AI Ultra subscribers in the U.S. (18+), expanding to more territories in due course." Google adds that "our goal is to make these experiences and technology accessible to more users." More broadly, world models are part of Google DeepMind's AGI mission. Simulating real-world scenarios has practical applications ranging "from robotics and modelling animation and fiction, to exploring locations and historical settings."
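The interaction loop these articles describe (frames generated ahead of the player at roughly 20-24 fps, sessions capped at 60 seconds) can be sketched abstractly. The following is an illustrative toy, not Google's implementation: `ToyWorldModel`, `step`, and `play` are hypothetical stand-ins that only show the autoregressive structure of a world model.

```python
class ToyWorldModel:
    """Hypothetical stand-in for an autoregressive world model.

    A real world model (such as Genie 3) would condition a video
    generator on the full history of frames and actions to stay
    consistent; here we only track that history so the loop
    structure is visible.
    """

    def __init__(self, prompt, seed_image=None):
        self.prompt = prompt          # text description of the world
        self.seed_image = seed_image  # optional reference sketch/photo
        self.history = []             # frames the model must stay consistent with

    def step(self, action):
        """Generate the next frame given the player's input."""
        # Placeholder "frame": in reality this would be a 1280x720
        # image predicted from the prompt, seed image, and history.
        frame = {"t": len(self.history), "action": action}
        self.history.append(frame)
        return frame


def play(model, actions, fps=24, max_seconds=60):
    """Run a capped session: one frame per action, at most fps * max_seconds frames."""
    frames = []
    for action in actions[: fps * max_seconds]:
        frames.append(model.step(action))
    return frames


world = ToyWorldModel("a mossy forest path at dawn")
session = play(world, ["W", "W", "A", "S", "D"])  # WASD-style inputs
print(len(session))  # 5 frames, one per input
```

The point of the sketch is the dependence of each frame on everything generated before it, which is what makes longer sessions (and the 60-second cap) the hard part.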
[13]
Google Unleashes Project Genie, Letting Users Create Virtual Worlds From Still Photos
Google has begun rolling out Project Genie, a generative AI model that can create virtual worlds from text prompts or just a single photo. Previewed back in August, Genie, from Google DeepMind, is a world-building tool in which users create their own environments that expand in real time. Users create their own characters and choose how to get around the worlds they make -- from walking to riding, flying to driving, or anything they can think of. Users control the characters like a video game. Powered by Google's Nano Banana Pro and Gemini, the "World Sketching" feature lets users preview the world they're about to jump into and fine-tune it. "You can also define your perspective for the character -- such as first-person or third-person -- giving you control over how you experience the scene before you enter," Google says in a blog post. Project Genie is still a work in progress. Google warns that the worlds "might not look completely true-to-life or always adhere closely to prompts or images, or real-world physics." There's also a 60-second limit to generations. Back in August, Google said that there would be "promptable events" that alter the world as the user ventures through. That feature is not yet available in the prototype being rolled out today. It is exclusively available to Google AI Ultra subscribers in the U.S. There's also a minimum age requirement of 18. At $250 per month, Google AI Ultra is the most expensive subscription tier. Only a lucky few will be able to get their hands on the model imminently. It would be fascinating and potentially horrifying to see what would happen if a well-known photograph were used as a prompt. Photographers might also be curious to see what would happen if they used one of their own works as the starting point of a virtual world. People will undoubtedly use Genie to create virtual worlds of recognizable IP.
When OpenAI released Sora, its AI video generator, people immediately began taking famous characters from movies, TV, and pop culture and getting them to act out silly and outrageous scenes.
[14]
How to try Google Project Genie, a powerful new 'world model'
An example of a virtual world created by Project Genie. Credit: Google Google has launched a new AI experiment called Project Genie, a tool that lets users build their own interactive virtual worlds. Project Genie comes from the Google DeepMind research lab and is now available to Google AI Ultra subscribers in the United States (users must be 18 or older). If you're an AI Ultra subscriber -- the AI subscription plan is priced at $249.99 per month -- you can start building worlds right away by visiting Google Labs and navigating to Project Genie. A video released along with Project Genie shows exactly how users can create custom virtual worlds. Crucially, Project Genie doesn't just create a virtual environment; it also allows the user to create a character to explore and interact with the world. Users can even use prompts to create their own mini-games, using their keyboard's arrow keys to control the character. (One reporter at The Verge quickly realized he could use Project Genie to make Zelda and Super Mario knockoffs.) Google says Project Genie can generate worlds at 720p resolution and 24 frames per second. Considering how much time and labor go into creating video games, it's remarkable that Project Genie can create such detailed, interactive environments using AI. That said, Project Genie is primarily a research project, and while it can be entertaining to play with, it doesn't yet do anything practical. (Keep reading to learn about potential future applications.) The created worlds have realistic physics, and objects in the world react as the user interacts with them. Users can also modify and remix their virtual worlds to their heart's content. Project Genie is powered by Genie 3, which is a powerful "world model." (Google says that Nano Banana Pro and Gemini are also used to generate worlds.) A world model is an AI program that can generate a virtual world from text, images, and other inputs.
Google actually teased Genie 3 back in August 2025, calling it "a key stepping stone on the path to AGI." The new landing page for Project Genie elaborates: "Genie 3 represents a major leap in capabilities - allowing agents to predict how a world evolves, and how their actions affect it. Genie 3 makes it possible to explore an unlimited range of realistic environments. This is a key stepping stone on the path to AGI - enabling AI agents capable of reasoning, problem solving, and real-world actions." Artificial general intelligence, or AGI, is a term for a hypothetical AI tool that can perform most tasks as capably as (or better than) the average human. That means it could do your job with little or no monitoring. To achieve true AGI, AI companies need to build models that can make sense of the environment and understand how to interact with it. World models are an emerging frontier in AI research, and companies building world models have attracted significant investment over the past year. AI company WorldLabs recently raised $230 million in funding, while video model maker Luma AI raised $900 million. Besides AGI, what are the practical uses for world models? As one example, car companies could build a world to safely test autonomous vehicles. There are also possible applications for education and video game development.
[15]
Google's Genie AI will let you create a whole visual world of your imagination
Google's Project Genie lets AI generate interactive worlds from simple inputs, offering a glimpse at the future of immersive AI experiences, currently rolling out exclusively to AI Ultra subscribers in the U.S. Google has a long history of quietly building some of the most capable AI-based chatbots and services, but its newest experiment, called Project Genie, is unlike anything most of us have seen. Developed by Google DeepMind, Genie is a text-to-digital-world generator, if you will, capable of generating interactive digital environments. While we've been getting used to generating text, images, or even videos with short text-based prompts, Genie can quite literally convert a simple sketch, a photo, or even a concise prompt into a sandbox-style world on a computer you can move around in, through your own digital character. What is Project Genie anyway? No game engine, no coding, no 3D design abilities, and no top-grade hardware requirements. Project Genie is what Google calls a "world model," a generative AI model that renders the digital environment (in real-time) as you move around. In other words, it predicts your movement and its effects on the surroundings, and builds a world around it while accounting for the physics. Google has made this possible by combining three of its most advanced models: Genie 3, Gemini, and Nano Banana Pro. So, with the right prompt or the right image, you can create a simulation of any real-world scenario, "from robotics and modelling animation and fiction, to exploring locations and historical settings." How does AI-based digital world creation work? The interactive digital experience is based on three core capabilities: world sketching, world exploration, and world remixing. World sketching involves transforming a piece of text or an image into a living, expanding environment. Then, world exploration is about interacting with the elements in the digital world.
The model then figures out the cause-and-effect on its own. Last but not least, users can also remix existing worlds by taking inspiration from or building on them. Traditionally, creating interactive worlds has been very slow, technically demanding, and expensive, which is why open-world game developers (you know which one I'm talking about) might take more than a decade to launch new versions. Access reserved for Google's most expensive subscription tier Project Genie could practically revolutionize the industry by enabling quick game prototyping, simulation testing, and creative experimentation that require a fraction of the resources. The sad part, however, is that Project Genie isn't a polished consumer product (yet). For now, it is only available for Google AI Ultra subscribers in the United States who are 18 or older. It is the most expensive subscription tier Google currently offers ($249.99 per month). Given that the AI-based experience generator is currently under development, users shouldn't expect the generated worlds to be perfect. The character control could also feel clunky. But even so, Project Genie could be the beginning of something huge, a glimpse at a future where AI doesn't just generate content, but builds entire experiences on demand.
[16]
Project Genie: Experimenting with infinite, interactive worlds
In August, we previewed Genie 3, a general-purpose world model capable of generating diverse, interactive environments. Even in this early form, trusted testers were able to create an impressive range of fascinating worlds and experiences, and uncovered entirely new ways to use it. The next step is to broaden access through a dedicated, interactive prototype focused on immersive world creation. Starting today, we're rolling out access to Project Genie for Google AI Ultra subscribers in the U.S. (18+). This experimental research prototype lets users create, explore and remix their own interactive worlds. A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them. While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world. To meet this challenge and support our AGI mission, we developed Genie 3. Unlike explorable experiences in static 3D snapshots, Genie 3 generates the path ahead in real time as you move and interact with the world. It simulates physics and interactions for dynamic worlds, while its breakthrough consistency enables the simulation of any real-world scenario -- from robotics and modelling animation and fiction, to exploring locations and historical settings. Building on our model research with trusted testers from across industries and domains, we are taking the next step with an experimental research prototype: Project Genie. Project Genie is a prototype web app powered by Genie 3, Nano Banana Pro and Gemini, which allows users to experiment with the immersive experiences of our world model firsthand. The experience is centred on three core capabilities: world sketching, world exploration and world remixing.
[17]
Google introduces Project Genie virtual world generator - SiliconANGLE
Google LLC today introduced Project Genie, a tool that makes it possible to generate three-dimensional virtual environments using prompts. The tool is initially available in the U.S. to users with a Google AI Ultra subscription. The $250 per month plan offers several features not included in standard Google accounts. Users receive higher AI usage limits, 30 terabytes of cloud storage and a faster version of the company's Antigravity agentic coding tool. Project Genie is based on a world model called Genie 3 that Google debuted in August. It can generate interactive 3D environments based on natural language instructions provided by the user. According to Google, the model renders virtual worlds with a resolution of 1280 by 720 pixels at a rate of up to 24 frames per second. Project Genie enables users to interact with an AI-generated environment for up to 60 seconds at a time. According to The Register, the underlying Genie 3 model is capable of powering significantly longer user sessions. That hints that future updates to the tool may focus on increasing the maximum length of Project Genie interactions. Users can create a virtual world by entering instructions into two input fields. The first text box takes a description of the 3D environment as input, while the other makes it possible to describe the avatar that will navigate the environment. Users can customize not only the rendering style but also the camera angle. Project Genie turns user instructions into a preview sketch using Nano Banana Pro, an image generation model that Google released in November. The model has several features that lend themselves well to virtual world rendering. One of them is its ability to turn relatively simple sketches into photorealistic 3D objects. After Project Genie generates a preview of a virtual world, users can customize it further by entering additional instructions. Alternatively, they can fine-tune one of the pre-packaged designs that Google provides with the tool.
A download tool makes it possible to save virtual environment interactions as a video. "Your world is a navigable environment that's waiting to be explored," Google staffers Diego Rivas, Suz Chambers and Elliott Breece wrote in a blog post. "As you move, Project Genie generates the path ahead in real time based on the actions you take. You can also adjust the camera as you traverse through the world." Google plans to bring the tool to international markets down the road. Given that the company offers AI models through its public cloud, it's possible that a version of Project Genie will eventually become available to developers via an application programming interface. The kind of virtual environments that Project Genie renders can be used to generate visual training data for AI projects.
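The two-field creation flow described above (one prompt for the environment, one for the avatar, a preview sketch to refine, then the capped world session) can be summarized as a small state object. This is a hypothetical sketch of the workflow only: Project Genie exposes no public API, and every class, method, and field name here is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class WorldDraft:
    """Hypothetical model of Project Genie's two-stage creation flow."""
    environment_prompt: str          # first input field: describes the 3D environment
    avatar_prompt: str               # second input field: describes the avatar
    camera: str = "third-person"     # first-person, third-person, or isometric
    edits: list = field(default_factory=list)
    approved: bool = False

    def sketch(self):
        """Stage 1: an image model (Nano Banana Pro, per the article) renders a preview."""
        return (f"[preview sketch] {self.environment_prompt} / "
                f"{self.avatar_prompt} ({self.camera})")

    def refine(self, instruction):
        """Record a fine-tuning instruction; a real system would re-render the preview."""
        self.edits.append(instruction)

    def create_world(self):
        """Stage 2: the approved sketch seeds the world session (limits per the article)."""
        self.approved = True
        return {"source": self.sketch(),
                "resolution": "1280x720",
                "fps": 24,
                "session_cap_s": 60}


draft = WorldDraft("a rain-slicked neon city", "a courier on a bicycle")
draft.refine("make the streets wider")
world = draft.create_world()
print(world["session_cap_s"])  # 60
```

The design point worth noting is the separation of concerns the articles report: a cheap, editable still image is the unit of iteration, and the expensive real-time world generation only starts once the user commits.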
[18]
Google launches Project Genie: Create interactive worlds with AI
Google launched Project Genie on January 29, 2026, for Google AI Ultra subscribers aged 18 and older in the United States. This experimental research prototype, powered by the Genie 3 world model, Nano Banana Pro, and Gemini, enables users to create, explore, and remix interactive worlds using text prompts and images. Diego Rivas, Product Manager at Google DeepMind, Elliott Breece, Product Manager at Google Labs, and Suzanne Chambers, Director at Google Creative Lab, oversee the project. The prototype functions as a web app, allowing direct experimentation with the underlying world model technology. Genie 3 is the core model powering the prototype. It generates real-time paths as users move and interact within the environments. Users input text prompts alongside generated or uploaded images to build navigable worlds. The system supports sketching, exploration, and remixing of these creations. In August, Google previewed Genie 3 to trusted testers. This general-purpose world model generates diverse, interactive environments. Testers created numerous worlds and identified new applications. Project Genie represents the next phase, broadening access through an interactive prototype centered on immersive world creation. A world model simulates the dynamics of an environment. It predicts how the environment evolves and how user actions influence it. Google DeepMind maintains a history of developing agents for specific environments, such as Chess and Go. Extending this to general AI systems requires handling the diversity of the real world. Project Genie centers on three core capabilities. World sketching allows users to prompt with text and images to create living, expanding environments. Users define their character, the world, and exploration modes, including walking, riding, flying, driving, or other actions. Integration with Nano Banana Pro provides preview and fine-tuning options before exploration begins.
Users select character perspectives, either first-person or third-person, to shape the viewing experience. World exploration turns the sketched environment into a navigable space. As users move, the system generates the path ahead in real time, responding to taken actions. Camera adjustments remain available during traversal, enabling dynamic navigation through the generated world. World remixing lets users build new interpretations atop existing worlds by modifying their prompts. A gallery offers curated worlds for starting points, while a randomizer provides additional inspiration. Completed creations support video downloads capturing the worlds and explorations. * World sketching: Prompt with text and generated or uploaded images to create a living, expanding environment. Create your character, your world, and define how you want to explore it -- from walking to riding, flying to driving, and anything beyond. For more precise control, World Sketching is integrated with Nano Banana Pro, allowing preview and fine‑tuning before jumping in. You can also set the character perspective (first‑person or third‑person) to control the viewing experience. * World exploration: Your world is a navigable environment waiting to be explored. As you move, Project Genie generates the path ahead in real time based on the actions you take. You can also adjust the camera while traversing the world. * World remixing: Remix existing worlds into new interpretations by building on top of their prompts. Explore curated worlds in the gallery or use the randomizer for inspiration, then build on them. Once done, you can download videos of your worlds and explorations. Project Genie operates as an experimental research prototype within Google Labs. Google emphasizes responsible development of AI systems to benefit humanity. As an early research model, Genie 3 exhibits specific areas needing improvement. 
* Realism and adherence: Generated worlds might not look completely true-to-life or always adhere closely to prompts, images, or real-world physics. * Character control: Characters can sometimes be less controllable or experience higher latency in control. * Generation length: Generations are limited to 60 seconds. Certain Genie 3 capabilities previewed in August remain absent from this prototype. Promptable events, which alter the world during exploration, fall into this category. Further details on model limitations and planned improvements appear in dedicated resources. Building from trusted tester feedback, Google shares the prototype with Google AI Ultra subscribers to observe usage patterns in AI research and generative media. Access rollout starts on January 29, 2026, for eligible users in the United States. Expansion to additional territories follows in due course. Google intends to extend access to Project Genie and its world-building technology as development progresses.
[19]
Google Wants Users to Generate their Own Virtual Worlds with Project Genie - Phandroid
AI's continued reach and influence into different commercial and creative spaces has undoubtedly been a point of contention in recent times, although it's pretty clear that major players within the industry are hell-bent on pushing it to its limits, often in ways that have yet to fully manifest. Take, for example, Google, which recently announced the launch of Project Genie. For those unfamiliar with it, Project Genie is an experimental research prototype that allows users to create and explore their own interactive, real-time environments. The platform runs on the new Genie 3 model alongside Nano Banana Pro and Gemini, and is able to simulate physics and environmental dynamics in real time, predicting how a world evolves based on a user's specific actions. This allows the system to simulate a vast range of scenarios, from robotics and animation to historical settings and fictional landscapes. Google says that Project Genie, a web-based app, is built around World Sketching, World Exploration, and World Remixing. Users can begin by prompting the AI with text or images to define their environment and character. They can also fine-tune their world's aesthetics and choose different visual perspectives before entering the simulation. Once inside, the environment generates dynamically as the user moves, allowing for seamless exploration. Additionally, the "Remix" feature enables users to build upon existing worlds or curated examples from a public gallery, with the option to download videos of their journeys. It should be noted, though, that Project Genie is currently an experimental prototype within Google Labs. As an early-stage research model, the system has limitations such as a 60-second cap on generations, and users might run into inconsistencies in physics or prompt adherence.
Starting today, Google is rolling out availability for Google AI Ultra subscribers aged 18 and older within the United States, with plans to expand to additional territories in the future.
[20]
Google's Premium Gemini Subscribers Can Now Generate Playable AI Worlds
It lets users create, explore, and remix their own interactive worlds Months after previewing the Genie 3 artificial intelligence (AI), Google is now rolling out access to Project Genie as an experimental research prototype. On Thursday, the Mountain View-based tech giant announced that select premium Gemini subscribers will now be able to generate AI-powered playable virtual worlds. The experience is powered by the same Genie 3 model and also leverages Nano Banana Pro and Gemini 3 to let users generate completely interactive worlds based on natural language prompts. Notably, Genie 3 was first announced in August 2025. Google Is Rolling Out Project Genie In a blog post, the tech giant announced and detailed Project Genie. It is currently available to Google AI Ultra subscribers in the US for users above the age of 18. The feature will let them create, explore, and even remix the interactive worlds. In the US, the Ultra plan is priced at $249.99 (roughly Rs. 22,980) per month. The same plan is priced at Rs. 24,500 a month in India, although the feature is not available in the country. Genie is one of the major projects undertaken by Google DeepMind. The division has built three generations of world models focused on virtual world generation, and says the work supports Google's larger artificial general intelligence (AGI) mission. Genie 3 is the most advanced model, which can not only generate 2D and 3D virtual worlds, but can also render the path ahead in real time as the user moves and interacts with the world. The model can also maintain realistic physics even during interactions in the dynamic worlds. The tech giant says its application spans robotics, modelling animation, and more. With Project Genie, Gemini users can add a prompt or upload images to create an interactive and expanding environment. They can also create a character and define how they want to explore the world. The feature supports walking, riding, flying, driving, and more.
Additionally, the World Sketching feature, which is powered by Nano Banana Pro, also lets users have granular control over the generated environment. The World Sketching tool lets users preview what the world would look like, and make changes to fine-tune the final output before exploring. Users can also select either first-person or third-person perspectives. The feature also comes with a Remix tool, where users can build on top of their previously generated worlds. Since it is a research prototype, Google warns that the generated worlds might not look completely true-to-life or closely adhere to prompts and images. It also added that characters can sometimes be less controllable or experience higher latency when exploring. Additionally, all world generations are limited to 60 seconds of exploration time.
[21]
Google DeepMind Introduces Project Genie for Interactive AI World Building | PYMNTS.com
Project Genie is powered by Genie 3, a general-purpose world model that can produce diverse, explorable worlds from simple text and image prompts. Users can create landscapes, characters and environments that evolve in real time, with interactive elements responding to movement and actions. The prototype is part of Google's broader research into advanced AI systems that go beyond static text or image generation toward dynamic "world" building. Simulations can range from natural settings like deserts and forests to complex ecosystems and fantastical scenarios, all generated from user descriptions. Google is opening access to Project Genie for Google AI Ultra subscribers in the United States, allowing them to experiment with the world-generation features. Genie 3 was unveiled in 2025 as a breakthrough "world model" capable of building interactive environments that maintain continuity and logic over several minutes of exploration, marking a departure from earlier, shorter-lived scene generation systems. The introduction of Project Genie arrives amid intense competition in generative AI, with companies like OpenAI and Meta also advancing systems that support dynamic content creation. World models such as Genie are seen by researchers as key steps toward more general forms of AI that can learn and reason within simulated environments. There has also been a broader push in the AI industry toward spatial intelligence, a technical category that emphasizes an AI's ability to understand and generate three-dimensional environments. As reported by PYMNTS in November, World Labs recently introduced Marble, a multimodal world model aimed at enabling AI systems to perceive, predict and interact with physical space.
Marble can generate navigable 3D scenes from text, images, video or sketches and includes interfaces that let users lay out environments before refinement, reflecting a shift beyond traditional language and image models.
[22]
Google rolls out Project Genie for creating interactive AI worlds
Google has started rolling out Project Genie, an experimental research prototype that allows users to create, explore, and remix interactive virtual worlds. Project Genie is built on Genie 3, a general-purpose world model developed by Google DeepMind. The tool is hosted within Google Labs and provides hands-on access to real-time world generation and exploration. Genie 3 and world models A world model simulates how an environment behaves and changes over time, predicting how actions influence outcomes. Earlier AI systems developed by Google DeepMind focused on fixed environments such as Chess and Go. Building more general AI systems, however, requires models that can operate across diverse and dynamic settings. Genie 3 addresses this requirement. Unlike static 3D environments, the model generates the world continuously in real time as users move and interact. It simulates physics, object interactions, and environmental consistency, supporting use cases across robotics research, animation, fiction, modeling, and historical exploration. Project Genie builds on insights gathered from trusted testers across multiple industries and domains. Project Genie is a web-based prototype powered by Genie 3, Nano Banana Pro, and Gemini. It is structured around three core functions: World sketching Users can generate worlds using text prompts along with generated or uploaded images. The tool allows users to define characters, environments, and movement methods such as walking, riding, flying, or driving. World Sketching is integrated with Nano Banana Pro, enabling users to preview and refine visual elements before entering the environment. Users can also select viewing perspectives, including first-person or third-person, prior to exploration. Once created, the environment becomes fully navigable. As users move through the space, Project Genie generates the path ahead dynamically in real time, responding to user actions. Camera positioning can also be adjusted during exploration. 
World remixing

Users can remix existing worlds by building on original prompts. The prototype includes curated worlds and a randomizer gallery for exploration and experimentation. Users can download videos capturing their worlds and exploration sessions.

Project Genie is an early-stage research prototype, and Google has outlined several current limitations. Access to Project Genie is rolling out starting today for Google AI Ultra subscribers in the U.S. (18+). Google states that further improvements and updates are planned, and access will expand to additional regions over time.
Google Project Genie is DeepMind's latest attempt to make AI worlds you can actually wander around
Early access is limited, including short runs and occasional wonky physics.

Google has spent years showing flashes of what "world models" could look like: the kind of AI that does more than generate a picture or a clip, and instead simulates an environment you can interact with. Project Genie is the latest, and most user-facing, version of that idea, a web-based prototype in Google Labs that lets people create, explore, and remix interactive worlds generated by Google DeepMind's Genie 3 model.

Using the service is like most AI tools of the day: easy to get started with, but surprisingly tricky to master. Type a prompt (or bring an image), hit generate, then move through a world that keeps forming in front of you. It is less "here's a finished 3D scene" and more "here's a world that continues to exist as you navigate it". Google is initially rolling out access to Google AI Ultra subscribers in the US who are 18+, with other territories due later.

In DeepMind's framing, a world model is an AI system that predicts how an environment evolves, including how it responds to actions inside it. That matters for more than flashy demos, because the same core capability could, in theory, help AI agents learn to plan, reason, and act in complex settings, not just solve narrow games like Go or Chess. Genie 3 is positioned as part of that broader push towards systems that can handle diverse, open-ended environments.

What distinguishes Genie 3, at least on paper, is real-time generation "ahead" of the user as they move, rather than presenting a static snapshot. Google also stresses consistency, meaning the world should not collapse into incoherence the moment you turn around, even though it is being generated on the fly. Project Genie is not being pitched as a full consumer launch, and Google is fairly explicit about its "experimental research prototype" label.
Still, it is a meaningful step, because it packages a very research-y idea into something you can try, rather than just watch in a highlight reel. Google describes three key pillars to showcase how the tool works.

First is world sketching. You can prompt with text and either a generated or uploaded image to set the starting point, then decide the vibe, the character, and how you want to traverse the scene, from walking to flying to driving. Google is also tying this to Nano Banana Pro, an image generation and editing model, so you can iterate on a preview image before entering the world. Perspective controls, like first-person or third-person, are part of the setup too.

Second is world exploration. Once you are in, your movement steers generation, with the path ahead created in real time. Camera adjustments are supported as you roam.

Third is world remixing. You can take an existing world and build on top of its prompt, or browse curated worlds for inspiration. Google also says you can download videos of your worlds and your explorations, which is a subtle hint at who else this is for: not just researchers, but also creators who want quick, synthetic footage.

Google is upfront about what is missing and what breaks. Generated worlds may not be true-to-life, may not always follow prompts or images closely, and physics can be unreliable. Character control can be finicky, and latency can spike. On top of that, there is a hard limit of 60 seconds per generation in the current prototype.

Those limitations indicate that Project Genie is in its early stages. A 60-second cap means it is closer to an interactive vignette than a world you can genuinely inhabit. Unreliable physics and control mean it is better for exploration and mood than for challenges that require tight interaction. Even Google notes that some Genie 3 capabilities previously discussed are not yet present in the prototype, such as "promptable events" that change the world as you explore.
Practically, Project Genie is a preview of how generative media could evolve from passive output to interactive output. Today's text-to-video tools create something you watch. A world model creates something you can navigate, even if only briefly. If Google can extend duration, improve stability, and give creators more control, this starts to look like a new kind of content pipeline, one that sits somewhere between game engines and generative video.

Strategically, Google is also clearly positioning world models as part of its longer-term AGI ambitions: a way to train agents that can cope with messy, varied environments rather than brittle, narrow tasks. That is the subtext behind tying Genie 3 to robotics, simulation, and broader "real-world scenario" modelling, even if Project Genie itself feels like a creative playground right now.

For the moment, Project Genie is gated: Google AI Ultra subscribers in the US, 18+, with expansion promised "in due course". To get onto the Google AI Ultra plan, you will have to fork out $249.99 per month. If this develops the way Google hopes, the next questions will be less about whether it can generate a whimsical landscape, and more about control, safety, and workflow.

For now, Project Genie feels like an honest experiment: exciting in concept, clearly constrained in practice, and most revealing as a glimpse of what happens when generative AI stops producing "outputs" and starts producing places.
Google DeepMind opens Project Genie to AI Ultra subscribers, letting them generate explorable 3D environments from simple text or image prompts. Built on the Genie 3 world model, the experimental tool renders 720p worlds at 24 fps but limits exploration to 60 seconds. The move signals Google's push toward AGI development while raising concerns in the gaming industry about AI's role in creative work.
Google DeepMind has opened access to Project Genie, an experimental AI world generator that allows Google AI Ultra subscribers in the United States to create interactive worlds from text or image prompts [2]. The tool, available through Google Labs starting Thursday, represents a cleaned-up version of the Genie 3 world model that Google showcased last year but only provided to a small group of trusted testers [1]. Access requires a Google AI Ultra subscription at $250 per month, reflecting the substantial compute costs involved in generating AI video content [1].

Project Genie operates through a process Google calls "world sketching," where users provide text or image prompts to define both an environment and a main character [1]. The system leverages Google's Nano Banana Pro image generation model alongside Gemini to first create a still image, which users can modify before Genie transforms it into an explorable world [1]. The resulting environments render at 720p resolution and approximately 24 frames per second, with users navigating through WASD controls in either first- or third-person view [1][2]. Google DeepMind explains that Genie 3 environments are "auto-regressive," created frame by frame based on world descriptions and user actions, maintaining consistency for several minutes with memory recalling specific interactions for up to a minute [5].
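"Auto-regressive" here means each new frame is conditioned on the world description plus the recent history of frames and actions, rather than rendered from a pre-built 3D scene. The sketch below illustrates that loop in miniature; the `WorldSession` class and its string "frames" are invented for illustration, with only the 24 fps rate and the roughly one-minute interaction memory taken from the article.

```python
# Toy illustration of an auto-regressive generation loop.
# WorldSession is hypothetical; frames here are just strings.
from dataclasses import dataclass, field

FPS = 24               # approximate frame rate reported for Project Genie
MEMORY_SECONDS = 60    # reported memory window for specific interactions

@dataclass
class WorldSession:
    prompt: str
    frames: list = field(default_factory=list)  # (frame, action) history

    def step(self, action: str) -> str:
        # Each frame depends on the prompt, the user's action, and a
        # bounded window of recent history -- nothing is precomputed.
        window = self.frames[-FPS * MEMORY_SECONDS:]
        frame = f"frame{len(self.frames)}<{self.prompt}|{action}|ctx={len(window)}>"
        self.frames.append((frame, action))
        return frame

session = WorldSession(prompt="mossy forest, third person")
first = session.step("W")   # move forward
second = session.step("A")  # strafe left
```

The bounded `window` is the crux: once interactions fall outside it, the model can no longer recall them, which is why consistency holds "for several minutes" rather than indefinitely.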
Google stresses that Project Genie remains a research prototype with notable limitations [1]. World generation and navigation are currently capped at 60 seconds due to budget and compute constraints, with each session requiring dedicated chip resources [2]. Shlomi Fruchter, a research director at DeepMind, explained that "when you're using it, there's a chip somewhere that's only yours and it's being dedicated to your session" [2]. Users experience some input lag, and the model can produce inconsistent results that don't always look or behave correctly [1]. Agents interacting with generated worlds can only perform a limited range of actions, and the system struggles with rendering legible text and simulating real-world locations accurately [5].

Testers have encountered evolving content restrictions within Project Genie. The Verge reported that initially, the system generated knockoffs of Nintendo games like Super Mario and The Legend of Zelda, but by the end of testing, some prompts were blocked due to "interests of third-party content providers" [1][3]. Safety guardrails now prevent generation of content resembling copyrighted material from Disney and other companies, following Disney's December cease-and-desist accusing Google of copyright infringement by training AI models on its characters and IP [2]. TechCrunch testing revealed the model excelled at creating whimsical worlds like marshmallow castles with chocolate rivers in claymation style, but struggled with realistic scenarios [2].
Google DeepMind positions world models as crucial steps toward achieving artificial general intelligence (AGI) [2]. The company envisions a go-to-market plan starting with video games and entertainment before expanding to training AI agents in simulation, including embodied agents like robots [2]. Google states that "building AGI requires systems that navigate the diversity of the real world," with world models simulating environment dynamics and predicting how actions affect them [5]. The release comes as competition in world models intensifies, with Fei-Fei Li's World Labs releasing Marble, Runway launching its own world model, and former Meta chief scientist Yann LeCun's AMI Labs focusing on similar technology [2].

While Google clarifies that Project Genie "is not a game engine and can't create a full game experience," the company sees potential to "augment the creative process, enhancing ideation, and speeding up prototyping" [5]. This positioning has raised concerns in a gaming industry already struggling with widespread layoffs. According to Informa's Game Developers Conference report, 33 percent of surveyed US game developers experienced at least one layoff in the past two years, with 52 percent believing AI has a negative impact on the games industry, a sharp increase from 30 percent last year [5]. Google aims to gather user feedback and training data through Project Genie's broader release, with plans to expand access over time as AI hardware efficiency improves [1]. Users can remix pre-built worlds with new characters and visual styles, and download videos of their explorations [1].