Curated by THEOUTPOST
On Tue, 18 Mar, 12:03 AM UTC
3 Sources
[1]
Klang Games partners with Google Cloud on AI-driven simulation game Seed
Klang, the studio behind the ambitious simulation Seed, announced a strategic partnership with Google Cloud to bring its vision of a dynamic, AI-driven society to life. Using Google Cloud technologies including Google Kubernetes Engine (GKE), Vertex AI, and Gemini models, Berlin-based Klang is creating a persistent, evolving virtual world inhabited by hundreds of thousands of autonomous virtual humans, known as Seedlings. Seed is a massively multiplayer online (MMO) experience where AI-powered Seedlings live, interact, and evolve in real time, even when players are offline.

Seed simulates the future of humanity. At the end of the 21st century, humanity left Earth to find a new home. The planet Avesta in the Tau Ceti system was chosen for its many similarities to Earth. On this lush new planet, players are in charge of nurturing and guiding the human inhabitants, called Seedlings -- each with a mind of their own.

The player guides Seedlings by setting broad, strategic goals rather than micromanaging every moment of their day. This includes choosing educational paths and specializations -- such as focusing on particular skill sets or fields -- shaping their future careers and personal growth. Your influence also extends to relationships, purchases, housing, and décor, defining each Seedling's aspirations even when you're not online. A calendar system is under development to track major life events, making it simple to see what has happened or what's coming up. Beyond that, you can talk directly with your Seedling about their experiences -- like catching up with a friend or relative you haven't seen in a while.

Google Cloud's infrastructure provides the scalability, performance, and reliability necessary to power Seed's complex ecosystem, ensuring seamless growth and continuous operation.
"We're very excited, and we've been running on Google Cloud since the very start," said Mundi Vondi, CEO of Klang, in an interview with GamesBeat.

Berlin-based Klang Games started with a small team of four people prototyping in 2016. It raised funding in 2017 and a larger round in 2019. The company now has more than 100 people and hopes to soft-launch into early access later in 2025.

"It's in a pretty decent state right now. It's playable," Vondi said. "I think we have something really special here."

Jack Buser, global director for games at Google Cloud, said in an interview with GamesBeat that Klang Games has been ahead of the curve. "This is a sign of things that we see in development, but they are further along than most. We'll be able to go into this GDC and show how far Seed has come and how they're using Google Cloud to scale inference, things like that. It is a sign of the future," Buser said.

The project has now incorporated generative AI into the gameplay. "At Klang, we are building the largest simulation of humanity's future ever attempted, with an unprecedented level of detail," said Vondi. "From vast cities where every house can be furnished and fully operational, to Seedlings that behave, remember, learn, and grow with a level of fidelity like nothing we've seen before." Vondi added, "It's incredibly exciting to partner with Google Cloud, who is helping us build the technology to scale this vision beyond what was previously possible, connecting thousands, if not millions, of players."

Google Cloud's technology enables Seed's Seedlings to exhibit unique personalities, form relationships, and shape emergent societies through natural conversations and persistent interactions.
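Characters that "behave, remember, learn, and grow" generally come down to assembling each character's persistent context into every model prompt. A minimal sketch of that idea, assuming nothing about Klang's actual implementation (class and field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Seedling:
    """A hypothetical autonomous character with persistent context."""
    name: str
    traits: list[str]
    memories: list[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        # Persist events so later conversations can reference them.
        self.memories.append(event)

    def build_prompt(self, player_message: str, max_memories: int = 5) -> str:
        # Fold traits and the most recent memories into the model prompt
        # so replies stay consistent with the character's history.
        recent = "; ".join(self.memories[-max_memories:]) or "none"
        return (
            f"You are {self.name}, a character with traits: {', '.join(self.traits)}. "
            f"Relevant memories: {recent}. "
            f"Stay in character. Player says: {player_message!r}"
        )

ada = Seedling("Ada", traits=["nihilist", "dry humor"])
ada.remember("asked to watch TV; complied reluctantly")
prompt = ada.build_prompt("How was your evening?")
```

The prompt, not the model, is what carries the character's continuity: each turn re-sends the traits and recent memories, so any backend model can stay in character.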
Vertex AI and Gemini 2.0 allow for rich, nuanced character interactions, while GKE ensures the complex and ever-growing world scales seamlessly. Multi-region GKE clusters facilitate fast expansion of the game servers, ensuring low latency and seamless transitions between in-game zones, and establishing a scalable backbone for the massive, persistent online world with a large number of players.

"Klang's vision for Seed represents the cutting edge of interactive entertainment, and we're thrilled that Google Cloud's technology is empowering them to bring it to life," said Buser. "The scale and complexity of Seed demands a robust and innovative cloud infrastructure with state-of-the-art AI, and we're proud to provide the technology and expertise necessary to support their ambitious journey."

"Seed requires a cloud platform that can handle the complexity of a continuously evolving society simulation," said Oddur Magnusson, CTO of Klang. "Google Cloud with Vertex AI provides the performance, reliability, and AI capabilities we need to bring this ambitious vision to life."

The partnership also includes Google's consulting and technical expertise to optimize generative AI solutions for cost and scalability, ensuring Seed delivers an unparalleled AI-powered, in-game experience.

Living games

This kind of game is what Buser had in mind when he predicted that AI would lead to "living games."

"We had several other data points that made it become clear that this was the direction that the industry is moving in and, as we stand here today, all that momentum's coming for real," Buser said.

Vondi added, "It's truly incredible what we're capable of doing with AI. It's really coming to life in a very exciting way."

Generative AI came along at the perfect time for Seed. Before, the developers would have relied on emojis and limited interactions, as in The Sims, since genuine conversations between Seedlings and players weren't possible.
Now, with generative AI, the devs can provide each Seedling with its own context -- personality traits, needs, history, and aspirations -- allowing them to hold conversations. Vondi said you can ask the nihilist to watch TV, but it may reply, "I will watch TV. Do not expect me to enjoy it." The characters can reference things from their backstories as memories, and use those memories in conversations with players.

Buser said the game can scale as needed thanks to backend infrastructure that checks a number of different AI models to see which will be the most relevant to use. Gemini 2.0 Flash delivers low-latency answers where delays matter for inference at scale, Buser said. That matters because conversations are happening all over the place in the game. "We will launch with some scale limitations, but as we progress" the large-scale AI processing will improve, Vondi said.

Buser said a traditional online game taps infrastructure such as game servers and services, databases and analytics tools. "What Klang is designing with Seed is truly just a fantastic example of a living game where you have an AI-native game design," Buser said. "We sat down for a conversation to see the vision that Klang had and how it meshed with the thinking we had done at Google Cloud. These things came together. It's not often you have that kind of serendipity where the pieces fit together."

The more consumers buy AI PCs, the more it becomes possible to offload work to local computers and further reduce the cost of AI. "We've done a lot of thinking at Google over the last couple years about building systems within GKE that can assist the game and knowing which workloads are going to work best on the cloud, and which workloads are going to work best on the edge by device," Buser said. "It's top of mind for a lot of developers.
If you are operating a game and you want to run inference on device, but the device isn't powerful enough, or perhaps you're already running enough inference workloads on the device that it can't take anymore, you might want to reroute that to the cloud."
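The rerouting Buser describes can be sketched as a simple placement policy: run inference on the device when it has headroom, otherwise fall back to the cloud. The function and thresholds below are illustrative assumptions, not Google Cloud's actual mechanism:

```python
def place_inference(device_tops: float, device_load: float,
                    min_tops: float = 10.0, max_load: float = 0.8) -> str:
    """Decide where a single inference request should run.

    device_tops: rough on-device AI throughput (TOPS).
    device_load: fraction of that capacity already in use (0.0-1.0).
    """
    if device_tops >= min_tops and device_load < max_load:
        return "device"  # cheap, low-latency local inference
    return "cloud"       # device too weak or saturated: reroute

# A capable, idle device keeps the work local; a weak or busy one reroutes.
capable_idle = place_inference(device_tops=40.0, device_load=0.3)
capable_busy = place_inference(device_tops=40.0, device_load=0.95)
underpowered = place_inference(device_tops=2.0, device_load=0.1)
```

In practice the same decision would also weigh model size, battery state and network conditions, but the shape of the policy is the same: a per-request routing check between edge and cloud.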
[2]
Inworld AI showcases AI case studies as they move to production
The current AI ecosystem wasn't built with game developers in mind. While impressive in controlled demos, today's AI technologies expose critical limitations when transitioning to production-ready games, said Kylan Gibbs, CEO of Inworld AI, in an interview with GamesBeat.

Right now, AI deployment is being slowed because game developers are dependent on black-box APIs with unpredictable pricing and shifting terms, leading to a loss of autonomy and stalled innovation, he said. Players are left with disposable "AI-flavored" demos instead of sustained, evolving experiences.

At the Game Developers Conference 2025, Inworld isn't going to showcase technology for technology's sake. Gibbs said the company is demonstrating how developers have overcome these structural barriers to ship AI-powered games that millions of players are enjoying right now. Their experiences highlight why so many AI projects fail before launch and, more importantly, how to overcome these challenges.

"We've seen a transition over the last few years at GDC. Overall, it's a transition from demos and prototypes to production," Gibbs said. "When we started out, it was really a proof of concept. 'How does this work?' The use case is pretty narrow. It was really just characters and non-player characters (NPCs), and it was a lot of focus on demos."

Now, Gibbs said, the company is focused on production with partners and large-scale deployments, and on actually solving problems.

Getting AI to work in production

Earlier large language models (LLMs) were too costly to put in games. Sending a user's query across the web to a datacenter consumed valuable graphics processing unit (GPU) time, and the answer often came back slowly enough that the user noticed the delay. One thing that has helped with AI costs is that the processing has been restructured, with tasks moving from the server to client-side logic.
However, that can only really happen if the user has a good machine with a capable AI processor or GPU. Inference tasks can be done on local machines, while harder machine learning problems may have to be done in the cloud, Gibbs said.

"Where I think we're at today is we actually have proof that the stuff works at huge scale in production, and we have the right tools to be able to do that. And that's been a great and exciting transition, because by focusing on production we've been able to uncover the root challenges in the AI ecosystem," Gibbs said. "When you're in the prototyping demo mindset, a lot of things work really well, right? A lot of these tools like OpenAI, Anthropic are great for demos, but they do not work when you go into massive, multi-million-user scale."

Gibbs said Inworld AI is focusing on solving the bigger problems at GDC. Inworld AI is sharing the real challenges it has encountered and showing what can work in production.

"There are some very real challenges to making that work, and we can't solve it all on our own. We need to solve it as an ecosystem," Gibbs said. "We need to accept and stop promoting AI as this panacea, a plug-and-play solution. We have solved the problems with a few partners."

Gibbs is looking forward to the proliferation of AI PCs. "If you bring all the processing onto the local machine, then a lot of that AI becomes much more affordable," Gibbs said. The company provides the backend models and works to contain costs.

I noted that Mighty Bear Games, headed by Simon Davis, is creating games with AI agents, where the agents play the game and humans help craft the perfect agents.

"Companions are super cool. You'll see multi-agent simulation experiences, like doing dynamic crowds. If you are focused on a character-based experience, you can have primary characters or background characters," Gibbs said.
"And actually getting background characters to work efficiently is really hard, because when people look at things like the Stanford paper, it's about simulating 1,000 agents at once. We all know that games are not built like that. How do you give a sense of millions of characters at scale, while also doing a level-of-detail system, so you're maximizing the depth of each agent as you get closer to it?"

AI skeptics?

I asked Gibbs what he thought about the stat in the GDC 2025 survey, which showed that more game developers are skeptical about AI this year compared to a year ago. The numbers showed 30% had a negative sentiment on AI, compared to 18% the year before. That's going in the wrong direction.

"I think that we've got to this point where everybody realizes that the future of their careers will have AI in it. And we are at a point before where everybody was happy just to follow along with OpenAI's announcements and whatever their friends were doing on LinkedIn," Gibbs said.

People were likely turned off after they took tools like image generators with text prompts and these didn't work so well in production. Now, as they move into production, they're finding that it doesn't work at scale. And so it takes better tools geared to specific users for developers, Gibbs said.

"We should be skeptical, because there are real challenges that no one is solving. And unless we voice that skepticism and start really pressuring the ecosystem, it's not going to change," Gibbs said.

The problems include cloud lock-in and unpredictable costs; performance and reliability issues; and AI that doesn't evolve. Another problem is controlling AI agents effectively so they don't go off the rails. When players are in a game like Fortnite, getting a response in milliseconds is critical, Gibbs said. AI in games can be a compelling experience, but making it work with cost efficiency at scale requires solving a lot of problems, Gibbs said.
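One common way games honor the millisecond constraint Gibbs mentions is a hard response deadline: if inference misses the budget, the game falls back to a canned line rather than stalling. A toy sketch of that pattern (synchronous for brevity; a real implementation would use async calls with timeouts, and nothing here reflects any specific studio's code):

```python
import time

def respond(generate, deadline_ms: float, fallback: str) -> str:
    """Return the model-generated line if it arrives within the latency
    budget; otherwise use a canned fallback so gameplay never stalls
    waiting on inference."""
    start = time.monotonic()
    reply = generate()
    elapsed_ms = (time.monotonic() - start) * 1000
    return reply if elapsed_ms <= deadline_ms else fallback

# A fast generator makes the budget; a slow one triggers the fallback.
fast = respond(lambda: "On my way!", deadline_ms=200, fallback="...")
slow = respond(lambda: (time.sleep(0.05), "Too late")[1],
               deadline_ms=10, fallback="...")
```

The design choice is that a mediocre-but-instant line usually breaks immersion less than a brilliant line that arrives a second late.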
As for the changes AI is bringing, Gibbs said, "There's going to be a fundamental architecture change in how we build user-facing AI apps."

Gibbs said, "What happens is studios are building with tools and then they get a few months from production and they're like, 'Holy crap! This doesn't work. We need to completely change our architecture.'" That's what Inworld AI is working on, and it will be announced in the future.

Gibbs predicts that many AI tools will be outdated within a matter of months. That's going to make planning difficult. He also predicts that the capacity of third-party cloud providers will break under the strain. "Will that code actually work when you have four million users funneling through it?" Gibbs said. "What we're seeing is a lot of people having to go back and rework their entire code base from Python to C++ as they get closer to production."

Summary of partner demos

At GDC, Inworld will be showcasing several key partner demos that highlight how studios of all sizes are successfully implementing AI. Additionally, Inworld will feature two Inworld-developed technology showcases.

The critical barriers blocking AI games from production and real dev solutions

Below are seven of the key challenges that consistently prevent AI-powered games from making the leap from promising prototype to shipped product, and how studios of all sizes used Inworld to break through these barriers and deliver experiences enjoyed by millions.

The real-time wall: Streamlabs Intelligent Agent

The developer problem: Non-production-ready cloud AI introduces response delays that break player immersion. Unoptimized cloud dependencies result in AI response times of 800 to 1,200 milliseconds, making even the simplest interactions feel sluggish.
All intelligence remains server-side, creating single points of failure and preventing true ownership, yet most developers can find few alternatives beyond this cloud-API-only AI workflow that locks them into perpetual dependency architectures.

The Inworld solution: Logitech G's Streamlabs Intelligent Streaming Agent is an AI-driven co-host, producer, and technical sidekick that observes game events in real time, providing commentary during key moments, assisting with scene transitions, and driving audience engagement -- letting creators focus on content without getting bogged down in production tasks.

"We tried building this with standard cloud APIs, but the 1-2 second delay made the assistant feel disconnected from the action," said the Streamlabs team. "Working with Inworld, we achieved 200 millisecond response times that make the assistant feel present in the moment."

Behind the scenes, the Inworld Framework orchestrates the assistant's multimodal input processing, contextual reasoning, and adaptive output. By integrating seamlessly with third-party models and the Streamlabs API, Inworld makes it easy to interpret gameplay, chat, and voice commands, then deliver real-time actions -- like switching scenes or clipping highlights. This approach saves developers from writing custom pipelines for every new AI model or event trigger. This isn't just faster -- it's the difference between an assistant that feels alive and one that always seems a step behind the action.

The success tax: The Last Show

The developer problem: Success should be a cause for celebration, not a financial crisis. Yet, for AI-powered games, linear or even increasing unit costs mean expenses can quickly spiral out of control as user numbers grow. Instead of scaling smoothly, developers are forced to make emergency architecture changes when they should be doubling down on success.

The Inworld solution: Little Umbrella, the studio behind Death by AI, was no exception.
While the game was an instant hit -- reaching 20 million players in just two months -- the success nearly bankrupted the studio. "Our cloud API costs went from $5K to $250K in two weeks," said their technical director. "We had to throttle user acquisition -- literally turning away players -- until we partnered with Inworld to restructure our AI architecture."

For their next game, they decided to flip the script, building with cost predictability and scalability in mind from day one. Introducing The Last Show, a web-based party game where an AI host generates hilarious questions based on topics chosen or customized by players. Players submit answers, vote for their favorites, and the least popular response leads to elimination -- all while the AI host delivers witty roasts. The Last Show marks their comeback, engineered from the ground up to maintain both quality and cost predictability at scale. The result? A business model that thrives on success rather than being threatened by it.

The quality-cost paradox: Status

The developer problem: Better AI quality often correlates with higher costs, forcing developers into an impossible decision: deliver a subpar player experience or face unsustainable costs. AI should enhance gameplay, not become an economic roadblock.

The Inworld solution: Wishroll's Status (which ranked as high as No. 4 in the App Store Lifestyle category) immerses players in a fictional world where they can roleplay as anyone they imagine -- whether a world-famous pop star, a fictional character, or even a personified ChatGPT. Their goal is to amass followers, develop relationships with other celebrities, and complete unique milestones. The concept struck a chord with gamers, and by the time the limited-access beta launched in October 2024, Status had taken off. TikTok buzz drove over 100,000 downloads, with many gamers getting turned away, while the game's Discord community ballooned from a modest 100 users to 60,000 within a few days.
Only two weeks after its public beta launch in February 2025, Status surpassed a million users.

"We were spending $12 to $15 per daily active user with top-tier models," said CEO Fai Nur, in a statement. "That's completely unsustainable. But when we tried cheaper alternatives, our users immediately noticed the quality drop and engagement plummeted."

Working with Inworld's ML Optimization services, Wishroll was able to cut AI costs by 90% while improving quality metrics. "We saw how Inworld solved similar problems for other AI games and thought, 'This is exactly what we need,'" explained Fai. "We could tell Inworld had a lot of experience and knowledge on exactly what our problem was -- optimizing models and reducing costs."

"If we had launched with our original architecture, we'd be broke in days," Fai explained. "Even raising tens of millions wouldn't have sustained us beyond a month. Now we have a path to profitability."

The agent control problem: Partnership with Virtuos

The developer problem: Even with sustainable performance benchmarks met, complex narrative games still require sophisticated control over AI agents' behaviors, memories, and personalities to deliver deeply immersive and engaging experiences. Traditional approaches either lead to unpredictable interactions or require prohibitively complex scripting, making it nearly impossible to create believable characters with consistent personalities.

The Inworld solution: Inworld is partnering with Virtuos, a global game development powerhouse known for co-developing some of the biggest triple-A titles in the industry, such as Marvel's Midnight Suns and Metal Gear Solid Delta: Snake Eater. With deep expertise in world-building and character development, Virtuos immediately saw the need to provide developers with precise control over the personalities, behaviors, and memories of AI-driven NPCs.
This ensures storytelling consistency while allowing players' choices to dynamically influence the narrative's direction and outcome. Inworld's suite of generative AI tools provides the cognitive core that brings these characters to life while equipping developers with full customization capabilities. Teams can fine-tune AI-driven characters to stay true to their narrative arcs, ensuring they evolve logically and consistently within the game world. With Inworld's tools, Virtuos can focus on what it does best -- creating rich, immersive experiences.

"At Virtuos, we see AI as a way to enhance the artistry of game developers and accurately bring their visions to life," said Piotr Chrzanowski, CTO at Virtuos, in a statement. "By integrating AI, we enable developers to add new dimensions to their creations, enriching the gaming experience without compromising quality. Our partnership with Inworld opens the door to gameplay experiences that weren't possible before."

A prototype showcasing the best of both teams is in the works, and interested media are invited to stop by the Virtuos booth at C1515 for a private demo.

The immersive dialogue challenge: Winked

The developer problem: Nanobit's Winked is a mobile interactive narrative experience where players build relationships through dynamic, evolving conversations, including direct messages with core characters. To meet player expectations, the player-facing AI-driven dialogue had to exceed what was possible even with frontier models -- offering more personal, emotionally nuanced, and stylistically unique interactions. Yet achieving that level of quality was beyond the capabilities of off-the-shelf models, and the high costs of premium AI solutions made scalability a challenge.

The Inworld solution: Using Inworld Cloud, Nanobit trained and distilled a custom AI model tailored specifically for Winked.
This model delivered superior dialogue quality -- more organic, personal, and contextually aware than off-the-shelf solutions -- while keeping costs at a fraction of traditional cloud APIs. The AI integrated seamlessly into Winked's core game loops, enhancing user engagement while maintaining financial viability. Beyond improving player immersion, this AI-driven dialogue system remembers past conversations and carries the storyline forward, providing the player with relationships that evolve as chats progress. This in turn encourages players to engage in longer conversations and return more frequently as they grow closer to characters.

The multi-agent orchestration challenge: Realistic multi-agent simulation

The developer problem: Creating living, believable worlds requires coordinating multiple AI agents to interact naturally with each other and the player. Developers struggle to create social dynamics that feel organic rather than mechanical, especially at scale.

The Inworld solution: Our Realistic Multi-agent Simulation demonstrates how to effectively orchestrate multiple AI agents into cohesive, living worlds using Inworld. By implementing sophisticated agent coordination systems, contextual awareness, and shared environmental knowledge, this simulation creates believable social dynamics that emerge naturally rather than through scripted behaviors. Whether forming spontaneous crowds around exciting in-game events, reacting to shared group emotes, or engaging in multi-character conversations, these autonomous agents showcase how proper agent orchestration enables emergent, lifelike behaviors at scale. This technical demonstration underscores the potential for deep player immersion and sustained engagement by bringing social hubs to life -- where multiple characters interact with consistent personalities, mutual awareness, and collective response patterns that create the feeling of a truly living world.
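Shared environmental knowledge of the kind described above is often built as a "blackboard": agents post world events to a common store and read it before acting, so group reactions emerge without per-pair scripting. A minimal sketch of the pattern, not Inworld's actual API:

```python
class Blackboard:
    """Shared environmental knowledge that all nearby agents can observe."""
    def __init__(self) -> None:
        self.events: list[str] = []

    def post(self, event: str) -> None:
        self.events.append(event)

class Agent:
    def __init__(self, name: str, board: Blackboard) -> None:
        self.name, self.board = name, board

    def react(self) -> str:
        # Each agent reacts to the latest shared event rather than to
        # privately scripted triggers, so crowds respond together.
        if not self.board.events:
            return f"{self.name} idles"
        return f"{self.name} reacts to {self.board.events[-1]}"

board = Blackboard()
crowd = [Agent(n, board) for n in ("Mira", "Tomas")]
board.post("street performance")
reactions = [a.react() for a in crowd]
```

Because coordination lives in the shared store rather than in each agent, the same mechanism scales from two characters to a crowd without new scripting.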
The hardware fragmentation challenge: On-device Demo

The developer problem: AI features optimized for high-end devices fail on mainstream hardware, forcing developers to either limit their audience or compromise their vision. AI vendors also obscure critical capabilities required for on-device inference (distilled models, deep fine-tuning and distillation, runtime model adaptation) to maintain control and protect recurring revenue.

The Inworld solution: While on-device is the key to a more scalable future for AI and games, AI hardware in gaming doesn't have a one-size-fits-all solution. Ensuring consistent performance and accessibility for users on various devices can easily drive up complexity and cost. To achieve scalability, AI solutions must adapt seamlessly across diverse hardware configurations. Our on-device demo showcases AI-powered cooperative gameplay running seamlessly across three hardware configurations. This demo isn't about theoretical compatibility; it's about achieving consistent performance across diverse hardware, allowing developers to target the full spectrum of gaming devices without sacrificing quality.

The development difference: Going beyond prototypes

The gap between prototype and production is where most AI game projects collapse. While out-of-the-box plugins are useful for prototyping, they break under real-world conditions.

"We've watched incredible AI game prototypes die in the transition to production for four years now," said Evgenii Shingarev, VP of Engineering at Inworld, in a statement. "The pattern is always the same: impressive demo, enthusiastic investment, then the slow realization that the economics and technical architecture don't support real-world deployment."

At Inworld, we've worked relentlessly to close this prototype-to-production gap, developing solutions that address the real-world challenges of shipping and scaling AI-powered games -- not just showcasing impressive demos.
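Targeting "the full spectrum of gaming devices" usually means shipping several distillations of the same model and picking one at startup based on the device's memory budget. A sketch with invented tier names and sizes (real distilled-model families and thresholds vary):

```python
def pick_model(vram_gb: float) -> str:
    """Choose a model variant that fits the device's memory budget.

    Tier boundaries are illustrative, not any vendor's actual sizing.
    """
    if vram_gb >= 16:
        return "full-7b"       # high-end GPU: run the large model locally
    if vram_gb >= 6:
        return "distilled-1b"  # mainstream hardware: smaller distillation
    return "cloud-fallback"    # low-end device: route inference to a server

high_end = pick_model(24)
mainstream = pick_model(8)
low_end = pick_model(2)
```

The point is that fragmentation is handled once, at model-selection time, so the rest of the game code talks to a single inference interface regardless of tier.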
At GDC, Inworld is excited to share experiences that don't just make it to launch, but thrive at scale, said Gibbs. The company's booth is at C1615. Instead of talking about the future of gaming with AI, we'll show the real systems solving real problems, developed by teams who have faced the same challenges you're encountering, Gibbs said. The path from AI prototype to production is challenging, but with the right approach and partners who understand what it takes to ship AI experiences that players love, it's absolutely achievable, Gibbs said.

Session with Jim Keller of Tenstorrent: Breaking down AI's unsustainable economics

Jim Keller, now head of Tenstorrent, is a legendary hardware engineer who led important processor projects at companies such as Apple, AMD and Intel. He will be on a GDC panel with Inworld CEO Kylan Gibbs for a candid examination of AI's broken economic model in gaming and the practical path forward.

"Current AI infrastructure is economically unsustainable for games at scale," said Keller, in a statement. "We're seeing studios adopt impressive AI features in development, only to strip them back before launch once they calculate the true cloud costs at scale."

Gibbs said he is looking forward to talking with Keller on stage about Tenstorrent, which aims to serve AI applications at scale at a fraction of the cost. The session will explore concrete solutions to these economic barriers. Drawing on Keller's deep hardware expertise from Tenstorrent, AMD, Apple, Intel, and Tesla, and Inworld's expertise in real-time, user-facing AI, we'll explore how to blend on-device compute with large-scale cloud resources under one architectural umbrella. Attendees will gain candid insights into what actually matters when bringing AI from theory into practice, and how to build a sustainable AI pipeline that keeps costs low without sacrificing creativity or performance.
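The unsustainable economics Keller describes are easy to see with back-of-the-envelope arithmetic: per-interaction cloud costs that look negligible in a demo grow linearly with players. All numbers below are illustrative, not any provider's real pricing:

```python
def monthly_llm_cost(dau: int, interactions_per_day: int,
                     tokens_per_interaction: int,
                     usd_per_million_tokens: float) -> float:
    """Linear cost model: every interaction pays full price, no caching
    or distillation, over a 30-day month."""
    daily_tokens = dau * interactions_per_day * tokens_per_interaction
    return daily_tokens / 1_000_000 * usd_per_million_tokens * 30

# A demo with 1,000 players looks cheap...
small = monthly_llm_cost(1_000, 20, 500, 2.0)
# ...but the identical design at 1 million players costs 1,000x as much.
large = monthly_llm_cost(1_000_000, 20, 500, 2.0)
```

This linearity is the "success tax" the article's earlier examples hit ($5K to $250K in two weeks); distillation, caching, and on-device inference are all ways of bending that line down.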
Session with Microsoft: AI innovation for game experiences

Gibbs will also join Microsoft's Haiyan Zhang and Katja Hofmann to explore how AI can drive the next wave of dynamic game experiences. This panel bridges research and practical implementation, addressing the critical challenges developers face when moving from prototypes to production. The session showcases how our collaborative approach solves industry-wide barriers preventing AI games from reaching players -- focusing on proven patterns that overcome the reliability, quality, and cost challenges most games never survive.

I asked how Gibbs could convince a game developer that AI is a train they can get on, and not a train coming right at them.

"Unfortunately, there's lots of other partners that we weren't able to share publicly. A lot of the triple-A's [are quiet]. It's happening, but it requires a lot of work. We're starting to engage with developers where the requirements are being creative. If they have a game that they're planning on launching in the next year or two years, and they don't have a clear line of sight on how to do that efficiently at scale or cost, we can work with them on that," Gibbs said. "There are fundamentally different ways that it can be structured and integrated into games. And we're going to have a lot more announcements this year as we're trying to make them more self-serve."
[3]
Google Cloud is helping gaming startups use AI to change the industry - SiliconANGLE
Artificial intelligence is making significant advances in the gaming industry by streamlining development and creating more immersive experiences, from reactive characters to dynamic environments and personalized gameplay. At the Game Developers Conference 2025 today, Google Cloud revealed how AI startups and AI-native gaming studios are using its AI architecture to build games in new ways and overcome the biggest challenges in the industry.

To understand how the gaming industry is evolving, SiliconANGLE spoke with Jack Buser, director for games at Google Cloud, and executives at AI gaming startups about how the industry is changing and what developers and players can expect.

"First and foremost, there's AI for game development, and then there's another category, which is AI for new player experiences," Buser said. "We saw game companies start to put generative AI into actual production pipelines over a year ago now."

In many cases, game studios can use generative AI to handle mundane tasks that consume development time, freeing up developers and creative workers to focus on more innovative aspects of game design. "We see game companies actually use this to accelerate their game development cycles," he explained.

One company helping gaming studios accelerate their development cycles is Common Sense Machines Inc., which builds state-of-the-art generative AI models and agents that let users produce controllable, production-ready 3D artwork from images, text and sketches. "Creating anything 3D can take hours or days, even for professionals right now," Tejas Kulkarni, co-founder and chief executive of Common Sense Machines, told SiliconANGLE. "A single asset, like a chair, could take 16 hours to make before you can see it on a website or in a game or in an industrial simulation platform."
The company describes its tool as a "3D copilot" that lets users upload an image or a sketch, get their 3D model, fine-tune it and then export it into their development workflow. Common Sense Machines works with an AI-native gaming studio named Cosmic Lounge, powered by Google Cloud's fully managed model development and engineering platform, Vertex AI. CSM's service helps generate visually engaging promotional content for marketing, develop seasonal decorations for in-game content, such as Christmas-themed pet designs, and rapidly refresh game content with new variations to keep players engaged.

Testing can be tedious and slow, requiring long hours repeating the same boring tasks. That's why Nunu.ai created multimodal AI agents with Google Gemini models that can navigate through games much like human players, identifying bugs while reducing tedium and burnout for testers. "Why AI agents? It mostly comes down to the fact that games are interactive environments. So, it's very hard to write a test script because it changes," Nicolas Muntwyler, co-founder of Nunu.ai, told SiliconANGLE. "Writing a script sometimes doesn't work. If there's suddenly a new event, a new pop-up and it just breaks, that's a lot of maintenance."

Using Nunu.ai's AI agents, a tester can simply give the agent a goal in natural language and let it explore the game. It will then play like a real person -- using items, shooting guns, crafting and interacting with the user interface -- just as a player would. These are the kinds of tasks a tester would need to perform whenever a game version or function changes, and they are also the most monotonous parts of game testing. A tester could ask the AI to complete a specific task, such as crafting an axe or playing through a chapter of the game. Muntwyler noted that the AI is slightly slower than a human or a script because it mimics how a person would naturally play.
If it succeeds, it provides a pass checkmark; if it encounters an error, it reports the issue, describes how to reproduce the bug, and provides a visual trace so testers can follow along. "What we want to do is take the large tester 'to-do list,' automate as much as we can, and free up testers to do testing that is valuable," said Muntwyler. "In my opinion, that's testing whether the game feels fun to play or not, something AI can't do. That's exactly where humans would shine, and they could do a great job at it."

Series Entertainment Inc., an AI-native game studio, has been using Google Cloud's AI infrastructure to streamline its content production and enable its developers to build games faster than traditional teams. It's also using AI to drive reactive gameplay experiences that adjust to what players do. "We coined a term which drives our strategy, called 'Living Games,'" Buser said. "This represents what happens when these worlds collide. You're actually able to not just transform what a game company and game development studio can look like and how they behave, but also you can transform the player experience."

Series Entertainment uses AI to free its development team from repetitive tasks, giving developers more time for the creative work of telling stories. The studio also intends to build on the "living games" idea by putting generative AI directly into the gameplay loop, adding it to the game mechanics. "Something we're going to be releasing next year [is] we have AI built into the game mechanics that interact with the player to help tell the story," Josh English, chief technology officer of Series Entertainment, told SiliconANGLE. "So based on the player's interactions with the game environment itself, it can adapt to tell an immersive story together." He said that at an ordinary game studio, a great deal of work goes into producing new content to keep the player base happy.
Adding AI to the equation lets the studio tailor the game and gameplay experience on the fly. "It radically reduces the cost involved and the time involved in producing new content," English added. "At the same time, on the back end, it's still empowering our creators to build new experiences as well. So, I think it's bringing the best of both worlds."
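The testing loop Muntwyler describes -- give the agent a natural-language goal, let it act in the game, and collect a pass/fail report with a reproducible trace -- can be sketched as follows. This is a hypothetical illustration, not Nunu.ai's actual code: the stub game, the `decide` policy, and the action vocabulary are invented stand-ins, and a production agent would query a multimodal model (such as Gemini) with screenshots instead of the scripted policy used here.

```python
from dataclasses import dataclass, field

@dataclass
class TestReport:
    goal: str
    passed: bool = False
    steps: list = field(default_factory=list)  # the repro trace for testers

def run_agent(goal, game, decide, max_steps=20):
    """Drive a game toward a goal. `decide(goal, observation)` returns an
    action string, or "DONE"/"FAIL" to end the run. The returned report
    records every action so a human can reproduce the outcome."""
    report = TestReport(goal=goal)
    for _ in range(max_steps):
        obs = game.observe()          # a screenshot in a real system
        action = decide(goal, obs)
        if action == "DONE":
            report.passed = True
            break
        if action == "FAIL":
            break
        report.steps.append(action)
        game.act(action)
    return report

# Minimal stub game: crafting an axe requires wood and stone.
class StubGame:
    def __init__(self):
        self.inventory = []
    def observe(self):
        return {"inventory": list(self.inventory)}
    def act(self, action):
        if action.startswith("gather "):
            self.inventory.append(action.split()[1])
        elif action == "craft axe" and {"wood", "stone"} <= set(self.inventory):
            self.inventory.append("axe")

def scripted_decide(goal, obs):
    """Stand-in for a multimodal model call: pick the next step toward an axe."""
    inv = set(obs["inventory"])
    if "axe" in inv:
        return "DONE"
    if "wood" not in inv:
        return "gather wood"
    if "stone" not in inv:
        return "gather stone"
    return "craft axe"

report = run_agent("craft an axe", StubGame(), scripted_decide)
```

Running this yields a passing report whose `steps` list doubles as the reproduction recipe; on a failure, the same trace plus the final observation is what a tester would follow along with.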
Google Cloud partners with gaming companies to leverage AI for innovative game development and immersive player experiences, showcasing the technology's potential to transform the gaming industry.
Google Cloud is at the forefront of a gaming revolution, partnering with innovative startups to harness the power of artificial intelligence (AI) in game development and player experiences. At the Game Developers Conference (GDC) 2025, several collaborations were highlighted, demonstrating the transformative potential of AI in the gaming industry [1][2][3].
Berlin-based Klang Games has partnered with Google Cloud to create "Seed," a massively multiplayer online (MMO) game that simulates a futuristic human society. Utilizing Google Cloud's technologies, including Google Kubernetes Engine (GKE), Vertex AI, and the Gemini model, Seed features AI-driven virtual humans called "Seedlings" that live, interact, and evolve in real-time [1].
Mundi Vondi, CEO of Klang, emphasized the unprecedented scale of their simulation: "We are building the largest simulation of humanity's future ever attempted, with an unprecedented level of detail" [1]. The game aims to launch into early access this year, showcasing the potential of AI to create dynamic, persistent virtual worlds.
Inworld AI, led by CEO Kylan Gibbs, is addressing the challenges of implementing AI in production-ready games. The company is demonstrating how developers have overcome structural barriers to ship AI-powered games that millions of players are already enjoying [2].
Gibbs highlighted the transition from prototypes to production: "We've seen a transition over the last few years at GDC. Overall, it's a transition from demos and prototypes to production" [2]. Inworld AI is focusing on solving larger-scale problems, such as cost management and efficient AI processing, to make AI integration in games more feasible and affordable.
Google Cloud is supporting various AI-native gaming studios and tool developers:
Common Sense Machines: This startup is accelerating 3D asset creation with its "3D copilot" tool, significantly reducing development time for game assets [3].
Nunu.ai: The company has created multimodal AI agents using Google Gemini models to automate game testing, reducing tedium for human testers [3].
Series Entertainment: This AI-native game studio is using Google Cloud's AI infrastructure to streamline content production and enable reactive gameplay experiences [3].
Jack Buser, director for games at Google Cloud, introduced the concept of "Living Games," which represents the convergence of AI-driven development and player experiences. This approach allows for more dynamic, adaptive gameplay and storytelling [3].
Josh English, CTO of Series Entertainment, explained their implementation: "We have AI built into the game mechanics that interact with the player to help tell the story. So based on the player's interactions with the game environment itself, it can adapt to tell an immersive story together" [3].
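English's description -- game mechanics that react to the player's actions in order to steer the story -- can be illustrated with a minimal sketch. The function and story beats below are invented for illustration and are not Series Entertainment's implementation; in a production "living game," the hard-coded branching would be replaced by calls to a generative model conditioned on the player's action history.

```python
# Hypothetical adaptive-storytelling hook: pick the next narrative beat
# from the player's recent actions. A real system would prompt a generative
# model here instead of branching on fixed strings.

def choose_beat(player_actions):
    """Return a story beat that reflects what the player has actually done."""
    if "helped villager" in player_actions:
        return "The villagers greet you as a friend and share a rumour."
    if "stole supplies" in player_actions:
        return "Guards eye you warily; shops quietly raise their prices."
    return "The town carries on, indifferent to a stranger."

print(choose_beat(["explored forest", "helped villager"]))
# -> The villagers greet you as a friend and share a rumour.
```

Because the beat is derived from logged actions rather than a fixed script, the same town scene plays out differently for every player, which is the "adapt to tell an immersive story together" behavior English describes.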
Despite the excitement surrounding AI in gaming, there are still challenges to overcome. Gibbs noted increased skepticism among developers, attributing it to difficulties in scaling AI solutions from prototypes to production [2]. However, the industry continues to push forward, with Google Cloud and its partners working to address these challenges and unlock the full potential of AI in gaming.
© 2025 TheOutpost.AI All rights reserved