9 Sources
[1]
Pokémon Go's AR data has been turned into centimeter-accurate navigation for delivery robots
Connecting the dots: Pokémon Go's global AR craze is now steering something far more prosaic than virtual Pikachu: real delivery robots trying to find the right doorway on a crowded city block. The same location data and street-level imagery that once anchored monsters to sidewalks and plazas have been repurposed by Niantic, and Coco Robotics is now using that technology to guide its sidewalk bots through dense urban areas where GPS alone is too unreliable to keep them on course.

Niantic Spatial, an AI spinout formed in 2025, has turned years of mobile gaming data into what it describes as a high-precision world model of the physical environment. The company is now commercializing that work through a visual positioning system that can locate devices to within a few centimeters using only camera input and map context. Its first large-scale deployment is with Coco Robotics, a last-mile delivery startup operating roughly a thousand sidewalk robots across US and European cities, where satellite signals are often too noisy to support reliable autonomy.

The technical problem Niantic Spatial is tackling is straightforward to describe but difficult to solve. GPS degrades badly in dense cities, with position estimates drifting by tens of meters as signals bounce off glass and concrete. That level of error can place a delivery bot on the wrong block or even the wrong side of the street. Coco's robots, which travel at about five miles per hour and carry loads ranging from multiple extra-large pizzas to several grocery bags, must hit promised arrival times and precise pickup and drop-off points if they are to match or exceed human couriers. Niantic Spatial's alternative is a visual positioning system (VPS) that localizes a device based on what it sees rather than relying on radio signals alone.
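The articles describe the core VPS idea only at a high level: match what the camera currently sees against a database of previously captured, precisely posed images, and report the pose of the best match. The sketch below illustrates that retrieval step in miniature. Everything here is an illustrative assumption, not Niantic Spatial's actual pipeline: the tiny hand-made "map" database, the three-number descriptors standing in for learned image embeddings, and the `localize` function name are all hypothetical.

```python
import math

# Hypothetical mini "map": each entry pairs an image descriptor (a toy
# feature vector standing in for a learned embedding) with the precise
# pose recorded when the photo was taken. Values are made up.
MAP_DB = [
    {"desc": [0.9, 0.1, 0.0], "lat": 34.0522, "lon": -118.2437, "heading": 90.0},
    {"desc": [0.1, 0.8, 0.1], "lat": 34.0525, "lon": -118.2440, "heading": 180.0},
    {"desc": [0.0, 0.2, 0.9], "lat": 34.0528, "lon": -118.2435, "heading": 270.0},
]

def cosine_sim(a, b):
    # Cosine similarity between two descriptor vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def localize(query_desc):
    # Return the pose of the map image most similar to the query image.
    best = max(MAP_DB, key=lambda entry: cosine_sim(query_desc, entry["desc"]))
    return best["lat"], best["lon"], best["heading"]

# A query descriptor close to the first map entry resolves to its pose.
lat, lon, heading = localize([0.85, 0.15, 0.05])
```

A production system would of course match millions of learned descriptors with approximate nearest-neighbor search and then refine the pose geometrically; the point here is only the retrieve-then-report-pose structure the articles describe.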
Over the past several years, the company has aggregated data from Pokémon Go and its earlier augmented-reality title, Ingress. Both games encouraged players to visit specific real-world locations such as gyms, battle arenas, and other points of interest. Those gameplay loops produced a dense global dataset of images captured in urban settings, each paired with rich metadata from the phone, including latitude and longitude, camera orientation, device pose, motion data, and other sensor readings.

Niantic Spatial says it trained its models on roughly 30 billion images, heavily clustered around more than a million "hot spot" locations photographed from many angles, at different times of day, and under varied weather conditions. Because each frame is tied to a centimeter-scale pose estimate, the training set effectively functions as a multi-view 3D sampling of city streets, crosswalks, storefronts, and building facades. The company then trains its model to infer an exact location and orientation from a handful of current images, even in areas that are less thoroughly covered than those original hot spots.

For Coco, that means its robots can fuse GPS with camera-based localization from Niantic Spatial's model. Each unit carries four hip-height cameras that look in all directions, a perspective different from a person holding up a phone but one that Coco says was straightforward to adapt to the existing data. Coco's robots have already logged hundreds of thousands of deliveries and more than a million miles across Los Angeles, Chicago, Miami, Jersey City, and Helsinki, giving the company a baseline against which to measure improvements in reliability from the new system. Visual positioning itself is not new, but it has historically been constrained by the availability and coverage of high-quality imagery.
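The articles say Coco's robots "fuse GPS with camera-based localization" but do not describe how. A standard way to combine two position estimates with very different error bars is inverse-variance weighting, shown below as a minimal sketch in a local planar frame. The function name, coordinates, and noise figures are illustrative assumptions, not Coco's or Niantic Spatial's actual fusion logic.

```python
def fuse(gps_pos, gps_sigma_m, vps_pos, vps_sigma_m):
    # Inverse-variance weighted fusion of two 2D position estimates
    # (meters in a local planar frame). Each estimate is weighted by
    # 1/sigma^2, so the noisier fix contributes proportionally less.
    wg = 1.0 / gps_sigma_m ** 2
    wv = 1.0 / vps_sigma_m ** 2
    fused = [(wg * g + wv * v) / (wg + wv) for g, v in zip(gps_pos, vps_pos)]
    fused_sigma = (1.0 / (wg + wv)) ** 0.5
    return fused, fused_sigma

# A GPS fix drifting by tens of meters versus a VPS fix good to a few
# centimeters: the fused estimate lands almost exactly on the VPS fix.
pos, sigma = fuse(gps_pos=[25.0, -12.0], gps_sigma_m=20.0,
                  vps_pos=[0.4, 0.1], vps_sigma_m=0.05)
```

With a 20 m GPS sigma against a 5 cm VPS sigma, the GPS term is weighted roughly 160,000 times less, which matches the articles' framing: in urban canyons the visual fix effectively takes over, while GPS still anchors the estimate where imagery coverage is thin. A real robot would run this inside a Kalman or factor-graph filter that also tracks velocity and heading over time.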
Niantic Spatial's bet is that the sheer volume and diversity of its crowdsourced data gives it an advantage over rivals that build maps primarily using their own sensor fleets. Other delivery-robot vendors, such as Starship Technologies, use their sensors to build local 3D maps of edges, poles, and building outlines as they move through an area, then rely on those maps for subsequent runs. By contrast, Niantic Spatial aims to maintain a global, shared geospatial model and expose it through an API to any robot, phone, or headset that needs to know exactly where it is.

The company calls that model a "living map": a virtual representation of the world that is constantly updated as machines move through it. As Coco's robots and other future partners traverse sidewalks and streets, their sensors can contribute fresh observations that refine and extend Niantic Spatial's underlying maps. The aim is not just geometric accuracy but also semantic understanding, with objects tagged and described in ways that make sense to machines.

Niantic's leaders describe this effort as a continuation of long-running work in digital mapping rather than a departure from it. As mapping has evolved from 2D to 3D and into dynamic "digital twin" simulations, the core link between map coordinates and physical locations has remained. What is changing is the primary consumer of those maps. Increasingly, it is machines rather than humans. In that view, the same spatial intelligence that once kept virtual Pikachu aligned with the sidewalk is now being repurposed to keep a 100-pound delivery robot on course through traffic and weather.
[2]
Pokémon GO Players Unknowingly Helped Build a 30 Billion AR Image Map of the World
In 2016, Niantic launched Pokémon GO, an augmented reality (AR) game that took the world by storm. Now, 10 years later, spin-off company Niantic Spatial is using all the visual data gathered by players to build a massive geospatial model that can power robots and AI. Niantic Spatial is an AI company spun off of Niantic when it was sold to Scopely last May. Scopely is owned by the Savvy Games Group, which is part of Saudi Arabia's Public Investment Fund. Niantic Spatial, however, is owned by the original Niantic Investors, with some investment from Scopely. As explained by the MIT Technology Review, Pokemon GO (along with Pikmin Bloom and Monster Hunter Now) had access to players' smartphone cameras during gameplay. That access was not going to be ignored, and the games pulled in an astronomical amount of visual data. All three games were hugely popular, but GO, in particular, was a massive success. "Five hundred million people installed that app in 60 days," Brian McClendon, CTO at Niantic Spatial, tells MIT. It continued to have a large user base for a decade, each one of those players continuing to grab visual data while chasing Pokémon around the planet. The result is what it calls its Large Geospatial Model, or LGM, which is at the core of its Visual Positioning System, or VPS, that is based on the 30 billion images captured by Niantic Games users over 10 years. Niantic Spatial launched using that trove of what it characterizes as "crowdsourced data" to build its LGM. Because it has access to those 30 billion AR images of urban landmarks that have been geotagged with extremely accurate location markers, thanks to the hundreds of millions of Pokémon GO players, the company can pinpoint a user's exact location on a map to within a few centimeters based on landmarks and buildings within a field of view. 
"Our technology is based on a third-generation digital map that captures the content of the world at a level of fidelity never before achieved, enabling both people and machines to understand it in new exciting ways. This is part of the connective tissue that will enable AI to meaningfully understand and interact with the physical world," Niantic Spatial says. This data is already being used in a collaboration with Coco Robotics, a startup that builds delivery robots. "The promise of last-mile robotics is immense, but the reality of navigating chaotic city streets is one of the hardest engineering challenges," John Hanke, CEO of Niantic Spatial, says. "We are thrilled to be working with Coco Robotics as our first robotics partner and deploying spatial intelligence to help solve these challenges head-on." Coco cannot rely on GPS in cities because radio signals are untrustworthy, as they can bounce off buildings and interfere with each other. "It gives us reliable access to localization services that further improve robot navigation," Zach Rash, Co-Founder and CEO of Coco Robotics, says of his company's partnership with Niantic Spatial. "It turns out that getting Pikachu to realistically run around and getting Coco's robot to safely and accurately move through the world is actually the same problem," he explains to MIT. "We know where you're standing within several centimeters of accuracy and, most importantly, where you're looking," Brian McClendon, CTO at Niantic Spatial, adds. "We had a million-plus locations around the world where we can locate you precisely." Niantic Spatial's technology is impressive, but it wouldn't have been possible without Pokémon. The more sinister aspect of this story is that Niantic was capturing this data without its users necessarily knowing it was happening, getting it for free, and now it's being used to drive robotics and, theoretically, for anything else, as long as a company is willing to pay Niantic for it.
[3]
Pokémon Go players built a 30-billion-photo map that's now training robots to deliver your pizza | Fortune
Pikachus at every street corner. Leveling up before getting into the gym. "Pokémon Go to the polls." You remember this era well: Pokémon Go became a frenzy, with hundreds of millions taking to the streets for their chance to snap up the rare Azelf or special edition Charizard. Now, not only does it seem that Pokémon Go took the world by storm, but it also was using crowdsourced data to map it. Over the past decade, Pokémon Go players voluntarily submitted photos and short videos of public landmarks, street corners, storefronts, and urban intersections -- all coming together to create a dataset that now stands at 30 billion images captured at ground level, across nearly every major city on the planet. Niantic Spatial, the enterprise AI and mapping division spun from Niantic Inc., has spent years converting that trove into something the robotics industry has never seen before: a photorealistic, street-level, continuously updated model of the physical world, built specifically for robots. That model is now being deployed to navigate Coco Robotics' roughly 1,000 delivery bot fleet operating in cities across the country and around the world, including Los Angeles, Chicago, Miami, Jersey City, and Helsinki, logging millions of miles of deliveries to date. Brian McClendon, Niantic Spatial's chief technology officer and one of the original creators of Google Earth, explains the data strategy plainly. "We look at the player data as very high-quality ground training data for other lower-quality datasets," McClendon told Fortune in a statement. "The long-term philosophy of Niantic Spatial is that we can solve these hard problems of localization, reconstruction, and semantics by using very concentrated places to train models and then use much more broadly available data at lower resolution to be able to localize, visualize, and understand from 'bad' data." 
The 30 billion Pokémon Go images aren't just a map: They are a master key that unlocks the potential of how to create a real-world, real-time map. The player scans teach the model what precision looks like -- it's so precise, in fact, that it can even signal when the input is imperfect. It's a strategy that positions Niantic Spatial less as a gaming company that pivoted and more as the most ambitious mapping operation ever assembled -- one that was funded entirely by its own users' enthusiasm for catching digital creatures. Niantic Spatial's Visual Positioning System, or VPS, solves a problem that has quietly stunted the autonomous delivery industry. GPS, the backbone of most navigation systems, doesn't fare that well in dense urban environments, where tall buildings interfere with satellite signals. For a delivery robot that needs to drop food at a precise doorstep, being several feet off means unhappy customers complaining their burger is cold -- or in their neighbor's tummy. Instead, the VPS bypasses satellites entirely, comparing live camera feeds from the robot against its vast image database to determine position in real time. "The model will work in real time, taking in images from the robot and comparing them to both publicly available as well as proprietary datasets we've collected to determine the robot's global position and heading," a Niantic Spatial spokesperson told Fortune in a statement. The company knew where this tech performs best: "Niantic Spatial's VPS is particularly resilient in urban canyons where GPS performs badly." "Our initial VPS was built using scans that users choose to take in games -- but no single source defines the model," the Niantic Spatial spokesperson said. Player participation was always opt-in: users had to actively choose to submit a short video scan of a specific public landmark. Today, the model increasingly learns from the data Niantic Spatial's enterprise customers generate themselves. 
The underlying engine -- a large geospatial model, or LGM, trained on billions of posed images and hundreds of millions of real-world scans -- powers three capabilities: reconstructing spaces as navigable 3D models, localizing machines within those spaces, and understanding environments semantically. As CEO John Hanke wrote in a recent blog post: "For the past several years, we've been building a large geospatial model that acts as a living, breathing map of the world, one that is native to robots and AI." For Coco CEO Zach Rash, the problem is with robots' critical thinking skills (or lack thereof). "Robots don't have the same intuition yet as a human, where a human can understand, 'My GPS isn't really working, but I understand that's probably the right place to go,'" Rash told Fortune. "We need the robot to have that sort of intuition." "When we go into really dense areas with high rises, that's where the VPS solution can be really helpful," Rash said. "Our GPS and our existing solutions might fail in that sort of environment." The stakes, he noted, are felt by customers at the very last moment of a delivery: "It is a terrible customer experience if the robot parks in the wrong place waiting to receive that order."
[4]
Turns out all your Pokémon Go data will be used to train robots
When Pokémon Go debuted in 2016, it became an overnight sensation. From London to New York, it felt as though everyone had installed Niantic's augmented reality Pokémon mobile app and had taken to the streets in a frenzied attempt to catch them all. While Niantic no longer owns Pokémon Go, which Scopely acquired in March 2025, the data collected by Niantic during those years is now being used to train robots. This news comes from Niantic Spatial's announcement of a new partnership with Coco Robotics, which has developed an urban robot designed to deliver food through complex urban landscapes. With this collaboration, Coco Robotics will be leaning on Niantic's expertise in "spatial AI and its Visual Positioning System (VPS)" to further improve Coco Robotics' titular delivery robot, Coco: a fleet of around 1,000 flight-case-size robots built to carry up to eight extra-large pizzas or four grocery bags, deployed in Los Angeles, Chicago, Jersey City, Miami, and Helsinki. One of the biggest challenges Coco faces is that the GPS signal can be weak in cities where radio waves bounce off big buildings. "The promise of last-mile robotics is immense, but the reality of navigating chaotic city streets is one of the hardest engineering challenges," said John Hanke, CEO of Niantic Spatial, an AI offshoot that Niantic founded in May 2025, via the company's blog. "We are thrilled to be working with Coco Robotics as our first robotics partner and deploying spatial intelligence to help solve these challenges head-on." Hanke also added that "It turns out that getting Pikachu to realistically run around and getting Coco's robot to safely and accurately move through the world is actually the same problem." All the data collected by people playing Pokémon Go and its previous augmented reality game, Ingress, is now being used to build an accurate model of the cities that Coco has to navigate. 
With Niantic's VPS system, Pokémon Go can determine a player's location in the world from their surroundings rather than relying on a player's GPS location. By having players use their phones at different angles, Niantic was able to scan real-world locations and landmarks, with players gathering the data they needed to ensure better accuracy across a range of conditions, such as height, angle, and weather. In 2020, Pokémon Go added a feature called "Field Research," which rewarded players for taking photos and scans of their surroundings in exchange for items and rare Pokémon. Whether they understood the implications or not (Niantic has always been open about building datasets), players helped feed Niantic data that was later used to train fast-food delivery robots. While Niantic never hid that it was collecting data, it's not hard to imagine that some Pokémon fans won't be too happy about that data being used by an AI company to train robots. Hopefully, the Coco fleet will at least have a better sense of direction than Leon from the Pokémon franchise.
[5]
Pokémon Go Players Helped Map the World -- Now That Data Is Training Delivery Robots - Decrypt
Niantic says earlier versions of the system incorporated optional scans submitted by players through its games. The millions of players who spent years scanning landmarks while playing Pokémon Go helped build the mapping technology now guiding delivery robots through city streets, according to a recent report by MIT Technology Review. The news comes as AI and robotics developers work to give robots a more accurate sense of their surroundings so they can move through cities without relying solely on GPS. In February, Niantic Spatial partnered with Santa Monica, CA-based Coco Robotics to provide navigation technology for the company's autonomous delivery machines. Pokémon Go, released in 2016, sent players into real-world locations to catch digital creatures and interact with landmarks through their phone cameras. Players could also submit optional scans of public landmarks such as statues or buildings to improve the system's spatial mapping. That mapping technology now powers San Francisco-based Niantic Spatial's Visual Positioning System, which determines location by analyzing nearby physical landmarks. In May 2025, Niantic Spatial spun off from Niantic Inc. to become its own company. "Our initial VPS was built using scans that users choose to take in games -- but no single source defines the model," a Niantic spokesperson told Decrypt. "What makes our approach distinctive is the combination of scale and ground-level detail -- and increasingly, the data our customers are generating is what drives accuracy in the environments that matter most to them." Founded in 2020, Coco Robotics operates small autonomous robots that deliver food and retail orders across city neighborhoods in Los Angeles, Chicago, Miami, and Helsinki. Due to increased traffic, construction, and other hazards, robotics companies are increasingly exploring vision-based positioning systems to complement GPS navigation.
With GPS, signals can bounce off buildings or disappear entirely in narrow streets, making precise navigation difficult for autonomous machines. VPS tackles this by comparing camera images with detailed visual maps of the environment and can provide much more reliable location data in GPS‑challenged conditions. Critics, however, argue that the dataset behind Niantic's spatial AI was built by players who may not have realized how their scans could be used. "143 million people thought they were catching Pokémon," one user wrote on X. "They were actually building one of the largest real-world visual datasets in AI history." "The killer move wasn't the map, it was the incentive design," another wrote. "Pokémon Go turned millions of players into unpaid edge-case hunters and made the data exhaust feel like play." Despite these concerns, Niantic reiterated that participation in scanning was voluntary. "Players could choose to submit anonymized scans of public places to help improve VPS," the spokesperson said. "This scanning was and remains entirely optional," they said, adding that scans are not connected to player accounts.
[6]
Fact check: Are Pokémon GO players unwittingly helping to train AI?
The hit mobile game Pokémon GO has come under scrutiny following claims that images captured within the app may have been used to train AI systems developed by its creator, without players' consent. However, these images were not collected entirely without users' awareness. Following its launch in 2016, Pokémon Go quickly became a phenomenon in Europe and around the world, turning streets of Brussels, Paris and Rome into augmented reality playgrounds where players could hunt virtual creatures such as Pikachu, Dragonite or Eevee. The app is still popular today (with more than 100 million players in 2024, according to Scopely, parent company of game developer Niantic), generating headlines and, in some instances, dubious claims online. According to MIT Technology Review, Niantic's AI-focused division, Niantic Spatial, has used images collected through gameplay to help train its systems, which are designed to build detailed 3D maps of real-world environments. Posts on X attracting millions of views have gone further, suggesting that Niantic, unbeknownst to unwitting players, may be using their Sunday strolls to capture visual data snapped by users to develop visual navigation systems for delivery robots. However, the use of this data was not entirely carried out without players' knowledge, nor was it simply gathered as they wandered the streets in search of rare Pokémon. While Pokémon Go has used augmented reality (AR) since its launch to bring the Pokémon universe into the real world, it was only in 2020 that Niantic introduced dedicated AR mapping features. This function allows players to scan real-world locations and objects by moving around them while their smartphone camera records visual data. Crucially, this feature is not available to all players from the outset. It is unlocked only once users reach level 20 in the game. This means images are not automatically captured in the background as players move around. 
Instead, users must actively choose to engage with the feature. Niantic told Euronews' fact-checking team, The Cube, that players needed to choose to submit scans and videos of public locations anonymously to help improve its Niantic Spatial Visual Positioning System (VPS). The company maintains that participation is entirely optional, requiring users to deliberately select and scan specific landmarks, such as statues or notable features. The Cube tested the game and found that when a smartphone camera is pointed at a statue in Brussels' Parc du Cinquantenaire, a message appears stating that users will contribute to the development of augmented reality mapping technology and that their data will be shared with a third-party service. The message adds that the data collected is used to create 3D models of real-world locations and to support the development of the technology and related services. This process is outlined in Niantic's Terms of Service under a section titled "Rights Granted by You - AR Content". The developer states that by choosing to use the AR scanning feature, users grant Niantic a non-exclusive right to use the collected images to improve its services. MIT Technology Review has reported that Niantic Spatial is actively using images collected from Pokémon GO players to develop its latest products. The company told The Cube that it has trained more than 50 million neural networks to date, based on around 30 billion images. Niantic has developed a Visual Positioning System (VPS), which it says can deliver "precise, vision-based positioning and orientation anywhere in the world, including places where GPS is unavailable or unreliable." The technology has effectively allowed the company to build a highly detailed 3D model of the real world. However, Niantic Spatial does not rely solely on augmented reality data from Pokémon GO. 
The company also states on its website that it incorporates spatial data from other sources, including robots, drones, and satellites. In early March, Niantic announced a partnership with Coco Robotics, an urban robot delivery platform, to deploy its spatial AI technology and VPS at scale. Coco Robotics operates robots capable of delivering fresh groceries, electronics, and hot meals in cities including Los Angeles, Chicago, Jersey City, Miami, and Helsinki. Since 2018, the company has partnered with DashMart, an online delivery platform. The company has now introduced a new generation of more robust delivery robots designed to withstand the challenges of urban streets. However, these robots have historically relied on GPS, which often provides limited accuracy in dense city environments. This is where Niantic Spatial's technology comes in. The collaboration aims to integrate Niantic's spatial mapping and VPS into autonomous delivery robots, enabling them to navigate complex urban landscapes more effectively. By leveraging detailed 3D maps and vision-based positioning, the robots can move with greater precision through city streets when delivering items directly to customers.
[7]
If you're still playing Pokémon Go, you're helping train data-gobbling GPS AI for Niantic Spatial
Everyone in tech seems to be into AI these days, and developer Niantic doesn't want to be left behind. With the help of images captured through the Pokémon Go app, the company is helping its spinout Niantic Spatial endeavour, which focuses on "geospatial AI" and digital maps of the physical world. Niantic Spatial's long-term goal is to enable machines, robots, and AR glasses to understand, navigate, and interact with the physical world with centimeter-level precision.

Via GamesRadar, we've learned about how Niantic is using images collected through the Pokémon Go app to perfect and reshape digital maps of our surroundings. As always, the payment for free-to-play games and apps isn't immediately obvious, and before you start complaining about being misled, Niantic's terms and conditions do state that images "are banked as mapping data." And per this new report, it seems we now know precisely what this data is being used for.

MIT Technology Review's article on Niantic's AI spinout, which includes chatter with CTO Brian McClendon, is quite revealing. It elaborates on how Niantic Spatial is collaborating with companies like Coco Robotics, "a startup that deploys last-mile delivery robots in a number of cities across the US and Europe." The partnership started when "everybody thought that AR was the future." While that didn't pan out, it doesn't mean AR is useless. "Niantic Spatial has trained its model on 30 billion images captured in urban environments." The images themselves, together with precise metadata, let the system analyse recognisable environments, landmarks, iconic buildings, and whatnot to better pinpoint when and where exactly the mobile phones took them. "Maps are not only becoming more detailed; they are being used more and more by machines," says McClendon. Whereas other companies are fighting over LLMs and the (re)creation of information from scratch, Niantic is moving in a different direction.
"I'm very focused on trying to re-create the real world," concluded McClendon. This may come as a surprise to some users who did not read the small print, and who will now learn that the data from their Pokémon adventures could be used to create a Large Geospatial Model with applications for companies such as Amazon, DoorDash, or even the military industrial complex.
[8]
Pokémon Go Maker Used Billions Of Images To Train An AI Map
Pokémon Go players have long suspected that developer Niantic was selling the map data those players were creating in the course of playing the augmented reality monster catcher. It turns out they were right, and now we know that one of the clients Niantic is selling this information to is using it to train AI map models used by delivery robots. Not even Pikachu is safe from contributing to the AI dystopia.

Niantic Spatial, the AI-focused offshoot of the Pokémon Go company that was formed last year when Niantic was acquired by Saudi-owned Scopely, has revealed it has used Pokémon Go's map data to create a map model that can pinpoint a person's location within centimeters. This would be used by delivery robot companies like Coco Robotics for more accurate navigation in places where GPS is less reliable, like cities with higher rates of signal interference. Niantic Spatial has been using data collected from Pokémon Go to build a visual positioning system that uses images and videos instead of just coordinates from a GPS. The model has been trained on over 30 billion images captured in urban environments, specifically around hot spots like gyms where players would have taken photos from many different angles and at different times of day.

"We had a million-plus locations around the world where we can locate you precisely," says Niantic Spatial CTO Brian McClendon. "We know where you're standing within several centimeters of accuracy and, most importantly, where you're looking." John Hanke, CEO of Niantic Spatial, says partnering with Coco Robotics is the beginning of a much larger vision to create a virtual simulation of the world that changes as the world does, and gathers more map data from more robots using the system.
We like to joke that Pokémon Go's first summer was basically the closest the world ever came to peace, but even those good memories must be soured by the capitalist hellscape, as it turns out everyone was unknowingly doing unpaid labor for an AI company creating something that sounds like a tool for a surveillance state. The foregrounding of the use of this data to help delivery robots makes the whole thing sound like it could be a net good for people who have had trouble with getting their lunch delivered, but it fails to mask the more sinister implications of a company creating a hyper-detailed map of the world and selling it to anyone it pleases.
[9]
With over 30 billion images logged, Pokemon Go is using players' activity to train its spinout AI company to help GPS pinpointing: "I'm very focused on trying to re-create the real world"
Pokemon Go players are helping delivery robots reach consumers' doors, as Niantic Spatial says it's using images captured on the app to improve location pinpointing in partnership with companies like Coco Robotics. Although Pokemon Go makes it clear in-game that it's collecting AR mapping data, with a notification prompting players to acknowledge that they'll help add to this pool for any given location, AI company Niantic Spatial - a spinout from developer and publisher Niantic - is apparently tapping into this collection now to... help delivery robots? Speaking with MIT Technology Review, CTO Brian McClendon reveals as much. Niantic Spatial's latest model is one that can reportedly pinpoint somebody's location on a map to the dot - we're talking a few centimeters - using images of landmarks, like buildings, in the vicinity. The company wants to use this technology to help robots, like those deployed by Coco Robotics, navigate and deliver across Europe and the United States with greater precision, where GPS might be less reliable. "Everybody thought that AR was the future, that AR glasses were coming, and then robots became the audience," as McClendon states. "The urban canyon is the worst place in the world for GPS. If you look at that blue dot on your phone, you'll often see it drift 50 meters, which puts you on a different block going a different direction on the wrong side of the street." That's where Niantic Spatial's new tech would help. How does it work, though? Well, for the last few years, the AI company has been using the aforementioned data from Pokemon Go to build an accurate visual positioning system. As Niantic Spatial CEO John Hanke says, "It turns out that getting Pikachu to realistically run around and getting Coco's robot to safely and accurately move through the world is actually the same problem." Who would've thought? 
Niantic Spatial has trained this system on some 30 billion images snapped by players in urban environments, the sorts of places that McClendon calls "the worst" for GPS. "We had a million-plus locations around the world where we can locate you precisely," he explains. "We know where you're standing within several centimeters of accuracy and, most importantly, where you're looking." Sounds... ominous, but, hey, it works.

The company has even bigger aspirations, too: if the map-making improvements keep rolling in, it'll capture everything. "We're not there yet, but we want to be there," McClendon concludes. "I'm very focused on trying to re-create the real world." That's one big task. It's certainly a unique way to go about things, too, with Charizard and Pikachu captures helping to train advanced AI mapping tech.

It's important to note that this news has sparked fears that Niantic was using players who were "unknowingly," as one viral online post puts it, feeding AI tech with personal data, but this doesn't seem to be accurate. As mentioned earlier, Pokemon Go makes it clear that images are banked as mapping data, a fact that has been public information for quite some time now. The delivery robot side of things, however, is new.
Niantic Spatial has transformed a decade of Pokémon Go player data into a precise navigation system for autonomous delivery robots. The company trained AI models on 30 billion augmented reality images captured by players, creating a Visual Positioning System that guides Coco Robotics' fleet of roughly 1,000 delivery robots through dense cities where GPS signals are unreliable.
When Pokémon Go launched in 2016, 500 million people installed the app within 60 days, taking to streets worldwide to catch digital creatures [2]. What players didn't fully realize was that their gameplay was contributing to one of the largest crowdsourced datasets in history. Over the past decade, Niantic collected approximately 30 billion augmented reality images from Pokémon Go and earlier titles like Ingress, capturing urban landmarks, street corners, storefronts, and building facades across nearly every major city on the planet [1]. Each image came paired with rich metadata, including latitude, longitude, camera orientation, device pose, and motion data, creating what amounts to a multi-view 3D sampling of urban environments [1].
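The kind of per-image record described above can be pictured as a simple data type. This is an illustrative sketch only: the type name, fields, and the naive bounding-box "hot spot" query are assumptions, not Niantic's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoTaggedFrame:
    """One crowdsourced AR frame plus the metadata that makes it useful for mapping."""
    image_id: str
    lat: float          # WGS84 latitude, degrees
    lon: float          # WGS84 longitude, degrees
    heading_deg: float  # compass direction the camera was facing
    pitch_deg: float    # camera tilt relative to the horizon
    captured_at: float  # Unix timestamp; varied times of day aid robustness

def frames_near(frames, lat, lon, radius_deg=0.0005):
    """Naive hot-spot query: frames whose geotag falls inside a small lat/lon box."""
    return [f for f in frames
            if abs(f.lat - lat) <= radius_deg and abs(f.lon - lon) <= radius_deg]

frames = [
    GeoTaggedFrame("a", 34.0522, -118.2437, 90.0, 0.0, 1.7e9),
    GeoTaggedFrame("b", 34.0523, -118.2436, 270.0, -5.0, 1.7e9),
    GeoTaggedFrame("c", 41.8781, -87.6298, 180.0, 0.0, 1.7e9),  # Chicago, outside the box
]
hot_spot = frames_near(frames, 34.0522, -118.2437)
print([f.image_id for f in hot_spot])  # → ['a', 'b']
```

Frames "a" and "b" view the same spot from opposite headings; it is exactly that multi-view clustering that makes the data suitable for 3D reconstruction.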
Niantic Spatial, an AI spinout formed in 2025 when Scopely acquired Pokémon Go, has now commercialized this vast repository of player data [2]. The company trained its AI models on images heavily clustered around more than a million hot-spot locations, photographed from many angles, at different times of day, and under varied weather conditions [1]. Brian McClendon, Niantic Spatial's chief technology officer and one of the original creators of Google Earth, describes the approach as using high-quality ground-truth training data from players to solve the hard problems of localization, reconstruction, and semantics [3].

The technical challenge Niantic Spatial addresses stems from fundamental GPS limitations in dense urban settings. Satellite signals degrade badly in cities, with position estimates drifting by tens of meters as radio signals bounce off glass and concrete [1]. For delivery robots that must hit precise pickup and drop-off points, this level of error can place a unit on the wrong block or even the wrong side of the street [1].
Niantic Spatial's Visual Positioning System (VPS) offers an alternative by localizing devices based on what they see rather than relying on satellite signals alone [1]. The system can locate devices to within a few centimeters using only camera input and map context [2]. Because each training frame is tied to a centimeter-scale pose estimate, the AI models can infer an exact location and orientation from a handful of current images, even in areas less thoroughly covered than the original hot spots [1].
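The retrieve-then-estimate idea behind visual positioning in general can be sketched as follows. This is a toy illustration, not Niantic's pipeline: the descriptors, map entries, and averaging step are all assumptions standing in for real image features and geometric refinement.

```python
import math

# Toy "map": visual descriptors (3-number stand-ins for real image features)
# paired with the centimeter-scale pose each training frame was tagged with.
MAP_DB = [
    ((0.9, 0.1, 0.2), (12.00, 5.00, 90.0)),   # pose = (x_m, y_m, heading_deg)
    ((0.8, 0.2, 0.1), (12.10, 5.05, 92.0)),
    ((0.1, 0.9, 0.8), (40.00, 22.00, 180.0)),
]

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def localize(query_descriptor, k=2):
    """Retrieve the k visually closest map frames and average their poses.

    Real VPS pipelines refine this guess geometrically (e.g. solving for camera
    pose from 2D-3D correspondences); the averaging here only illustrates that
    position comes from matched imagery rather than radio signals.
    """
    nearest = sorted(MAP_DB, key=lambda entry: _dist(entry[0], query_descriptor))[:k]
    poses = [pose for _, pose in nearest]
    return tuple(sum(vals) / k for vals in zip(*poses))

x, y, heading = localize((0.85, 0.15, 0.15))
print(f"estimated pose: x={x:.2f} m, y={y:.2f} m, heading={heading:.1f} deg")
```

The query descriptor matches the two co-located map frames, so the estimate lands between their poses; the distant third entry is ignored.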
Coco Robotics has become the first large-scale deployment partner for Niantic Spatial's technology. The company operates roughly 1,000 sidewalk robots across Los Angeles, Chicago, Miami, Jersey City, and Helsinki, and the fleet has already logged hundreds of thousands of deliveries and more than a million miles [1]. These flight-case-sized delivery robots travel at about five miles per hour and can carry loads ranging from multiple extra-large pizzas to several grocery bags [1].
Zach Rash, co-founder and CEO of Coco Robotics, explains that robots lack the intuition humans fall back on when GPS fails. "Robots don't have the same intuition yet as a human, where a human can understand, 'My GPS isn't really working, but I understand that's probably the right place to go,'" Rash told Fortune [3]. Each Coco unit carries four hip-height cameras that look in all directions, fusing GPS with camera-based localization from Niantic Spatial's Large Geospatial Model [1]. John Hanke, CEO of Niantic Spatial, noted that "getting Pikachu to realistically run around and getting Coco's robot to safely and accurately move through the world is actually the same problem" [4].
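Fusing a noisy GPS fix with a precise camera-based fix can be sketched as inverse-variance weighting. This is a deliberate simplification of the estimators real robot stacks use (Kalman filters, factor graphs), and every number below is illustrative.

```python
def fuse(gps_pos, gps_sigma_m, vps_pos, vps_sigma_m):
    """Inverse-variance weighted fusion of two 2D position estimates.

    Shows why a centimeter-grade visual fix dominates a GPS fix that may be
    tens of meters off in an urban canyon: the weight of each source is
    1/sigma^2, so the far more certain source wins.
    """
    w_gps = 1.0 / gps_sigma_m ** 2
    w_vps = 1.0 / vps_sigma_m ** 2
    fused = tuple((w_gps * g + w_vps * v) / (w_gps + w_vps)
                  for g, v in zip(gps_pos, vps_pos))
    fused_sigma = (1.0 / (w_gps + w_vps)) ** 0.5
    return fused, fused_sigma

# GPS places the robot 30 m away from where the cameras place it.
gps_fix = (130.0, 45.0)   # meters in a local frame, sigma ~15 m (urban canyon)
vps_fix = (100.0, 45.0)   # camera-based fix, sigma ~0.05 m (centimeters)
pos, sigma = fuse(gps_fix, 15.0, vps_fix, 0.05)
print(pos, sigma)  # fused estimate hugs the VPS fix
```

With these weights the fused x lands within a centimeter of the visual fix, which is exactly the behavior a delivery robot needs at a doorstep.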
Niantic Spatial characterizes its geospatial data as a "living map": a virtual representation of the world that constantly updates as machines move through it [1]. As Coco Robotics' delivery robots and future partners traverse sidewalks and streets, their sensors contribute fresh observations that refine and extend the underlying maps [1]. The company aims to expose this global, shared geospatial model through an API to any robot, phone, or headset that needs precise localization [1].
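A map that absorbs fresh robot observations without being whipsawed by any single pass can be sketched as an exponential moving average per map location. This is purely illustrative; the class, the alpha parameter, and the scalar "value" stand in for whatever representation such a system actually stores.

```python
class LivingMapCell:
    """One map location whose stored observation decays toward fresh sensor data."""

    def __init__(self, value, alpha=0.3):
        self.value = value  # e.g. a landmark descriptor or occupancy score
        self.alpha = alpha  # how strongly one new observation overrides the old map

    def observe(self, fresh_value):
        # Exponential moving average: the map stays stable but tracks change.
        self.value = (1 - self.alpha) * self.value + self.alpha * fresh_value
        return self.value

cell = LivingMapCell(value=0.0)
for reading in [1.0, 1.0, 1.0]:  # three robots pass by and report the same change
    cell.observe(reading)
print(round(cell.value, 3))  # → 0.657
```

Three consistent observations move the cell most of the way to the new state of the world, while a single outlier reading would shift it only modestly.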
This approach differs from that of competitors like Starship Technologies, which use their own sensors to build local 3D maps of edges, poles, and building outlines for subsequent runs in specific areas [1]. Niantic Spatial's bet is that the sheer volume and diversity of its crowdsourced dataset give it an advantage over rivals that build maps primarily with their own sensor fleets [1].
While Niantic maintains that landmark scans were always optional, with players choosing to submit short video scans of specific public landmarks in exchange for items and rare Pokémon through features like "Field Research" added in 2020, critics argue that many players didn't fully understand how their data would be used [4]. "143 million people thought they were catching Pokémon," one user wrote on social media. "They were actually building one of the largest real-world visual datasets in AI history" [5].
A Niantic spokesperson emphasized that "players could choose to submit anonymized scans of public places to help improve VPS. This scanning was and remains entirely optional," adding that scans are not connected to player accounts [5]. The company also noted that while its initial VPS incorporated optional player scans, data from enterprise customers like Coco Robotics increasingly drives accuracy in the environments that matter most for commercial applications [5].

For last-mile delivery operations and autonomous robotics companies struggling with navigation in complex urban environments, Niantic Spatial's technology represents a potential breakthrough. The question now is whether other robotics firms will adopt this spatial AI approach, and how regulations around data collection and usage will evolve as AI models increasingly rely on crowdsourced information gathered through consumer applications.
Summarized by Navi