Curated by THEOUTPOST
On Fri, 4 Oct, 12:05 AM UTC
2 Sources
[1]
3 ways visual search helps you shop
Google Lens helps people search what they see -- with just a quick photo you're scrolling through visual matches for what caught your eye. Lens is used for nearly 20 billion visual searches every month -- we see shoppers relying on visual search for everything from exploring outfit inspo and living room decor ideas to identifying an item sitting in the background of a video. In fact, today 20 percent of all Google Lens searches are shopping-related. Here are a few ways we're seeing shoppers get the most out of Lens -- plus a handy new update that makes product research with Lens as easy as a snap(shot).
In those moments of "What is that? I want it!" Lens merges the power of Google's AI and more than 45 billion products in the Shopping Graph to not only help you discover visual matches of what you love, but also identify specific products and details about them before you buy. Starting today, Lens will prominently display key information when it identifies the product in your photo.
Let's say you're at the airport and a backpack someone is wearing catches your eye -- you love how it looks and want to learn more about it. Rather than trying to figure out the exact product name and typing it into Search, you can just tap the Lens icon in the search bar and snap a photo (or upload one from your gallery) to instantly see details like price across retailers, current deals, product reviews and where to buy it -- all in one place, powered by the Shopping Graph.
[2]
Google's Visual Search Can Now Answer Even More Complex Questions
Launched in 2017, Google Lens processes 20 billion visual searches a month. Now it will work with video and voice, too.
When Google Lens was introduced in 2017, the search feature accomplished a feat that not too long ago would have seemed like the stuff of science fiction: Point your phone's camera at an object and Google Lens can identify it, show some context, maybe even let you buy it. It was a new way of searching, one that didn't involve awkwardly typing out descriptions of things you were seeing in front of you. Lens also demonstrated how Google planned to use its machine learning and AI tools to ensure its search engine shows up on every possible surface.
As Google increasingly uses its foundational generative AI models to generate summaries of information in response to text searches, Google Lens' visual search has been evolving, too. And now the company says Lens, which powers around 20 billion searches per month, is going to support even more ways to search, including video and multimodal searches.
Another tweak to Lens means even more context for shopping will show up in results. Shopping is, unsurprisingly, one of the key use cases for Lens; Amazon and Pinterest also have visual search tools designed to fuel more buying. Search for your friend's sneakers in the old Google Lens, and you might have been shown a carousel of similar items. In the updated version of Lens, Google says it will show more direct links for purchasing, customer reviews, publisher reviews, and comparative shopping tools.
Lens search is now multimodal, a hot word in AI these days, which means people can now search with a combination of video, images, and voice inputs. Instead of pointing their smartphone camera at an object, tapping the focus point on the screen, and waiting for the Lens app to drum up results, users can point the lens and use voice commands at the same time, for example, "What kind of clouds are those?" or "What brand of sneakers are those and where can I buy them?"
Lens will also start working over real-time video capture, taking the tool a step beyond identifying objects in still images. If you have a broken record player or see a flashing light on a malfunctioning appliance at home, you could snap a quick video through Lens and, through a generative AI overview, see tips on how to repair the item. First announced at I/O, this feature is considered experimental and is available only to people who have opted into Google's search labs, says Rajan Patel, an 18-year Googler and a cofounder of Lens. The other Google Lens features, voice mode and expanded shopping, are rolling out more broadly.
The "video understanding" feature, as Google calls it, is intriguing for a few reasons. While it currently works with video captured in real time, if or when Google expands it to captured videos, entire repositories of videos -- whether in a person's own camera roll or in a gargantuan database like Google -- could potentially become taggable and overwhelmingly shoppable. The second consideration is that this Lens feature shares some characteristics with Google's Project Astra, which is expected to be available later this year. Astra, like Lens, uses multimodal inputs to interpret the world around you through your phone. As part of an Astra demo this spring, the company showed off a pair of prototype smart glasses.
Separately, Meta just made a splash with its long-term vision for our augmented reality future, which involves mere mortals wearing dorky glasses that can smartly interpret the world around them and show them holographic interfaces. Google, of course, already tried to realize this future with Google Glass (which uses fundamentally different technology than that of Meta's latest pitch). Are Lens' new features, coupled with Astra, a natural segue to a new kind of smart glasses?
Google enhances its Lens visual search tool with multimodal capabilities, including video and voice inputs, while improving shopping features and processing 20 billion visual searches monthly.
Google Lens, the AI-powered visual search tool launched in 2017, has become an integral part of the search giant's ecosystem, processing a staggering 20 billion visual searches every month [1][2]. This technology allows users to search for information about objects they see in the real world simply by pointing their smartphone camera at them.
In a significant update, Google has announced that Lens is evolving to support multimodal searches, combining video, images, and voice inputs [2]. This enhancement allows users to interact with Lens in more natural and intuitive ways. For example, users can now point their camera at an object and simultaneously use voice commands like "What kind of clouds are those?" or "What brand of sneakers are those and where can I buy them?"
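To make that interaction concrete, here is a minimal, purely illustrative Python sketch of how such a combined query might be represented: an image (or short clip) paired with the transcribed voice question. The class and field names are hypothetical and do not correspond to any public Google Lens API.

from dataclasses import dataclass, field

# Hypothetical structures for illustration only; Google Lens does not
# expose a public API shaped like this.
@dataclass
class MultimodalQuery:
    image_bytes: bytes                                        # still frame from the camera
    voice_transcript: str                                     # the spoken question, transcribed
    video_frames: list[bytes] = field(default_factory=list)   # optional real-time clip

# Pair what the camera sees with what the user asks out loud.
query = MultimodalQuery(
    image_bytes=b"<jpeg bytes from the camera>",
    voice_transcript="What brand of sneakers are those and where can I buy them?",
)

The point is simply that the image and the spoken question travel together as a single search request, rather than as two separate lookups.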
Google is also introducing an experimental feature called "video understanding" [2]. This capability enables Lens to work with real-time video capture, taking the tool beyond static image identification. Users could use this feature to troubleshoot a malfunctioning appliance or get repair tips by capturing a quick video through Lens.
With 20% of all Google Lens searches being shopping-related, the company is doubling down on improving the shopping experience [1]. The latest update prominently displays key product information when Lens identifies an item in a photo. Users can now see details such as price comparisons across retailers, current deals, product reviews, and purchase options – all powered by Google's Shopping Graph, which contains over 45 billion products [1].
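As a rough illustration of the kind of structured result described above, here is a hypothetical Python sketch of a product match with offers from several retailers. The schema and the sample values are invented for illustration and are not the actual Shopping Graph data model.

from dataclasses import dataclass

# Invented, illustrative schema; not the real Shopping Graph format.
@dataclass
class RetailerOffer:
    retailer: str
    price_usd: float
    on_sale: bool
    url: str

@dataclass
class ProductMatch:
    name: str
    average_rating: float          # aggregated from customer reviews
    review_count: int
    offers: list[RetailerOffer]    # the cross-retailer price comparison

backpack = ProductMatch(
    name="Example 25L travel backpack",   # placeholder product
    average_rating=4.6,
    review_count=1280,
    offers=[
        RetailerOffer("Retailer A", 89.99, on_sale=True, url="https://example.com/a"),
        RetailerOffer("Retailer B", 99.00, on_sale=False, url="https://example.com/b"),
    ],
)

# Surfacing "price across retailers" then amounts to ranking these offers.
best_offer = min(backpack.offers, key=lambda offer: offer.price_usd)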
Google Lens leverages the company's advanced machine learning and AI tools to provide rich context and information about identified objects [2]. This aligns with Google's broader strategy of using generative AI models to enhance search results and provide more comprehensive summaries of information.
The evolution of Google Lens, particularly its video understanding feature, opens up intriguing possibilities for the future. There's potential for making vast video repositories taggable and shoppable [2]. Additionally, the multimodal capabilities of Lens share similarities with Google's Project Astra, hinting at possible applications in future augmented reality devices [2].
As Google Lens continues to evolve, it's reshaping how people interact with the world around them and how they shop. The tool's ability to seamlessly blend visual, audio, and now video inputs is pushing the boundaries of search technology and creating new opportunities for e-commerce integration [1][2]. With major players like Amazon and Pinterest also investing in visual search tools, this technology is poised to play an increasingly important role in the future of online shopping and information discovery.
Google has rolled out significant updates to its Lens app, including voice-activated video search capabilities and improved shopping features, leveraging AI technology to enhance user experience and product information retrieval.
13 Sources
Google has launched a new AI-powered feature for Google Lens, transforming it into an in-store shopping assistant. The update provides real-time product information, price comparisons, and customer reviews to help users make informed purchasing decisions.
11 Sources
Google enhances Lens with AI Overviews, making visual searches more informative without follow-up questions. The feature is expanding across Android and iOS platforms, with new shortcuts in Chrome and the Google app.
2 Sources
Google announces significant AI upgrades to its search engine, enabling voice-activated queries about images and videos, and introducing AI-organized search results. This move aims to simplify search and attract younger users, despite past challenges with AI-generated misinformation.
17 Sources
Google introduces new AI-driven tools for its Shopping platform, including virtual try-ons for clothing and makeup, and an image generation feature to help users find desired products.
10 Sources