Curated by THEOUTPOST
On Tue, 8 Apr, 12:03 AM UTC
23 Sources
[1]
Google's AI Mode search can now answer questions about images
Google started cramming AI features into search in 2024, but last month marked an escalation. With the release of AI Mode, Google previewed a future in which searching the web does not return a list of 10 blue links. Google says it's getting positive feedback on AI Mode from users, so it's forging ahead by adding multimodal functionality to its robotic results.

AI Mode relies on a custom version of the Gemini large language model (LLM) to produce results. Google confirms that this model now supports multimodal input, which means you can now show images to AI Mode when conducting a search. As this change rolls out, the search bar in AI Mode will gain a new button that lets you snap a photo or upload an image.

The updated Gemini model can interpret the content of images, but it gets a little help from Google Lens. Google notes that Lens can identify specific objects in the images you upload, passing that context along so AI Mode can make multiple sub-queries, a process known as a "query fan-out" technique. Google illustrates how this could work in the example below: the user shows AI Mode a few books and asks about similar titles. Lens identifies each individual title, allowing AI Mode to incorporate the specifics of the books into its response. This is key to the model's ability to suggest similar books and to refine its suggestions based on the user's follow-up questions.

Google sees AI Mode as a key way to maintain its role as the primary Internet directory. As the company has explained in the past, many people use traditional search to find answers to specific questions. For those people, AI Mode can be a quicker and more effective way to find what they need. Google now says its early telemetry from AI Mode shows that people are putting about twice as much text into their searches compared to traditional web search. Google frames this as a good thing, but it could also indicate that users feel they need to provide more context to the robot.
It's likely you still haven't seen AI Mode, even though it has been available for weeks. Google launched this feature exclusively for Google One AI Premium subscribers, and it needed to be enabled in Google Labs. Now, Google says it's expanding AI Mode to "millions more Labs users in the US" who aren't paying for AI features. While you must still opt-in, the day might be coming when AI Mode is an option for all searches. And perhaps sometime after that, it'll be the default way Google wants you to search the web.
[2]
Google's AI Mode now lets users ask complex questions about images | TechCrunch
Google is bringing multimodal search to AI Mode, its Google Search experiment that lets users ask complex, multi-part questions and follow-ups to dig deeper on a topic. Users who have access to AI Mode can now tap the feature to ask questions about photos they've uploaded or taken with their camera. The new image-analyzing functionality in AI Mode is powered by Google Lens' multimodal capabilities, Google said in a blog post on Monday. AI Mode can understand the entire scene in an image, including how objects relate to each other, as well as their materials, colors, shapes, and arrangement, according to Google. Using a technique called "query fan-out," AI Mode asks multiple questions about both the image and the objects shown in it, providing more detailed information than a traditional Google search. For example, you could snap a photo of your bookshelf and enter the query: "If I enjoyed these, what are some similar books that are highly rated?" AI Mode will identify each book and then provide a list of recommended books with links to learn more about and/or purchase them. AI Mode also lets you ask follow-up questions to narrow down your search, such as "I'm looking for a quick read, which one of these recommendations is the shortest?" As part of Monday's announcement, Google said it's making AI Mode available to millions more users who are enrolled in Labs, Google's home for experimental features and products. Prior to this, AI Mode was only available to Google One AI Premium subscribers. Launched last month, AI Mode looks to take on popular services like Perplexity and OpenAI's ChatGPT Search. Google has said that it plans to continue to refine the user experience and expand functionality in the feature.
[3]
Google's AI Mode can now see and search with images
Jess Weatherbed is a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews. Google is adding multimodal capabilities to its search-centric AI Mode chatbot that enable it to "see" and answer questions about images, as it expands access to AI Mode to "millions more" users. The update combines a custom version of Gemini AI with the company's Lens image recognition tech, allowing AI Mode Search users to take or upload a picture and receive a "rich, comprehensive response with links" about its contents. The multimodal update for AI Mode is available starting today and can be accessed in the Google app on Android and iOS. "AI Mode builds on our years of work on visual search and takes it a step further," says Robby Stein, VP of product for Google Search. "With Gemini's multimodal capabilities, AI Mode can understand the entire scene in an image, including the context of how objects relate to one another and their unique materials, colors, shapes, and arrangements." Google says the update uses a "fan-out technique" that issues multiple queries about the image it sees, and any objects within it, to provide responses that are "incredibly nuanced and contextually relevant." That allows it to do things like identify books that are displayed within an image, issue suggestions for similar titles with positive ratings, and answer questions to further curate recommendations. AI Mode for Search serves as Google's answer to Perplexity and ChatGPT Search, a chatbot-like experience that responds to inquiries with AI-generated summaries pulled from everything in Google's search index. AI Mode launched exclusively for Google One AI Premium subscribers last month, though only within Labs. Now, Google says it has started to make AI Mode available to "millions more" Labs users in the US, beyond just paying AI Premium subscribers.
[4]
Google Search's AI mode goes multimodal
Google's take on an AI search engine just got much better, and it's still free. Google has dominated the search engine market for decades. However, the rise of generative AI has presented a new form of competition: AI search engines. With its AI Mode, Google provides users with exactly that: an AI chatbot that understands and outputs conversational search queries. As you might have expected, it looks a lot less like a traditional Google search and a lot more like ChatGPT Search.

Google's AI Mode was met with such high interest that the feature went from being exclusive to Google One AI Premium users, who pay $20 monthly, to any Google account user via Google Labs. On Monday, Google updated it further, giving it multimodal capabilities and expanding access to millions more Labs users. Now, leveraging Google Lens, the feature that allows users to upload and learn more about images on Google Search, users can snap or upload a photo in AI Mode.

By using Lens in AI Mode, users get a much more in-depth visual search experience than with traditional search, because AI Mode can understand the entire scene of an image. Google explains that Lens can understand specifics about an image, such as how objects relate to each other, along with characteristics like the shapes, colors, and materials of the objects. Then, when a user issues a query, AI Mode is able to assess both the image as a whole and the objects within it, providing users with more information and more contextually accurate responses. Google's blog post gave the example of a user taking a photo of their bookshelf and asking for recommendations based on the books pictured. AI Mode was able to identify each book, then run a Search and output recommended books in a conversational response organized by the theme of the books.
The feature is still accessible via Google Labs, which is free to access. All you have to do is visit the site, log in with your Google account, and join the waitlist. You'll be alerted by email when you're taken off the waitlist. Once you have access, you can try out this new capability in AI Mode in the Google app for Android and iOS. This feature differs from Google's AI Overviews and the Gemini chatbot by combining real-time web-based answers with a conversational AI interface, offering the accuracy of a search engine and the interactivity of a chatbot. Its biggest competitors include OpenAI's ChatGPT Search and Perplexity AI. There are also many other AI search engines on the market that suit different needs, and you can find ZDNET's roundup here.
[6]
Google Expands Its Conversational 'AI Mode' Search Option
AI Mode is Google's attempt to compete with other AI chatbots that have added web-search capabilities. It's also a step beyond the AI Overview search results that Google introduced last May (and which promptly began serving up wildly inaccurate results sourced from places like sarcastic Reddit posts), because AI Mode operates in a conversational mode. For example, my query, "What is a good native groundcover to plant in shady ground in Northern Virginia?" yielded a list of plants native to the region. Following up with "Where can I buy these plants near DC?" got me a list of nurseries around here, some specializing in native plants and others that include them among their other plants for sale. Both sets of answers linked out to sources, but checking the accuracy of those results requires an extra step. AI Mode displays a link icon at the end of each data point in its results that, when clicked, shows a preview of the source page at the top right of the page. A second click will take you to that page. Google's documentation specifies that you also have to be using the latest version of its Chrome browser or Google's eponymous mobile app. But I had no problem using AI Mode in the latest version of Apple's Safari on a Mac mini. (There, I asked "How far can the stock market fall from Trump's tariffs," which yielded a non-reassuring answer that correctly cited Citigroup research findings of an average 22.1% drop in the S&P 500 stock index during post-1948 recessions.) Also today, Google announced that it's adding AI Mode to the Lens feature of its Google app for Android and iOS, leveraging the company's Gemini AI to discern the entirety of a scene and how things in it relate to each other. A video provided by Google PR shows a user pointing their phone at a set of books on a bookshelf that offer advice on living and thinking better, asking for reading recommendations along those lines and then asking for titles that would be a faster read. 
AI Mode's pick was Brené Brown's Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead, which from that subtitle alone seems like a timely read in this economy.
[7]
Google AI Mode rolls out to more testers with new image search feature
Google is bringing AI Mode to more people in the US. The company announced on Monday that it would make the new search tool, first launched at the start of last month, available to millions more Labs users across the country. For the uninitiated, AI Mode is a new dedicated tab within Search. It's essentially Google's answer to ChatGPT Search. It allows you to ask more complicated questions of Google, with a custom version of Gemini 2.0 doing the legwork to deliver a nuanced AI-generated response. Labs, meanwhile, is a beta program you can enroll your Google account in to gain access to new Search features before the company rolls them out to the public. In addition to bringing AI Mode to more people, Google is unlocking the tool's multimodal capabilities. Starting today, you can snap and upload images to AI Mode, allowing you to ask questions about what you see. The feature brings together AI Mode and Google's Lens technology. "With Gemini's multimodal capabilities, AI Mode can understand the entire scene in an image, including the context of how objects relate to one another and their unique materials, colors, shapes and arrangements," Google explains. "Using our query fan-out technique, AI Mode then issues multiple queries about the image as a whole and the objects within the image, accessing more breadth and depth of information than a traditional search on Google." AI Mode's new Lens integration is available through the Google app on Android and iOS.
[8]
This new AI Mode feature is like Google Lens on steroids
You can now use AI Mode in Search to ask questions about images, as Google has upgraded the feature with the powerful multimodal capabilities in Lens. The company announced this change in a recent blog post, highlighting that AI Mode can now "understand the entire scene in an image, including the context of how objects relate to one another and their unique materials, colors, shapes and arrangements." Thanks to this new visual search feature, you can now use AI Mode to ask complex questions about the image as a whole or the objects within the image. For example, you can use it to capture an image of books and then ask for similar recommendations. AI Mode will identify each book in the frame, issue multiple queries to learn about the books and deliver a list of recommended titles with links to help you learn more and make a purchase.
[9]
AI Mode takes the spotlight in the Google app's latest revamp
The app is now experimenting with prominent AI Mode placement, right in that top bar. Google Search has changed a lot over the years, and while in the past that's involved a lot of subtle refinement to the sort of results a given query would generate, over the past year or so we've seen Google make some of the most controversial and sweeping changes to Search yet. New AI-powered results have the potential to streamline access to information -- assuming that AI can get all the details right. The latest incarnation is Google's AI Mode, and it's already pretty impressive. If you haven't yet, you might be giving it a try soon, as Google starts experimenting with giving AI Mode some prominent placement in its Android app.
[10]
Google's AI Mode gains popular camera search as rollout expands to more users
Summary: Lens' multimodal capabilities are now integrated into Google's AI Mode, letting users interact with images and text seamlessly within conversations. AI Mode now delivers more "nuanced" and "contextually relevant" responses in the Google app on Android and iOS. Google is expanding AI Mode to millions more Labs users in the US, moving beyond just Google One AI Premium subscribers.

The advent of generative AI in Google Search didn't receive the kind of response Google would've liked, as many alleged that it made the search experience worse. However, with the introduction of Gemini 2.0 in AI Overviews, Google seems to have ironed out some issues, with the company claiming that Gemini 2.0-powered AI search can now produce higher-quality responses. Gemini 2.0 is also at the heart of Google Search's experimental "AI Mode" capability, released last month for Labs users in the US, bringing a more chatbot-like experience where you can ask questions and get detailed answers with images and links to original sources. Now, on top of that, you can enjoy the benefit of Lens' multimodal capabilities in AI Mode, which is also more accessible for users in the US.

What do Lens' multimodal capabilities in AI Mode mean for you? "Multimodal," in simple words, means multiple types of data. For example, on Google Lens, you can use your phone's camera to capture what you see around you, or upload images stored locally, to ask questions related to them and get answers in both visual and text forms. Depending on the prompt, it can recognize the text in an image and translate it to a different language, help you complete homework and solve a complicated math problem, or even help you buy items similar to those in the image by giving you a list of purchasing links.
Now that these powerful multimodal capabilities are expanding to AI Mode, you don't have to leave an important conversation in AI Mode and switch to the Google app's home page to access Lens; you can quickly understand things like how objects in an image relate to one another, how they're arranged, where you can buy similar ones, and more. As per Google, this makes search in AI Mode more "nuanced" and "contextually relevant" in the Google app on both Android and iOS.

Google is rolling out AI Mode to more people in the US. AI Mode was accessible to Google One AI Premium users in the US at launch, then expanded to free users a few days later, but only for people who joined the waitlist in Google Labs. If you've joined but haven't gotten access yet, it's worth checking now, as Google has taken more people off the waitlist to expand the service to "millions more Labs users" in the US. The AI Mode option appears on the left side of the All tab in the Google app and on the web, and that's the only way to access the tool for now. In other words, you need to come back to the home screen if you want to start afresh in AI Mode. This is undeniably not the most convenient approach, but Google has already started working to fix this by introducing a shortcut button in the AI Mode interface to quickly start a conversation. Like the Lens support, the AI Mode shortcut button is designed to save you hassle and make you more efficient in what you're doing.
[11]
Google could move the AI Mode icon front and center
Summary: Google is testing a new, more prominent placement for the AI Mode shortcut, moving it into the main search bar on Android by replacing the voice search and Google Lens icons. This change aims to make the Gemini 2.0-powered AI Mode, which offers enhanced search capabilities, more easily accessible to users who have opted into the experimental feature via Search Labs. Google might also be exploring an AI Mode widget shortcut for Android, similar to the new shortcut on iOS, to provide another quick way to launch the advanced search mode. AI Mode is currently available to users in the US through Search Labs.

Google's March-revealed experimental Search Labs feature, AI Mode, is essentially a new way to search online that expands on regular AI Overviews with more in-depth reasoning, thinking, and multimodal capabilities. Powered by a custom Gemini 2.0 version, the new mode has, until now, been accessible only via an "AI Mode" chip below the Search bar on the web and an AI Mode icon below the Search bar on the home screen on mobile. Now, soon after Google began porting Lens' capabilities over to AI Mode, the tech giant seems to be working on a simpler way for users to trigger the Gemini 2.0-powered search mode, one that replaces the familiar voice search and Google Lens icons within the search bar. Spotted by the folks over at 9to5Google, the change hasn't rolled out widely just yet. While the tweak doesn't significantly alter the Search experience, relocating AI Mode to a more intuitive position (within the search bar) is a smart move to promote the feature's usage.
It also shouldn't cut both ways: since the feature needs to be manually enabled via Search Labs, users who aren't interested in AI Mode won't see the UI tweak anyway.

Voice search and Lens relocated: old (left), new (right). For users who have manually enabled AI Mode, the voice search and Lens icons will now be situated right below the search bar, as seen in the second screenshot above. Once tapped, users will directly begin typing their questions into the "Ask AI Mode" field. Elsewhere, Android users might soon be able to add AI Mode as a Search widget shortcut. For reference, the functionality has already begun making its way to iOS, allowing users to hop right into the Gemini 2.0-powered search mode from their iPhones' home screen. When the functionality might land on Android is currently unclear.

AI Mode is currently exclusive to users in the US. At launch, the new search experience was locked behind a paid Google One AI Premium subscription, but Google subsequently expanded access to all. To try out AI Mode, you'll need to opt in via Search Labs.

Mobile:
1. Open the Google app on your smartphone.
2. Tap the Labs button (flask icon) on the top-left.
3. Navigate to the AI experiments section.
4. Tap "Turn on" under AI Mode.

Desktop:
1. Head to the Google Labs website.
2. Under New experiments/AI experiments, turn on AI Mode.
[12]
Google AI Mode adding multimodal Google Lens search
AI Mode entered testing a month ago, and Google is now expanding access to free Labs users, as well as adding Lens visual search. At launch, it was just available for Google One AI Premium subscribers. As we previously spotted, Google confirmed today that it's coming to "millions more Labs users in the U.S." who are not paying. Meanwhile, AI Mode is adding multimodal input and understanding that lets you take a new photo with Google Lens or upload an existing image. This lets you "easily ask complex questions about what you see." Behind the scenes, Google is leveraging Gemini's multimodal capabilities to "understand the entire scene in an image, including the context of how objects relate to one another and their unique materials, colors, shapes and arrangements." Per Google: "Lens precisely identifies each object in the image. Using our query fan-out technique, AI Mode then issues multiple queries about the image as a whole and the objects within the image, accessing more breadth and depth of information than a traditional search on Google. The result is a response that's incredibly nuanced and contextually relevant, so you can take the next step." In the example above, AI Mode "precisely identifies each book on the shelf and issues queries to learn about the books and similar recommendations that are highly rated." The end result is a "list of recommended books with links to learn more and purchase," and you can ask follow-up questions. Rolling out starting today, Google Lens in AI Mode is available on Android and iOS. Go to the AI Mode homepage for a new Lens icon in the bottom search field. This takes you to the usual Google Lens UI. As users long-press on the shutter button, they can speak their query. Google today also shared some usage patterns after a month of public testing. People say they like the "clean design, fast response time and ability to understand complex and nuanced questions."
AI Mode queries are said to be "twice as long as traditional Search queries on Google" (on average). It's being used for exploratory, open-ended questions, and "more complicated tasks -- like comparing two products, exploring how-tos and planning a trip."
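Google hasn't published implementation details for query fan-out, but the behavior described above (one sub-query about the scene as a whole, plus one per object Lens identifies, with the results merged into a single answer) can be sketched. The following Python snippet is purely illustrative; the `search` stub and every name in it are hypothetical stand-ins, not Google APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def search(query: str) -> list[str]:
    # Stand-in for a real search backend; returns one placeholder result.
    return [f"result for: {query}"]

def fan_out(scene: str, objects: list[str], user_question: str) -> list[str]:
    # One sub-query about the whole scene, plus one per detected object.
    sub_queries = [f"{user_question} (scene: {scene})"]
    sub_queries += [f"{user_question} (object: {obj})" for obj in objects]

    # Issue the sub-queries concurrently, then flatten the per-query results.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(search, sub_queries))
    return [r for results in result_lists for r in results]

hits = fan_out(
    scene="a bookshelf",
    objects=["Daring Greatly", "Atomic Habits"],
    user_question="similar highly rated books",
)
print(len(hits))  # 1 scene query + 2 object queries -> 3 results
```

In a real system the merge step would rank and deduplicate results rather than simply concatenate them, but the breadth-over-a-single-query idea is the same.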
[13]
Google tests new Search bar to access AI Mode on Android
As AI Mode testing continues, Google is trialing a new way to launch the generative experience from the Search bar on Android. Since launch, you've been able to start a new AI Mode chat from the shortcut underneath the Search bar at the top of the Google app. It lives alongside the buttons (Screenshots, Translate, and Homework filters) that launch Google Lens and Song Search. Google is now testing a prominent AI Mode circle directly in the Search field. Replacing the voice and Lens icons at the right, a tap lets you immediately start typing. The "Ask AI Mode" field last week gained Google Lens integration with the ability to upload from your gallery or take a new picture. This design replaces the colorful carousel with three shortcuts to start voice search, launch Google Lens, or jump directly to your gallery for a visual lookup. Right now, we're only seeing this design on one device. Free users in the US can now sign up for AI Mode via Search Labs. On iOS, you can now launch AI Mode from the customizable homescreen widget. That is presumably coming to the Android version, which recently picked up two new shortcuts for quick access to News and Saved (in light of Activity replacing it as a tab in the bottom bar).
[14]
Google just supercharged search with AI Mode and Lens integration -- what you need to know
In a significant step towards a more intuitive and visually driven search experience, Google is expanding access to its experimental AI Mode and equipping it with the power of Google Lens. These developments aim to provide users with a more comprehensive search experience, almost like a 'deep research' element within search so users get an even more intuitive and personal response -- fast. Initially available to Google One AI Premium subscribers, AI Mode is now being rolled out to millions of Labs users across the United States. This expansion allows a broader audience to benefit from AI Mode's ability to handle complex and nuanced queries. Users can engage in more exploratory and open-ended searches, such as comparing products, seeking how-to guidance, or planning trips. In addition to wider accessibility, Google is enhancing AI Mode by incorporating the multimodal capabilities of Google Lens. Users can now input queries using images either by uploading photos or utilizing their device's camera -- and receive AI-powered responses. This integration leverages a custom version of Google's Gemini model, enabling AI Mode to comprehend entire scenes within images, aiming to understand not just what's in an image, but the context surrounding it -- how objects relate, their materials, colors, and arrangements. According to Google, it uses a sophisticated "query fan-out" technique. In essence, when you provide an image, AI Mode intelligently generates multiple underlying queries about the scene and the objects within it, pulling information from a wider swath of the web than a single, simple search might capture. The goal? To deliver nuanced, context-aware answers with helpful links to explore further. Google has found that queries in AI Mode are, on average, twice as long as traditional Google searches. 
This suggests users are already turning to it for more complex and exploratory tasks -- comparing products in detail, mapping out how-to projects, or planning multi-faceted trips, rather than just quick fact lookups. In other words, once users are provided with a response, they are often inspired to dig deeper and learn more about the topic. While still technically an experiment within Labs, this move clearly signals Google's direction: weaving advanced AI more deeply into the fabric of search, making it more conversational, contextual, and now, capable of understanding the visual world around us. As millions more users gain access, Google will be gathering feedback to further refine this glimpse into the future of finding information. Users are being encouraged to explore these new features through the Google app on Android and iOS devices and share their experiences to aid in further enhancements. Have you tried it? Let us know what you think in the comments.
[15]
Google's AI Mode can explain what you're seeing even if you can't
Google is adding a new dimension to its experimental AI Mode by connecting Google Lens's visual abilities with Gemini. AI Mode is a part of Google Search that can break down complex topics, compare options, and suggest follow-ups. Now, that search includes uploaded images and photos taken on your smartphone. The result is a way to search through images the way you would text, but with much more complex and detailed answers than just putting a picture into reverse image search. You can literally snap a photo of a weird-looking kitchen tool and ask, "What is this, and how do I use it?" and get a helpful answer, complete with shopping links and YouTube demos. If you take a picture of a bookshelf, a plate of food, or the chaotic interior of your junk drawer, the AI won't just recognize individual objects; it will also explain their relationship to each other. You might get a suggestion of other dishes you can make with the same ingredients, whether your old phone charger is in the drawer, or what order you should read those books on the shelf. You can see how it works above. Essentially, the feature fires off multiple related questions in the background about the entire scene and each individual object. So when you upload a picture of your living room and ask how to redecorate it, you're not just getting one generic answer. You're getting a group of responses from mini AI agents asking about everything in the room. Google isn't unique in this pursuit. ChatGPT includes image recognition, for instance. However, Google's advantage is decades of search data, visual indexing, and other data storage and organization. If you're a Google One AI Premium subscriber or are approved to test it through Search Labs, you can test out the feature on the Google mobile app.
[16]
Google's New AI Mode Will Answer 'Nuanced' Questions About Your Photos
Google's new AI Mode can now answer complex questions about a user's photos. The new image-analyzing functionality is powered by Google Lens' multimodal capabilities, the company says. This new multimodal understanding in AI Mode allows users to take a photo or upload one, and ask questions about it to receive a "rich, comprehensive response with links to dive deeper." The function is powered by Google Lens and a custom version of Google's Gemini AI model. Together, Google says, this new AI Mode can "understand the entire scene in an image, including the context of how objects relate to one another and their unique materials, colors, shapes, and arrangements." The system uses a "fan-out technique" that issues queries about the objects it sees in the scene so it can provide a response "that's incredibly nuanced and contextually relevant." "AI Mode builds on our years of work on visual search and takes it a step further," Robby Stein, VP of product for Google Search, tells The Verge. "With Gemini's multimodal capabilities, AI Mode can understand the entire scene in an image, including the context of how objects relate to one another and their unique materials, colors, shapes, and arrangements." In the video above, Google gives the example of books on a shelf, where the user wants to find similar recommendations that are highly rated. AI Mode provides that list as well as links where users can learn more and make purchases. The user can also ask it follow-up questions. AI Mode was previously exclusive to Google One AI Premium subscribers, but after the image-analyzing functionality started rolling out yesterday (Monday), millions more can now access it via the Google app on iOS and Android. However, you must be enrolled in Labs, Google's experimental platform. Google has been adding AI features to its search engine since 2024, but AI Mode goes a step further.
The search giant believes that AI Mode is key to maintaining its monopoly as the first port of call for people who want to search the internet.
[17]
Bringing multimodal search to AI Mode
Since launching AI Mode to Google One AI Premium subscribers, we've heard incredibly positive feedback from early users about its clean design, fast response time and ability to understand complex and nuanced questions. On average, AI Mode queries are twice as long as traditional Search queries on Google. People are using AI Mode for help with exploratory and open-ended questions, along with more complicated tasks -- like comparing two products, exploring how-tos and planning a trip. With this positive feedback, we've now started to make AI Mode available to millions more Labs users in the U.S. We're also continuing to improve the experience and today we're bringing the powerful multimodal capabilities in Lens to AI Mode. With AI Mode's new multimodal understanding, you can snap a photo or upload an image, ask a question about it and get a rich, comprehensive response with links to dive deeper. This experience brings together powerful visual search capabilities in Lens with a custom version of Gemini, so you can easily ask complex questions about what you see. AI Mode builds on our years of work on visual search and takes it a step further. With Gemini's multimodal capabilities, AI Mode can understand the entire scene in an image, including the context of how objects relate to one another and their unique materials, colors, shapes and arrangements. Drawing on our deep visual search expertise, Lens precisely identifies each object in the image. Using our query fan-out technique, AI Mode then issues multiple queries about the image as a whole and the objects within the image, accessing more breadth and depth of information than a traditional search on Google. The result is a response that's incredibly nuanced and contextually relevant, so you can take the next step.
[18]
Google's AI Mode Just Got a Major Upgrade
We may earn a commission when you click links to retailers and purchase goods. More info. Google first introduced us to AI Mode in Google Search back in March as a way for heavy Search users to ask the "toughest" questions, continue with follow-ups, and see helpful web links in return. It was launched initially as a Labs feature for those who subscribed to Google One AI Premium. Today, more users will gain access, plus Google is adding Google Lens support to bring visual searches into the equation. On the AI Mode front, Google says today that it "heard incredibly positive feedback" from early users and will expand to millions more Labs users in the US. To see if you have access, you'll hit this link to see all of the Search Labs available to your account. Now, for the Google Lens piece of this, Google is adding a Lens option to AI Mode in the Google App on Android and iOS. Once you have access, you'll be able to snap a photo or upload an image, ask a question about it, and then see a "rich, comprehensive response" with some links to help you find out even more info. Google says that Lens in this situation can understand an entire scene, "including the context of how objects relate to one another and their unique materials, colors, shapes and arrangements." What does that really mean for you? Well, Google offers an example of you pointing Google Lens at a bookshelf with a bunch of books you've read and then asking it to find similar books that are also highly rated. Assuming it works, you would see search results with recommended books (or lists of them), descriptions or summaries of book options, links to websites that offer their own recommendations based on your books, and an option to follow up or be more specific from those results. We'll have to give it a try once we have access. To get access, again, you'll have to have AI Mode turned on in Labs here.
[19]
AI Mode in Google Search gets new visual search capabilities - SiliconANGLE
AI Mode in Google Search gets new visual search capabilities Google LLC is updating the new "AI Mode" feature in Google Search, introducing multimodal capabilities that allow it to "see" images uploaded by users, so it can better answer their questions. AI Mode was introduced in a limited preview last month for Google One AI Premium subscribers. It's an experimental feature in Google Search that uses generative artificial intelligence and allows users to ask complex, multi-part questions and follow-up queries to dig deeper into a specific topic. With today's update, those who have access to AI Mode can now upload images and ask questions about what it sees, Google revealed in a blog post today. In addition, the AI Mode feature is being rolled out to millions of new users who have enrolled in its Labs program to get early access to new applications. Google said AI Mode's image-analyzing functionality is powered by the multimodal search capabilities in Google Lens, which is a smartphone app that allows users to take photos with their camera and search them within Google Search. According to Google, AI Mode will be able to understand the entire scene in any uploaded image, including how different objects within it relate to each other. It will also be able to ascertain the materials those objects are made out of, their shapes and colors, and their arrangement, Google said. It will ask multiple questions about the image and the objects within it, allowing it to provide a more detailed response than traditional Google Search does. As an example, Google said someone could snap a photo of their bookshelf and enter the query, "If I enjoyed these, what are some similar books I might like?" AI Mode will scan the image to identify each book, and then it will recommend a bunch of other books after doing some research on those titles. In addition, users can ask follow-up questions.
So the user might follow up by noting they are "looking for a quick read, which one of these recommendations is the shortest?" Google Search Vice President of Product Robby Stein said: "AI Mode builds on years of work in the area of visual search and takes it a step further." Given that AI Mode is still an experimental feature, it's not clear how popular the service has become. Google launched it last month as a response to popular generative AI search applications such as Perplexity and OpenAI's ChatGPT Search, which provide similar capabilities. Google has said it will continue to refine the user experience and expand the functionality of AI Mode prior to a more general release.
[20]
Google AI Mode update brings powerful multimodal search - Phandroid
Google AI Mode update is getting serious. Early users already loved its clean look and fast responses, but now Google's pushing it even further. Starting today, millions more Labs users in the US are getting access. And with it, something big: full-blown multimodal search powered by Lens. With this Google AI Mode update, you can snap a photo, ask a question, and get detailed answers instantly. Not basic stuff, either. We're talking about context-rich responses, product comparisons, and deep dive links. Basically, it feels like Google combined Lens with Gemini's brainpower, and it works. Google's calling it "query fan-out," which means AI Mode doesn't just look at your photo -- it picks it apart. Objects, shapes, colors, and even the way they're arranged. Then, it fires off a bunch of smart queries to give you way more than a simple result. Want to ID books on a shelf and get recommendations? Done. Wondering what plant you just photographed? You'll get answers and suggestions, plus places to buy it. The update feels like a natural step after years of building visual search. But here's the best part: AI Mode doesn't just understand your image; it understands your intent. Google claims this makes answers more nuanced and useful than regular search.
[21]
Google Lens Will Now Show Multimodal AI Mode Search Results
Once activated, Google Lens searches will show AI Mode by default Google announced that it is expanding its recently released AI Mode feature to more apps and users on Monday. The feature is now available in Google Lens and comes with multimodal capability. The company also said that free Google Labs users will get access to it. The Mountain View-based tech giant introduced the artificial intelligence (AI) tool in early March as an experiment. It is an expansion of the existing AI Overviews, and offers responses to complex topics and multi-faceted search queries that would normally take a user multiple searches to find the information. In a blog post, Google announced the expansion of AI Mode. The tech giant said that it has received positive feedback about the feature from users, especially regarding its response time and the detailed nature of its responses. Notably, so far, the feature was only available to those US-based Google Labs users who had an active Google One AI Premium subscription. Now, the company has said that AI Mode will be made "available to millions more Labs users in the US," as well as those who sign up now in Labs. Both Android and iOS users of the Google app can access the feature. While still not available outside of the US, it is expected that Google will expand the feature to more regions in the future. AI Mode is also being added to Google Lens. The feature will support multimodal input, and when a user captures an image or uploads one from their camera roll along with a query, the AI feature will share a comprehensive response along with related links to learn more about the topic. The company said the AI Mode in Lens is powered by a custom version of the Gemini model. The post also shared an example of the feature where a user snaps an image of books on a shelf and asks for recommendations.
The Gemini-powered AI Mode was able to identify each book in the image and ran separate searches to learn about them and about books similar to them. After running these queries in the background, it came up with book recommendations, along with links that the user can click to learn more about the individual books. Users can also ask follow-up queries in the same interface to narrow down their choices.
[22]
AI Mode in Google Search is Rolling Out to Free Users
In addition, Google has added visual search to AI Mode via Lens, which leverages Gemini 2.0's multimodal capability. Google launched a dedicated 'AI Mode' in Google Search last month, but it was only available to Google One AI Premium subscribers. Now, the search giant is expanding AI Mode in Google Search to free users in the US. To access the feature, head to Google Labs via labs.google.com/search/aimode and turn on AI Mode. In case you are unaware, AI Mode in Google Search is an entirely new tab where you can ask open-ended questions and get detailed answers from a custom version of the Gemini 2.0 model using web search. AI Mode is suitable for queries where users seek explanations and want to ask follow-up questions. Referenced links and citations are also displayed along with the AI-generated answer. Basically, with AI Mode, Google is integrating a Perplexity-like tool into its Search product. Apart from that, Google has added visual search capabilities to AI Mode. It's powered by Google Lens using a custom version of Gemini. You can upload an image or capture a video and ask your question through voice. Google says that AI Mode leverages the multimodal capability of Gemini 2.0 to precisely identify the context and answer visual queries. Note that AI Mode in Google Search is different from AI Overview, which is displayed on top of web links in the "All" tab. AI Overview now uses the Gemini 2.0 model in the US, but it doesn't allow follow-up questions. Initially, AI Overview was under fire for generating false information, including potentially dangerous information like adding glue to pizza and drinking urine to pass kidney stones. Now, with the improved Gemini 2.0 model under the hood, Google is hoping to enhance AI Mode and AI Overview for more users.
[23]
Google rolls out AI Mode to more users with multimodal search
Google on Monday announced a broader rollout of its AI Mode to millions more Labs users in the U.S., following strong early feedback. The company also introduced multimodal search within AI Mode, expanding on the feature first launched last month. Robby Stein, VP of Product at Google Search, shared that early testers gave "incredibly positive feedback" on AI Mode. Users praised its "clean design," quick performance, and how well it can "understand complex and nuanced questions." Stein noted that AI Mode queries are, on average, twice as long as traditional Google Search queries. People are using it for exploratory and open-ended questions, as well as complex tasks like comparing products, learning how-tos, and planning trips. With its new multimodal understanding, users can now snap a photo or upload an image, ask a question about it, and receive detailed responses -- including links to explore further. This feature merges Google's visual search technology in Lens with a custom version of Gemini, allowing users to ask advanced questions based on what they see. Stein explained that AI Mode builds on years of visual search development, taking it further with Gemini's multimodal technology, enabling it to understand the entire scene in an image -- including object relationships, materials, colors, shapes, and arrangements. AI Mode uses a method called query fan-out, which sends out multiple search queries -- not only about the whole image but also the individual elements within it. This approach helps deliver context-rich and deeper insights than a standard search. Stein illustrated with an example: AI Mode can identify each book on a shelf, generate search queries, and return highly rated recommendations. It then shares links to learn more or buy, with options to ask follow-up questions and refine the results. Stein confirmed that AI Mode is still being tested and improved with input from Labs users. 
He added: "We're continuing to improve the experience, and today we're bringing the powerful multimodal capabilities in Lens to AI Mode." Anyone in the U.S. who is interested can sign up in Labs to try the updated AI Mode in the Google app on Android and iOS.
Google has upgraded its AI Mode search with multimodal capabilities, allowing users to ask complex questions about images. This feature combines Google Lens with a custom version of the Gemini AI model.
Google has taken a significant leap forward in its AI-powered search capabilities by introducing multimodal functionality to its AI Mode search feature. This update allows users to ask complex questions about images, combining the power of Google Lens with a custom version of the Gemini large language model (LLM) [1][2].
The new multimodal AI Mode enables users to upload images or take photos directly through the search interface. Google Lens identifies specific objects within the images, while the Gemini model interprets the overall content and context. This combination allows AI Mode to understand the entire scene, including how objects relate to each other, their materials, colors, shapes, and arrangements [3].
Google employs a "query fan-out" technique, where the system asks multiple sub-queries about both the image and the objects within it. This approach provides more detailed and contextually relevant information than traditional search methods [2][4].
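To make the "query fan-out" idea concrete, here is a minimal, hypothetical sketch. Google has not published its implementation; the object list and the `search` callback below are illustrative stand-ins for Lens's object detection and Google's search backend, not real APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_queries(question, objects):
    """Build one scene-level query plus one focused sub-query per detected object."""
    return [question] + [f"{question} (focus: {obj})" for obj in objects]

def answer(question, objects, search):
    """Issue all fanned-out queries concurrently and merge the results.

    `search` is a caller-supplied stand-in for a real search backend.
    """
    queries = fan_out_queries(question, objects)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search, queries))
    return dict(zip(queries, results))

if __name__ == "__main__":
    # Toy backend that just echoes the query it received.
    fake_search = lambda q: f"results for: {q}"
    # Pretend Lens detected two book titles in a bookshelf photo.
    merged = answer("What are some similar highly rated books?",
                    ["Dune", "Hyperion"], fake_search)
    for query in merged:
        print(query)
```

The key point the sketch captures is that the scene-level question and the per-object sub-queries run as independent searches whose results are then combined, which is why the responses can be more detailed than a single traditional query.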
Users can now engage in more complex interactions with AI Mode. For example, a user could take a photo of their bookshelf and ask, "If I enjoyed these, what are some similar books that are highly rated?" The system would then identify each book, provide recommendations, and even allow for follow-up questions to refine the search results [2][5].
Initially launched exclusively for Google One AI Premium subscribers, Google is now expanding access to AI Mode to "millions more Labs users in the US" who aren't paying for AI features [1][3]. This move suggests that Google is confident in the feature's performance and is preparing for a broader rollout.
Google's AI Mode represents a significant shift in how users interact with search engines. The company reports that early telemetry shows users inputting about twice as much text in their searches compared to traditional web search, indicating a more detailed and conversational approach [1].
This development positions Google to compete more effectively with emerging AI-powered search alternatives like Perplexity and OpenAI's ChatGPT Search [3][4]. By combining real-time web-based answers with a conversational AI interface, Google aims to offer both the accuracy of a search engine and the interactivity of a chatbot [5].
As Google continues to refine and expand AI Mode, it's clear that the company sees this as a key strategy to maintain its dominant position in the search market. The integration of multimodal capabilities and the expansion of access suggest that AI-powered, conversational search experiences may become the new norm for internet users in the near future [1][3].
© 2025 TheOutpost.AI All rights reserved