Curated by THEOUTPOST
On Thu, 13 Mar, 4:02 PM UTC
54 Sources
[1]
Google's Gemini AI can now see your search history
Google is continuing its quest to get more people to use Gemini, and it's doing that by giving away even more AI computing. Today, Google is releasing a raft of improvements for the Gemini 2.0 models, and as part of that upgrade, some of the AI's most advanced features are now available to free users. You'll be able to use the improved Deep Research to get in-depth information on a topic, and Google's newest reasoning model can peruse your search history to improve its understanding of you as a person. What could go wrong? Like most big AI players, Google has a number of different models available. Gemini 2.0 Flash Thinking Experimental is the company's most capable multistep reasoning model, which can consider complex topics and gives you a window into its "thought" process. Google is adding a lot to this model in its latest round of updates, enabling a much larger 1-million-token context window, file uploads, and faster output. It also supports more Google apps with connections to Calendar, Notes, Tasks, and Photos. With the aim of making Gemini more personal to you, Google is also plugging Flash Thinking Experimental into a new source of data: your search history. Google stresses that you have to opt in to this feature, and it can be disabled at any time. Gemini will even display a banner to remind you it's connected to your search history so you don't forget. If you grant access, the AI can allegedly understand you better and offer more relevant recommendations. It feels a bit strange to turn Gemini loose on such personal data, but Google already knows what you look up on the Internet. You're not giving up much more if you let the robot have a peek. This is apparently just the start of Google's efforts to personalize the AI.
[2]
Gemini gets new coding and writing tools, plus AI-generated "podcasts"
On the heels of its release of new Gemini models last week, Google has announced a pair of new features for its flagship AI product. Starting today, Gemini has a new Canvas feature that lets you draft, edit, and refine documents or code. Gemini is also getting Audio Overviews, a neat capability that first appeared in the company's NotebookLM product, but it's getting even more useful as part of Gemini. Canvas is similar (confusingly) to the OpenAI product of the same name. Canvas is available in the Gemini prompt bar on the web and mobile app. Simply upload a document and tell Gemini what you need to do with it. In Google's example, the user asks for a speech based on a PDF containing class notes. And just like that, Gemini spits out a document. Canvas lets you refine the AI-generated documents right inside Gemini. The writing tools available across the Google ecosystem, with options like suggested edits and different tones, are available inside the Gemini-based editor. If you want to do more edits or collaborate with others, you can export the document to Google Docs with a single click. Canvas is also adept at coding. Just ask, and Canvas can generate prototype web apps, Python scripts, HTML, and more. You can ask Gemini about the code, make alterations, and even preview your results in real time inside Gemini as you (or the AI) make changes. Audio Overviews is not actually a new feature -- it debuted last year as part of a Google product called NotebookLM. The gist is, you upload some documents, and the AI assimilates the data to generate a conversation between two people who don't exist. Google likens this to a podcast-style discussion, and that's a fair description of what you get. Sometimes, the fake hosts even give the fake podcast a name. To use Gemini's Audio Overviews, just upload a document and look for the "Generate Audio Overview" button above the prompt bar. Be warned, creating the audio takes several minutes even for a relatively small amount of text. This is similar to Audio Overviews in NotebookLM, but the feature has a little more to offer as part of Gemini. Audio Overviews is also integrated with Deep Research, the AI-powered agent that can peruse the Internet on your behalf. Google recently made Deep Research free for limited use, and now you can do more with those reports. When viewing the results of Deep Research (which also take several minutes to create), you'll now be able to generate an Audio Overview from the report. Google says both Canvas and Audio Overviews are available for all users globally -- yes, even the free version of Google's AI. However, Audio Overviews only works in English for now. The company promises more languages later.
[3]
Google wants Gemini to get to know you better | TechCrunch
In the AI chatbot wars, Google thinks the key to retaining users is serving up content they can't get elsewhere, like answers shaped by their internet habits. On Thursday, the company announced Gemini with personalization, a new "experimental capability" for its Gemini chatbot apps that lets Gemini draw on other Google apps and services to deliver customized responses. Gemini with personalization can tap a user's activities and preferences across Google's product ecosystem to deliver tailored answers to queries, according to Gemini product director Dave Citron. "These updates are all designed to make Gemini feel less like a tool and more like a natural extension of you, anticipating your needs with truly personalized assistance," Citron wrote in a blog post provided to TechCrunch. "Early testers have found Gemini with personalization helpful for brainstorming and getting personalized recommendations." Gemini with personalization, which will integrate with Google Search before expanding to additional Google services like Google Photos and YouTube in the months to come, arrives as chatbot makers including OpenAI attempt to differentiate their virtual assistants with unique and compelling functionality. OpenAI recently rolled out the ability for ChatGPT on macOS to directly edit code in supported apps, while Amazon is preparing to launch an "agentic" reimagining of Alexa. Citron said Gemini with personalization is powered by Google's Gemini 2.0 Flash Thinking Experimental AI model, a so-called "reasoning" model that can determine whether personal data from a Google service, like a user's Search history, is likely to "enhance" an answer. Narrow questions informed by likes and dislikes, like "Where should I go on vacation this summer?" and "What would you suggest I learn as a new hobby?", will benefit the most, Citron continued. "For example, you can ask Gemini for restaurant recommendations and it will reference your recent food-related searches," he said, "or ask for travel advice and Gemini will respond based on destinations you've previously searched." If this all sounds like a privacy nightmare, well, it could be. It's not tough to imagine a scenario in which Gemini inadvertently airs someone's sensitive info. That's probably why Google is making Gemini with personalization opt-in -- and excluding users under the age of 18. Gemini will ask for permission before connecting to Google Search history and other apps, Citron said, and show which data sources were used to customize the bot's responses. "When you're using the personalization experiment, Gemini displays a clear banner with a link to easily disconnect your Search history," Citron said. "Gemini will only access your Search history when you've selected Gemini with personalization, when you've given Gemini permission to connect to your Search history, and when you have Web & App Activity on." Gemini with personalization will roll out to Gemini users on the web (except for Google Workspace and Google for Education customers) starting Thursday in the app's model drop-down menu and "gradually" come to mobile after that. It'll be available in over 40 languages in "the majority" of countries, Citron said, excluding the European Economic Area, Switzerland, and the U.K. Citron indicated that the feature may not be free forever. "Future usage limits may apply," he wrote in the blog post. "We'll continue to gather user feedback on the most useful applications of this capability."
As added incentives to stick with Gemini, Google announced updated models, research capabilities, and app connectors for the platform. Subscribers to Gemini Advanced, Google's $20-per-month premium subscription, can now use a standalone version of 2.0 Flash Thinking Experimental that supports file attachments; integrations with apps like Google Calendar, Notes, and Tasks; and a 1-million-token context window. "Context window" refers to text that the model can consider at any given time -- 1 million tokens is equivalent to around 750,000 words. Google said that this latest version of 2.0 Flash Thinking Experimental is faster and more efficient than the model it is replacing, and can better handle prompts that involve multiple apps, like "Look up an easy cookie recipe on YouTube, add the ingredients to my shopping list, and find me grocery stores that are still open nearby." Perhaps in response to pressure from OpenAI and its newly launched tools for in-depth research, Google is also enhancing Deep Research, its Gemini feature that searches across the web to compile reports on a subject. Deep Research now exposes its "thinking" steps and uses 2.0 Flash Thinking Experimental as the default model, which should result in "higher-quality" reports that are more "detailed" and "insightful," Google said. Deep Research is now free to try for all Gemini users, and Google has increased usage limits for Gemini Advanced customers. Free Gemini users are also getting Gems, Google's topic-focused customizable chatbots within Gemini, which previously required a Gemini Advanced subscription. And in the coming weeks, all Gemini users will be able to interact with Google Photos to, for example, look up photos from a recent trip, Google said.
[4]
Google brings a 'canvas' feature to Gemini, plus Audio Overview
They say imitation is the sincerest form of flattery, and Google seems to agree. On Tuesday, the company added a feature to its AI-powered Gemini chatbot that the company is calling Canvas. Similar in concept to OpenAI's identically-named Canvas tool for ChatGPT and Anthropic's Artifacts, Canvas provides Gemini users with an interactive space where they can create, refine, and share writing and coding projects. "Canvas is designed for seamless collaboration with Gemini," Gemini product director Dave Citron wrote in a blog post shared with TechCrunch. "With these new features, Gemini is becoming an even more effective collaborator, helping you bring your ideas to life." Workspaces such as Gemini Canvas, ChatGPT Canvas, and Artifacts are the AI companies' latest attempt to transform their chatbot platforms into full-blown productivity suites. Dedicated workspaces can offer more precision than text-based interfaces alone, as well as provide a way to preview code in real-time. Gemini Canvas, which can be launched via the prompt bar from the Gemini app on the web and mobile, lets users draft lengthy messages with Gemini that they can then edit and fine-tune. Using Canvas, users can update specific sections of a draft and adjust the tone, length, and formatting via dedicated tools. "For example, highlight a paragraph and ask Gemini to make it more concise, professional, or informal," Citron explained in the blog post. "If you want to collaborate with others on the content you just made, you can export it to Google Docs with a click." As alluded to earlier, Canvas also packs programming-focused capabilities, including a feature that lets users generate and preview HTML, React code, and other web app prototypes. Users can ask Gemini to make changes to a preview, and Canvas will iteratively refresh it. "For example, say you want to create an email subscription form for your website," Citron wrote. "You can ask Gemini to generate the HTML for the form and then preview how it will appear and function within your web app." Along with Canvas, Google is bringing the Audio Overview feature of NotebookLM to Gemini, the company announced Tuesday. Google's NotebookLM went viral last year for Audio Overview, which creates realistic-sounding podcast-style audio summaries of documents, webpages, and other sources. As with Audio Overview in NotebookLM, Audio Overview in Gemini accepts files and content in a range of formats. Uploading a document via the prompt bar will trigger the Audio Overview shortcut, and once a summary is generated, it can be downloaded or shared via the Gemini app on the web or mobile. Both Canvas and Audio Overview are available for free to Gemini users worldwide as of Tuesday. Canvas' code preview feature is only on the web for now, however, and Audio Overview summaries are limited to English.
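To make Citron's email-form example concrete, here is a minimal sketch of the kind of HTML such a prompt might produce in Canvas. The /subscribe endpoint, field names, and copy are invented for illustration; this is not Gemini's actual output.

```html
<!DOCTYPE html>
<html lang="en">
<body>
  <!-- Hypothetical email subscription form of the sort Canvas might
       generate and preview; the /subscribe endpoint and field names
       are placeholders, not real API surface. -->
  <form action="/subscribe" method="post">
    <label for="email">Subscribe to our newsletter</label>
    <input type="email" id="email" name="email"
           placeholder="you@example.com" required>
    <button type="submit">Sign up</button>
  </form>
</body>
</html>
```

Something like this rendered in Canvas's preview pane is the loop the article describes: ask for a tweak (a name field, a call-to-action button) and watch the form update.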
[5]
Google's Gemini AI Can Personalize Results Based on Your Search Queries
Google's AI model Gemini is getting even more personal. The company announced this week that enabling a new personalization tool will allow the AI chatbot to reference your search history to enhance and deepen its responses. In a blog post, the company said it will analyze queries and cross-reference your searches to provide "contextually relevant responses that are adapted to your individual interests." Google said it's part of a broader strategy for Gemini to not only answer general questions but to better understand you. It added that it will only reference your search results when it determines they can meaningfully improve the output. "We'll only use your Search history when our advanced reasoning models determine that it's actually helpful," the company said in the blog post. "Early testers have found Gemini with personalization helpful for brainstorming and getting personalized recommendations. We'll continue to gather user feedback on the most useful applications of this capability." Some examples Google provided include: "Where should I go on vacation this summer?" or, "I want to start a YouTube channel but need content ideas," potentially tying the response to what it already knows about your interests. The effort aligns with a growing trend among tech companies, in particular ChatGPT-maker OpenAI, aiming to make AI more intuitive and context-aware, with a deeper understanding of users' needs and preferences. The tool - powered by its experimental Gemini 2.0 Flash Thinking model - will provide an outline of its reasoning and display which data it pulled from previous searches. Google also said it will explicitly ask for permission before connecting to your Search history or any other apps. It will initially launch as an experimental feature for Gemini and Gemini Advanced subscribers (web only for now) in over 45 languages and will expand to mobile over time. The company also announced it is rolling out its Deep Research tool to all Gemini users for free. The tool, announced in December, is similar to OpenAI's feature of the same name. It aims to save you hours of time by acting as a personal AI research assistant, searching and synthesizing information from across the web in minutes. Google said it is pairing its newly upgraded Deep Research with the Gemini 2.0 Flash Thinking model, which provides real-time reasoning while browsing the web, as part of an effort to enhance the quality of its reports.
[6]
Google Gemini Turns Your Docs Into Podcasts With AI Hosts
Google is rolling out new tools for its AI model Gemini that turn class and meeting notes, documents, slides, email threads and reports into podcast-style conversations. The company said it is bringing its Audio Overviews feature in NotebookLM -- its generative AI note-taking tool -- to Gemini to help users better digest and make sense of complicated information. Two AI hosts are available to help make the conversation flow based on uploaded files, according to Google. They summarize the material, draw connections between topics, engage in back-and-forth discussions, and provide "unique perspectives." The new feature is rolling out to Gemini and Gemini Advanced subscribers worldwide in English, with more languages coming soon. The move comes as companies like Google and OpenAI continue to expand their suites of tools to develop AI that functions more like a personal assistant, helping users with tasks and decision-making. Google is also introducing an interactive space known as Canvas for Gemini users to create, draft or refine documents and code. Canvas also provides users with feedback and previews in real time. For documents, users can adjust tone, length or formatting of the entire piece or specific parts, such as a paragraph of an essay or speech.
[7]
Gemini might soon have access to your Google Search history - if you let it
Would you be willing to give Gemini a peek at your search history? Recent reports indicate that Google is working on Gemini Personalization -- an AI model that would let you share your search history with Gemini to get more personalized results. You'd be giving up a fair amount of privacy to use this model, but at least from early indications, it would make Gemini the most personal AI chatbot yet. Gemini would know quite a bit about you that you never had to tell it. Google hasn't officially acknowledged this feature, but Android Authority accessed a version deep in the code of the latest Google app (version 16.8.31), and its article showed several screenshots of the feature in action. To start, Gemini says that only the model Gemini Personalization will connect to your search history and that chats in this model aren't used to improve Gemini, aren't stored outside your chat history, and are deleted from your activity within 60 days. You'll need to give Gemini permission to see your search history and turn on Web & App Activity in your Google settings. Once you've done that, you can ask Gemini about your past searches or ask a question that Gemini can answer with what it knows about you from those searches. You can potentially ask Gemini base-level questions like "What was that restaurant I was searching for last week?" or "Show me the tourist sites I Googled in NYC," but you could also ask questions like, "I'm going to NYC. Can you give me restaurant recommendations using my search history?" and get a reply like, "Your search history tells me you like finding locally owned Italian restaurants. There's a highly rated one..." A potential question could also go something like, "Give me some recommendations for shoes based on all the ones I've searched for the past two months." It's not clear how much of your search history Gemini will be able to access. In Android Authority's APK teardown, Gemini was able to reference the past several months of history, going back as far as January. The several-month limit may be the final form of the feature, or it may just be the limit for the testing version. The fact that Gemini Personalization can access history earlier than the point you turned it on is big, but if it can ultimately access more history, this becomes even more of a game changer. I was able to easily find more than 12 years of my Google Search history on my Google Activity page, so there's no reason Gemini can't too.
[8]
Gemini can now personalize its answers based on your search history
With so many AI companies launching chatbots, Google is leveraging its biggest competitive advantage to make Gemini stand out: Search. With personalization enabled, Gemini can now automatically analyze your query to see if referring to your Search history can "enhance" its response. The feature is powered by the Gemini 2.0 Flash Thinking Experimental model, and it will only reference your search results if its AI model finds it "helpful." For instance, if you ask Gemini about restaurant or travel recommendations, the chatbot will refer to your recent food-related searches to provide a suggestion. This is part of the broader personalization feature Google is rolling out, which will eventually connect Gemini to other apps like YouTube and Google Photos, allowing the chatbot to "provide more personalized insights, drawing from a broader understanding of your activities and preferences." Google notes that you can disconnect your search history from Gemini at any time. When you receive a response, you'll see an outline of how Gemini got its answers, as well as whether it referenced your saved information, past conversations, or Search history. It will also display a "clear banner" with a link to disconnect your Search history. Gemini and Gemini Advanced subscribers on the web can enable the feature by selecting "Personalization (experimental)" from the model drop-down menu. It's gradually rolling out on mobile, and is available in more than 40 languages in a "majority" of countries. Google is releasing some other updates as well, including a way for all Gemini users to create their own personal AI assistants -- called Gems -- for free. The company also announced that it's bringing its Gemini 2.0 Flash Thinking Experimental model to its Deep Research feature, which the company says improves the chatbot's capabilities "across all research stages." Gemini's integrations with Calendar, Notes, Tasks, and Photos are also getting an upgrade to the Gemini 2.0 Flash Thinking Experimental model, joining YouTube, Search, and Google Maps.
[9]
Google launches Gemini with Personalization, beating Apple to personal AI
Google's Gemini is a conversational AI assistant built to understand users' needs from what they say. But what if, in addition to understanding what you say, it could also understand you from your everyday interactions with your device? Now it can. On Thursday, Google unveiled Gemini with Personalization, an experimental, opt-in experience that allows users to connect Gemini to their Search history to better inform the assistant's responses. In the coming months, users will also be able to connect to Google Photos and YouTube. Gemini with Personalization, powered by an experimental Gemini 2.0 Flash Thinking model, needs to be manually turned on by interested users to connect to their Search history. Specifically, users must go to Gemini Apps and select the "Personalization (experimental)" model from the drop-down menu. Once activated, Gemini will analyze Search history before generating a response to see if it can add context to the answer. The Search history will only be used if Google's advanced models determine that it can enhance the response. Since this is experimental, Google says it will continue to gather user feedback to make it as useful as possible. Clearly, having an AI model examine your Search history will raise some privacy concerns. To address these apprehensions, Google reassures users that they are in control and can easily disconnect Gemini from their Search history at any time. Moreover, when using the experiment, Gemini will display a clear banner with a link that lets you easily unlink your Search history. Because it is accessing your Search history, you can also edit your history as you regularly would, adding another layer of control. The company also says there will be clear notice before linking Gemini to your Search history, asking for permission before connecting to apps or your Search history. Gemini will also show a full outline of the data sources it used in its response, including whether it informed its answer from your past chats or your Search history. Gemini with Personalization is available as an experimental feature to Gemini and Gemini Advanced subscribers on the web starting today, with a gradual rollout planned for mobile. The feature is unavailable to users under 18 and to Google Workspace and Education users. This update is significant because, when it has access to apps, Gemini can act as more of a personal assistant that is seamlessly connected to your everyday workflow. Apple wanted to achieve something similar with Apple Intelligence, which, in its full form, is meant to be a "personal intelligence" system grounded in your personal information and context from apps and your screen. However, Apple Intelligence is nowhere near a personal assistant. Its features are limited to Genmoji, Image Playground, notification summaries, writing tools, voicemail transcriptions, Visual Intelligence, and ChatGPT integration. The company also recently confirmed that the highly anticipated Siri upgrades will take longer than expected to be delivered to the public. Along with the launch of the Personalization feature, Google also announced a slew of other features for Gemini, including expanded access to Deep Research and Gems.
[10]
Google Gemini just made two of its best features available for free
Google went from being a dark horse in the AI space to a frontrunner with its Gemini offerings. To keep up that momentum, Google is announcing a slew of Gemini updates that will enhance the AI assistant's overall experience -- including for free users. On Thursday, Google announced that it was making two of its most popular features available to all users: Deep Research and Gems. Both of these features will allow users to unlock another level of Gemini assistance, even adding agentic capabilities. Deep Research is an agentic feature that can conduct thorough research on your behalf by creating a multi-step research plan, browsing the web extensively for a couple of minutes, and then creating a comprehensive report. It is a powerful tool for research because it can complete in-depth research in minutes that could otherwise take users hours. To build on the already powerful experience even further, Google upgraded Deep Research with Gemini 2.0 Flash Thinking Experimental, which should result in even higher-quality, multi-page reports. With the update, Gemini also shows its thoughts while browsing the web. Everyone will now be able to try Deep Research at no cost simply by selecting it in the new prompt bar or model dropdown, according to the blog post. When launched in January, the feature was only available to Gemini Advanced users, part of the Google One AI Premium plan, which costs $20 per month. Gems in Gemini allows users to create their own personalized AI experts on any topic by customizing AI chatbots for their specific needs without extensive coding or machine learning training skills. The feature was only available to Google One AI Premium Plan subscribers when launched. Now, Gems is beginning to roll out for free to everyone in the Gemini app. To set up a Gem, a user simply has to give it an instruction, name it, and use it when needed to perform a specific function. Some use cases for Gems include personalizing them to tackle repetitive tasks, such as being a coding partner, writing editor, or learning coach. The feature is nearly identical to ChatGPT's custom GPTs, which can also be instructed to perform a function, be named, and shared with others. At the moment, only subscribers to OpenAI's paid plans, including the $20 per month ChatGPT Plus subscription, can create their own GPTs. In addition to these upgrades, Google also launched Gemini with Personalization, an experimental, opt-in experience that allows users to connect Gemini to their Search history to better inform the assistant's responses.
[11]
Gemini just got ChatGPT's best productivity feature, plus Audio Overviews
You can now edit documents in real time and generate Audio Overviews, all from the Gemini interface. One of the best uses for AI chatbots, such as Gemini or ChatGPT, is editing existing content, such as text or code. This ensures the essence of your work is preserved while still getting the polishing it needs. However, the standard chatbot interface makes it difficult to keep track of those changes, with previous versions getting lost in the chat. For that reason, OpenAI last October launched Canvas for ChatGPT, which gives users access to a new interface that makes co-editing easier and more efficient. Now the feature is finally coming to Gemini.
Canvas
To activate Canvas in Gemini, users simply select Canvas in the prompt bar. Once Canvas is selected, users can watch changes happen in real time while their original content remains visible. Although it can be difficult to visualize without trying it for yourself, the major difference is that the changes happen to the existing content instead of the chatbot appending a new version right below it. This feature is helpful because you can see exactly what was changed and how it fits into the rest of the existing content. Gemini will also give users feedback and edit suggestions they can revise. The feature also gives users more control over making additional changes, as you can highlight the content you want to change and use quick editing tools to adjust the tone, length, or formatting, according to the release. Beyond text, the feature is also useful for coding tasks, as users can collaborate with Gemini to edit and receive feedback on their code. An added Canvas functionality for coding tasks is the ability to view previews of HTML/React code and other web app prototypes. This allows users to see a visual of their design, making it easier to edit and tweak. Canvas is rolling out globally to Gemini and Gemini Advanced subscribers starting today.
Audio Overviews
One of Google's most popular AI features has been Audio Overviews, which allows users to transform their content into podcasts between two AI hosts with the touch of a button. However, that feature has lived in NotebookLM, another experimental Google AI offering, instead of Gemini -- until now. Starting today, Audio Overview is rolling out to Gemini and Gemini Advanced subscribers globally in English, with more languages coming soon. To access the feature, users can upload documents or slides and click the suggestion chip that pops up above the prompt bar on both the mobile app and the web. Some helpful use cases for the feature include uploading dense or difficult-to-understand materials, as Audio Overview can synthesize them for you engagingly, with a dynamic conversation that deep-dives into the topic.
[12]
Gemini adds new coding feature and AI audio summaries
Google just released two new features for its Gemini AI assistant: Canvas and Audio Overviews. Canvas introduces a dedicated workspace within Gemini where users can create and refine both documents and code in real time. Users can spin up initial drafts and then work with Gemini to edit specific sections, adjust tone, or reformat content as needed. For coding projects, Canvas includes a live preview alongside the code so users can iteratively edit while watching it evolve as they make changes. The second feature, Audio Overview, converts written materials (like documents or slides) into a "podcast-style discussion between two AI hosts." This functionality was previously available in Google's NotebookLM. (The team responsible for creating NotebookLM left in December to launch their own startup.) Both features are rolling out globally today for Gemini and Gemini Advanced subscribers, though Audio Overview is currently only available in English, with more language support planned, the company said in a blog post on Tuesday. AI competitors like Anthropic and OpenAI have similar features -- called Artifacts and Canvas, respectively. The naming conventions for these features have gotten competitive as well. Google launched a feature called Deep Research, where AI conducts research on a topic for you, in December. Then, OpenAI released a similar feature with the same name in February. In October, OpenAI released a feature for writing and coding projects called Canvas. Now, Google is releasing the same feature under the same name too.
[13]
Google's Gemini Deep Research is now available to everyone
After being one of the first companies to roll out a Deep Research feature at the end of last year, Google is now making that same tool available to everyone. Starting today, Gemini users can try Deep Research for free in more than 45 languages -- no Gemini Advanced subscription necessary. For the uninitiated, Deep Research allows you to ask Gemini to create comprehensive but easy-to-read reports on complex topics. Compared to, say, Google's new AI Mode, Deep Research works more slowly than your typical chatbot, and that's by design. Gemini will first create a research plan before it begins searching the web for information that may be relevant to your prompt. When Google first announced Deep Research, it was powered by the company's powerful but expensive Gemini 1.5 Pro model. With today's expansion, Google has upgraded Deep Research to run on its new Gemini 2.0 Flash Thinking Experimental model -- a mouthful of a name that just means it's a chain-of-thought system that can break down problems into a series of intermediate steps. "This enhances Gemini's capabilities across all research stages -- from planning and searching to reasoning, analyzing and reporting -- creating higher-quality, multi-page reports that are more detailed and insightful," Google says of the upgrade. If Deep Research sounds familiar, it's because a variety of chatbots now offer the feature, including ChatGPT. Google, however, has been ahead of the curve. Not only was it one of the first to offer the tool, but it's now also making it widely available to all of its users ahead of competitors like OpenAI. Separately, Google announced today the rollout of a new experimental feature it calls Gemini with personalization. The same Flash Thinking model that is allowing the company to bring Deep Research to more people will also allow Gemini to inform its responses with information from the Google apps and services you use. "With your permission, Gemini can now tailor its responses based on your past searches, saving you time and delivering more precise answers," says Google. In the coming months, Gemini will be able to pull context from additional Google services, including Photos and YouTube. "This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you." To enable the feature, select "Personalization (experimental)" from the model drop-down menu in the Gemini Apps interface. Google explains Gemini will only leverage your Search history when it determines that information may be useful. A banner with a link will allow you to easily turn off the feature if you find it invasive. Gemini and Gemini Advanced users can begin using this feature on the web starting today, with mobile availability to follow.
[14]
Google allows users to personalize their Gemini conversations with new features
Google is allowing users to personalize its Gemini chatbot with new features, which the company rolled out Thursday in experimental mode. Gemini can now reference users' Google Search histories to understand them better and give more relevant recommendations, the company wrote in a blog post Thursday. It's an opt-in feature that allows the company to use search history data within its conversational AI. The company also said users can now connect their respective apps to Gemini, including Calendar, Notes, Tasks and Photos. Google will also make Gems, its custom AI helper for tasks, more broadly available to all users. Gems "lets you customize Gemini to create your own personal AI expert on any topic," the company wrote in the post. The latest features come as Google executives try to "close the gap" and establish leadership in an increasingly competitive AI industry. DeepMind co-founder Demis Hassabis told internal teams in December he wants the company to "turbocharge" the Gemini app this year and that scaling Gemini on the consumer side will be "our biggest focus next year," CNBC reported. Google on Wednesday launched its open-source Gemma 3 models, intended for use by developers creating AI applications and with the capability to analyze text, images and short videos. The company called it "the world's best single-accelerator model" that can run on one GPU. Hassabis has talked about the competition with Chinese AI startup DeepSeek, telling employees at a February all-hands meeting that the reported cost of DeepSeek's AI training was likely "only a tiny fraction" of the total cost of developing its systems. He also said DeepSeek probably used a lot more hardware than it let on and that it relied on Western AI models. Google on Wednesday also debuted two new AI models, Gemini Robotics and Gemini Robotics-ER (extended reasoning). They both run on Gemini 2.0, which Google calls its "most capable" AI to date.
[15]
Google's Gemini can now search your browser history to offer personalized responses
A hot potato: In a move that makes you wonder if Google paid any attention at all to Microsoft's Recall controversy, the company has announced an update to the Gemini 2.0 model that includes the ability to browse your Search history to offer more personalized responses. Google's experimental Gemini 2.0 Flash Thinking model allows Gemini to connect with various apps and services, including Search. The idea is that it will be able to give a much more personalized, tailored response to user queries based on what you looked at online. Google gave examples of the kinds of questions where this personalization feature would be helpful: where to go on holiday this summer, content ideas for a YouTube channel, and suggestions for a new hobby or job. By looking at a user's Search history, Gemini was able to answer the vacation question with, "Considering your recent searches for places like Hawaii and the Maldives, you seem to enjoy tropical destinations. You also looked into family-friendly trips to Chicago, Seattle, and Kyoto, suggesting an interest in city and international travel with your family. Your searches for Yosemite and Antelope Canyon point towards an appreciation for nature and unique landscapes." There are obviously going to be plenty of privacy concerns over this feature - nobody wants Gemini to suggest therapy as a new hobby based on their Search history. Google says Gemini will ask for permission before connecting to your Search history or any other apps. Gemini also displays a banner with a link to easily disconnect your Search history. Furthermore, Gemini will only use this feature when you select the AI model with personalization, give it permission to connect to Search, and have Web & App Activity turned on. However, even the fact that users have to opt in to this feature is unlikely to make people less hostile towards it. Elsewhere, Google says it is rolling out Gems, custom chatbots that allow users to create their own AI expert on any topic, for everyone at no cost in the Gemini app. Deep Research, which creates detailed reports on queries, is also being made free to use for everyone. Google's browser history search feature brings to mind Microsoft Recall, though it's admittedly not as invasive. Recall, you might remember, was blasted by pretty much everybody for taking screenshots of the Windows desktop every few seconds, using an on-device large language model to scan, store, and process information. Microsoft said there was a filter that stopped Recall from capturing sensitive information, but it didn't really work. The fact that Recall was initially enabled by default made a bad situation worse. Microsoft postponed the rollout of Recall, and it remains in preview for Copilot+ PCs through the Windows Insider Program.
[16]
Gemini can now peek at your Google Search history to personalize responses
Google reassures users that the feature is optional and can be disabled at any time. It requires explicit permission, and Search history is only accessed when required. Google is rolling out several updates to Gemini today, with the most notable being the introduction of "Gemini with personalization." This marks another step in Google's shift away from Assistant, reinforcing Gemini as your new "personal AI assistant." With this update, Gemini is gaining deeper integration with Google apps and services, starting with Google Search. Powered by Google's experimental Gemini 2.0 Flash Thinking model, the feature allows Gemini to tailor its responses based on your Google Search history, but only if you choose to grant access. For example, if you ask Gemini for restaurant recommendations, it can refer to your recent food-related searches. If you seek travel advice, Gemini will consider destinations you've looked up before, making its responses more relevant to you. According to Google, early testers have found Gemini with personalization helpful for brainstorming ideas and receiving customized recommendations. Of course, giving an AI model access to your Google Search data might be disconcerting and raises privacy concerns. However, Google emphasizes that you will remain in control of your data at all times. You can easily disconnect Gemini from your Search history whenever you choose. When the feature is active, a clear banner will appear, providing a direct link to disable it. Gemini will also always ask for permission before accessing your Search history or other apps. To use the feature, you must also explicitly grant permission and have Web & App Activity enabled. Google assures users that Search history will only be referenced when its advanced reasoning models determine it to be genuinely beneficial.
[17]
Gemini Will Mine Your Google Search History to Know Your True Self
This seemed inevitable. You can't have a personal assistant if they don't have access to everything you do. Google would like for you to use Gemini more often. Today, the company announced that you can opt to have Gemini read your search history when prompting it for help. Google says the upside of this is that it'll give you more personalized results when you interact with Gemini. "This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you," writes Google. I sense people are going to feel scandalized by this news. Why wouldn't they? The idea that a computer reads all your emails and search queries to determine how to serve you best sounds dystopian, like nanny technology. However, consider this no different from Gmail, which has been serving ads based on your inbox content for over 20 years. Once folks catch on to this utility, it won't feel like an invasion of privacy. At the very least, it's not serving you more ads! Google says it will only use your Search history when its advanced reasoning models "determine that it's actually helpful." Prompts you can ask include "Where should I go on vacation?" and "I want to start a YouTube channel but need content ideas." You can even ask Gemini what to pursue as a hobby. Then, when you're over it, you can disconnect your search history from being linked to Gemini and continue unbothered. If you want to turn it on to try it yourself, you can enable it from the Gemini app in the browser. Just make sure you're a Gemini or Gemini Advanced subscriber. I asked Gemini to help me develop content for a YouTube channel; it was spot on. Gemini suggested that I make content about smartphones and Android devices, naturally. However, it also mentioned Tamagotchi and the retro tech I've been buying recently on eBay -- I'm researching reviving old cell phones from the 2000s. Gemini even suggested I do content on parenting and mental health; that one felt a little intense. Gemini put it like this: "Your searches related to perimenopause, anxiety, and gentle parenting suggest an interest in mental health topics. You could share your experiences or discuss helpful resources." If you do end up turning on this personalization feature -- I will keep it on for a while because I signed up for the duty to be your faithful guinea pig for most things Google -- you'll want to keep any search you never want AI to bring up away from Google entirely. If you're on your phone or desktop, I highly suggest a search engine like DuckDuckGo for those spicy searches. In addition to search queries becoming part of the Gemini experience, Google is rolling out Gems for everyone today, even users who don't subscribe to Gemini Advanced. Deep Research has also been made free to the public after being locked off to paid users. The Gemini 2.0 Flash Thinking Experimental model has also been added to Gemini to help supercharge it. Gemini will improve at handling cross-app requests between Google's different services, like Calendar and Photos.
[18]
Audio Overviews bring one of Google's most impressive AI tools to Gemini
Now you can generate the same kind of Audio Overviews straight from Gemini. Of all the neat tricks that modern AI is capable of, one of its most useful, and arguably least controversial, has been its ability to concisely summarize documents: taking one big pile of text, and shrinking it down into a much more manageable parcel. And while that's cool enough with the written word, last year we saw Google really step up to take things to the next level, introducing NotebookLM, a tool for generating podcast-like audio summaries. For as powerful as it can be, though, not enough people use NotebookLM -- which is one of the reasons we're very excited to learn that Google is now bringing NotebookLM-style Audio Overviews to Gemini itself.
[19]
Another useful Gemini Advanced feature is now available for free
Google offers several premade Gems to help you get started, or you can create your own in the Gems manager within Gemini on desktop. Google introduced a new Gemini feature dubbed Gems in August last year, allowing users to create custom versions of the chatbot tuned for specific use cases. So far, the ability to create custom Gems has been limited to Gemini Advanced subscribers. However, Google is now bringing the feature to users on the free tier. Gemini Gems are essentially AI experts that can help you with a predefined topic. Google offers a couple of premade Gems, like a chess champ, brainstormer, career guide, and coding partner, that you can use to get a feel for the feature before setting up your own. You can even customize the premade Gems to align with your goals, and many Gemini Advanced subscribers have found the feature to be incredibly useful. Now that the feature is available on the free tier, you can try it out by heading to the Gem manager in Gemini for desktop. You can select one of the premade Gems or set up your own using the New Gem button and customize it to your needs. You can even upload files while creating a custom Gem, which will serve as references to help the chatbot offer more relevant responses. In addition to extending Gems to Gemini users on the free tier, Google has also removed the subscription requirement for the Deep Research tool. The company has also debuted a new model that can access your Google Search history to offer better responses, and a new Google Photos integration.
[20]
Google Gemini Just Got Another Big Upgrade
Summary: Gemini has a new personalization feature based on your search history, enhancing responses. Google will also expand Gemini's personalization capabilities with the integration of other services in the future. The Gemini 2.0 Flash Thinking model, with improved functionality, is now available to all users. Gemini, Google's very own AI chatbot, has received a slew of improvements recently. And it's only getting better. Google has just announced a bunch of new stuff for Gemini -- some of it for advanced users only, though you'll also find additions for free users. Gemini has just launched its latest batch of updates, of which the biggest is perhaps a new "Gemini with personalization" model. This new model is basically a further sweetened version of Gemini's 2.0 models that allows the chatbot to analyze user search history to refine its responses. You can activate this feature by selecting "Personalization (experimental)" from the model drop-down menu within the Gemini app. Upon activation, Gemini will assess user prompts and determine if incorporating search history can enhance the response. According to Google, some of its early testing suggests that the personalization feature is beneficial for brainstorming and generating personalized recommendations, and the company intends to keep gathering user feedback to further refine the feature's usefulness. Examples of prompts that would benefit from this new personalization feature include inquiries about vacation destinations, YouTube content ideas, and hobby or career suggestions. If you ask Gemini about potential vacation destinations, and you've recently searched how much plane tickets to Puerto Rico cost, it could suggest Puerto Rico or other Caribbean destinations among its options. The gist of it is that Google has a rough idea of your likes and dislikes based on your searches and how you've interacted with the results, so it can use some of that info to write more helpful responses. Of course, it won't be good for everything -- after all, we search for tons of stuff, not just stuff we like. That's why it's a separate model and not integrated with the main Gemini 2.0 models. But it has pretty cool potential nonetheless. The company also plans to expand Gemini's personalization capabilities in the coming months by integrating with other Google services, such as Photos and YouTube. These aren't the only changes being announced. Google also says that the 2.0 Flash Thinking model, currently in an experimental phase, is now available to try for all users, with additions such as file uploading as well as better performance and reasoning capabilities. This is the model that can do multi-step reasoning, like ChatGPT's o models, to try to come up with more accurate answers even if they might take a tad longer. There's also a version of this model that can interact with apps like YouTube and Maps. And if you're a Gemini Advanced user who has been using this for a while, you will now have access to a 1-million-token context window. A larger context window allows the AI to process and understand significantly larger amounts of information at once, leading to more nuanced and accurate results, especially for complex research tasks. Finally, the "Gems" feature, which allows you to customize Gemini into a personalized AI expert on specific topics, is also being made available to all users at no cost. This feature was previously restricted to Gemini Advanced users.
Gems are Google's equivalent to ChatGPT's GPTs, and they enable users to tailor the AI's responses and expertise to their particular needs by "pre-training" a version of the model so it responds to prompts in a specific way. These changes should now be live, but they might take a while to land for some people, so don't be surprised if you don't see them right now. Source: Google (1, 2)
[21]
Gemini's new history-crawling model is rolling out to all with Deep Research in tow
Summary: Gemini is expanding access to its advanced features, including Flash Thinking and Deep Research, to free users, while also offering an expanded context window for paying users. A new Personalization model has been launched, leveraging user search history to tailor responses, with plans to integrate more Google services like YouTube and Google Photos. Gemini's Deep Research is now powered by Flash Thinking, providing greater transparency with detailed reasoning, and app integrations like Calendar and Maps will soon utilize Flash Thinking for more complex multi-app actions. Google today expanded the availability of several Gemini features that were previously locked behind an Advanced subscription, all while rolling out a new model that had previously only been spotted in passing. The updates, including the previously leaked Personalization model, are rolling out now. For me, they've begun appearing on the web version of Gemini. Availability on the mobile app often lags behind by a few days, so keep checking regularly. For starters, Google's Gemini 2.0 Flash Thinking Experimental model, which was first unveiled in early February, now supports file uploads. Additionally, even though the model is available to non-paying users, Google is rolling out an expanded 1-million-token context window for users who pay for Gemini Advanced. For those unaware, Flash Thinking allows users to gain a deeper understanding of Gemini's thought process and reasoning, complete with all the steps the AI model takes to respond to your query. While not beneficial for all users, Flash Thinking does offer a greater level of transparency than other models. Elsewhere, Gemini's Deep Research model, which first started rolling out in December 2024, is now being upgraded with Gemini 2.0 Flash Thinking Experimental at its core. For reference, in its initial implementation, Deep Research helped users...well... deeply research the topic at hand. It goes through several reliable websites to formulate an answer to your query. Now, with Gemini 2.0 Flash Thinking powering the model, Deep Research does all of the same while also laying out its thought process and reasoning. Non-paying users can use Deep Research for five detailed reports per month.
Multi-app Gemini requests are coming soon
Late last week, some users in beta started gaining access to the then-unannounced Personalization model. Google has made the new model official today, and as its name suggests, it can answer queries in a manner that is tailored to your needs. The new model does this by looking into your search history -- only if you give it prior permission, though. The model is powered by Gemini 2.0 Flash Thinking Experimental, and at first, it will only be able to access your search history. Down the line, Google aims to extend the integration to other services like YouTube and Google Photos. "Your Search history can help Gemini Apps understand you and your interests, and give you more personalized and helpful responses," reads the model's description. Further, Google Photos will soon be an integrated Gemini app, which means you'll be able to directly probe it about the media in your gallery.
Existing app integrations like Calendar, Notes, Maps, Tasks, and more, on the other hand, will soon be powered by Gemini 2.0 Flash Thinking Experimental, which should allow users to request actions that involve multiple apps. "Look up an easy cookie recipe on YouTube, add the ingredients to my shopping list and find me grocery stores that are still open nearby," for example.
[22]
Gemini's New Canvas Feature Is the Ultimate Writing/Coding Buddy
Summary: Gemini's Canvas adds an interactive workspace to the Gemini chatbot, making it easier to edit code snippets and preview them. Canvas helps improve writing by offering tone, length, or formatting suggestions and allows easy collaboration and export to Google Docs. Google also introduces Audio Overviews, where AI hosts summarize content and present unique perspectives in audio format.

A lot of people have turned to using chatbots to help them code. While you probably don't want a whole program written by AI, it can help you come up with solutions for specific snippets so your code is more efficient and functional. If you're going to do that, though, you might as well do it with Gemini's Canvas -- it will make it way easier to check whether that code actually works.

Google's Gemini chatbot is getting a couple of really cool additions. The most prominent change we have here is Canvas, a new interactive workspace integrated directly within Gemini. It's an editable window that pops up right alongside the chat interface, where Gemini will put either its text output or its code output and let you tweak it. And it has tons of editing options.

If you're going to use it for documents, you can either tell it to come up with an initial draft, or you can give Gemini what you have already written so it can help you tweak it. Gemini can analyze highlighted sections of text and offer suggestions to alter the tone (making it more concise, professional, or informal), adjust the length, or modify the formatting. Since it's all being output to an editable window, fixing mistakes made by AI is as simple as just rewriting bits yourself (or highlighting those bits so the AI can fix them). And when you do have something that looks good, you can export it to Google Docs.

Canvas not only provides an editable window where it outputs any code it makes, similar to how it does with documents, but it also has a Preview tab that lets you actually check whether that code works. Your code can be run right on the Preview tab to see how it will appear and function in a real-world context, letting you see how something works without having to deploy it or use an IDE. For instance, a user can ask Gemini to generate the HTML for an email subscription form and then instantly preview how it looks (a rough, hypothetical sketch of what such generated markup might look like appears below). Further changes, such as adding input fields or call-to-action buttons, can be requested and previewed in real time as well.

As tends to be the case, it can sometimes output pretty broken code. For instance, I tried to push it to the limit by getting it to code things like simple HTML/JavaScript games, and while they looked pretty coherent, they were also very broken -- it wrote a Super Mario Bros-style platformer game where the character couldn't jump. This is an edge case, of course, but the cool thing about Canvas is that you can test whether parts of your code are broken and promptly fix them if necessary, either by debugging them yourself or by highlighting specific code sections and asking the AI for help. For the early preview we used, Canvas could do HTML, CSS, JavaScript, and React, but Google tells us it will also be compatible with most of the code Gemini is capable of writing, including Python. Canvas will be desktop-only at launch, but Google says a mobile experience should be out by the end of this month as well.
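To ground that email-form example, here is a minimal, hypothetical sketch of the kind of self-contained markup Canvas might hand back -- the field names, text, and script are illustrative assumptions, not Google's actual output:

<!-- Hypothetical sketch of a generated email subscription form;
     all names and behavior are illustrative. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Subscribe</title>
</head>
<body>
  <form id="subscribe-form">
    <label for="email">Email address</label>
    <input type="email" id="email" name="email" required
           placeholder="you@example.com">
    <button type="submit">Subscribe</button>
  </form>
  <p id="status" hidden>Thanks for subscribing!</p>
  <script>
    // Show a confirmation message instead of submitting anywhere,
    // so the snippet works entirely inside a preview pane.
    document.getElementById('subscribe-form').addEventListener('submit', (e) => {
      e.preventDefault();
      document.getElementById('status').hidden = false;
    });
  </script>
</body>
</html>

Because a snippet like this is self-contained, a preview pane can render it directly, and follow-up prompts along the lines of "add a name field" or "make the button blue" would simply patch the same document.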
Canvas is also only available for the Gemini 2.0 Flash model at this time, though Google says it should eventually be available for all models, including the Thinking and Deep Research models -- no specific timeline for that, though.

In addition to Canvas, Google is also introducing "Audio Overviews," a feature initially seen in NotebookLM and now available in Gemini. Audio Overviews work by creating a virtual discussion between two AI hosts. These hosts analyze uploaded files such as documents, slides, or even Deep Research reports, and engage in conversation about the content. They summarize key points, draw connections between different topics, and offer unique perspectives. It's a bit of a silly concept, but a lot of people like listening to podcasts and learn quite well that way, so this applies a similar format to learning in general. You can feed it class notes, research papers, lengthy email threads, or reports and receive a summarized audio version that you can listen to on the go. As long as it digests the content properly (again, always double-check everything an AI gives you, as it can sometimes hallucinate), it's a pretty cool tool for studying.

Both Canvas and Audio Overviews are rolling out from today for both free and paid users. Canvas is available for users in all languages, while Audio Overviews will, at first, only be available in English. Source: Google
[23]
NotebookLM's Audio Overviews find a new home within Gemini
Summary: NotebookLM's Audio Overview feature, which creates podcast-style audio discussions from research, is now integrated directly into Google Gemini on web and mobile apps, making it more accessible for auditory learners. Audio Overview in Gemini is powered by the free-to-use Deep Research model, allowing all users to generate audio summaries, though non-paying users are limited to five Deep Research reports (and thus audio overviews) per month. In addition to Audio Overview, Gemini also gained a new feature called Canvas, an interactive workspace for real-time document editing and code prototyping with AI assistance.

Google's Gemini-powered NotebookLM is a great learning tool for auditory learners who understand topics and themes better when heard rather than simply read. The tool, which is already easily accessible via notebooklm.google.com, is now even easier to access, thanks to it finally being directly integrated within Gemini on both the mobile apps and the web. The development comes soon after Google announced that it was expanding access to several Gemini Advanced features for free users, including support for Gemini Flash Thinking, Deep Research, and Personalization.

The new tool, aptly named Audio Overviews, functions similarly to its implementation in NotebookLM, albeit with a few modifications. For starters, Audio Overviews are (at least currently) only available via the Gemini Deep Research model. The model is free to use for all, which means you will be able to generate Audio Overviews for topics that you dig into without having a Gemini Advanced subscription. It's worth noting, though, that non-paying users can only use Deep Research for five detailed reports per month, which, in turn, also limits the number of Audio Overviews users can generate.

You can begin playing around with Gemini Audio Overviews today

To generate a podcast-like spoken discussion of a topic, head to Gemini and select the Deep Research model from the model selector. Type in your query and hit 'Start research.' This step should take a few minutes to complete. Once completed, you'll be presented with a comprehensive overview of your research topic, complete with the option to pose follow-up questions. You'll also see an option to Export to Docs, though we're more interested in the arrow adjacent to it. Tap/click the arrow, and you'll be presented with the option to Generate an Audio Overview. Once generated, you can listen to the overview within Gemini, or download it for later.

Audio Overviews within Gemini are rolling out now to users globally, albeit only in English. The feature is available to me on the web, but as is often the case, support for the mobile app seems to be lagging behind.

Elsewhere, with the same 'feature drop,' Gemini today also gained Canvas, a tool that Google describes as an "interactive space within Gemini designed to make creating, refining and sharing your work easy." The tool can help with real-time document editing and code prototyping, offering feedback and suggesting edits for your drafts, complete with editing tools to "adjust the tone, length or formatting" of your written content. Similarly, Canvas can also help users learn to code by bringing their ideas to life.
"Canvas empowers developers to quickly create initial working versions of their projects and provides a space for students to quickly learn coding concepts," wrote Google. Canvas for coding and real-time document editing is rolling out to all users in all languages supported by Gemini Apps.
[24]
Gemini app rolling out writing & coding 'Canvas,' Audio Overview podcasts
Following last week's model updates, Google is rolling out Audio Overviews and a new Canvas tool to the Gemini app.

As popularized by NotebookLM and Daily Listen, Audio Overviews are coming to the Gemini app. When you upload documents or slides, there will be a new suggestion chip above the Ask Gemini prompt bar. You can also generate a podcast-style discussion between two AI hosts from a Deep Research report. Gemini will "summarize the material, draw connections between topics, engage in a dynamic back-and-forth and provide unique perspectives." Google reminds users today that Audio Overviews "reflect the sources you upload or are generated in Deep Research," and are "not comprehensive or objective views of a topic." Audio Overviews are available in the Gemini app and gemini.google.com, with the ability to share and download them. They are rolling out starting today to free Gemini and Advanced users around the world "in English, with more languages coming soon."

Meanwhile, Gemini is also adding a "Canvas" feature that gives you an "interactive space to create and edit" text documents, with changes appearing in real time. On desktop, a new "Canvas" button will appear in the prompt bar alongside Deep Research. Upon entering a prompt, you'll switch to a dual-pane UI on desktop, with the chat appearing at the left and the Canvas next to it. You can generate "high-quality first drafts" that can be edited by highlighting a particular section and entering another prompt or using on-screen controls. Canvas can be used as a basic text editor, while Google touts use cases like speeches, essays, blog posts, and reports. Once done, it can be easily exported to Google Docs.

You can also use Canvas for coding, with Gemini able to "generate and preview your HTML/React code and other web app prototypes to see a visual representation of your design." These code previews are available on the Gemini web app. For example, say you want to create an email subscription form for your website. You can ask Gemini to generate the HTML for the form and then preview how it will appear and function within your web app. Easily request changes to input fields or add call-to-action buttons, instantly see the updated preview, and then share your creations with others. As you ask Gemini for changes, the preview will be updated. Google frames this as "creating and editing your code and design[ing] in one place, without the hassle of switching between multiple applications." Meanwhile, you can share those live previews via URLs.

Canvas is also rolling out globally starting today for Gemini and Gemini Advanced subscribers in all languages. In the coming weeks, Canvas will also be available on mobile.
[25]
Google is giving away Gemini's best paid features for free -- here are the tools you can try now
Google is taking Gemini to the next level and giving users more for free, as the company announced new upgrades today (March 13) aimed at making Gemini even more useful and personal. Upgrades include tailoring the chatbot's responses based on users' search history, enhancing Deep Research, and improving its connectivity with Google's suite of apps.

Gemini is getting more personal, which is the most significant update. New personalized responses, powered by Gemini 2.0 Flash Thinking (experimental), let Gemini reference your Google Search history to deliver more relevant answers. Don't worry, it asks for your permission before diving in. Giving Gemini access to your search history lets the chatbot tailor its suggestions based on what it knows about your interests. You activate this feature in the Gemini app by opting into "Personalization (experimental)." This option is not permanent. If you give it a try and change your mind, you can disconnect it.

First introduced in December, Deep Research was designed to help users analyze information, synthesize reports, and explore topics more easily and with greater speed. Today, Google is rolling out Deep Research powered by Gemini 2.0 Flash Thinking, enabling users to generate detailed multi-page reports on virtually any topic in minutes. This upgrade improves Gemini's ability to plan, search, reason, analyze, and report -- essentially becoming a full-fledged AI research assistant. Users can see how Gemini "thinks" as it browses the web, providing greater transparency into how it gathers and interprets information. Best of all, Gemini Deep Research is now free for all users, with Gemini Advanced subscribers getting extended capabilities.

The 2.0 Flash Thinking (experimental) model, which enhances Gemini's reasoning, efficiency, and speed, now includes a 1 million token context window for Gemini Advanced users. In other words, Gemini can process and analyze much larger datasets, making it even more capable for complex problem-solving and long-form content generation. Additionally, Gemini can now handle file uploads, allowing users to interact with documents and retrieve insights more effectively.

Google is also expanding Gemini's reach within its own ecosystem. The AI assistant can now connect with Calendar, Notes, Tasks, and Photos, enabling users to make complex multi-app requests in a single prompt. For example, you can ask Gemini: "Look up an easy cookie recipe on YouTube, add the ingredients to my shopping list, and find grocery stores that are still open nearby." In the coming weeks, Google Photos will be added to this integration, making it easier to recall past events or organize personal memories through Gemini.

Another big win for users is that Gems, the custom AI assistants that specialize in specific tasks, are now rolling out for free. Whether you need a meal planner, language tutor, or workout coach, Gems let you create a personalized AI assistant tailored to your needs. Users can even upload files when creating Gems, giving them even more context and functionality.

With these personalization features, Google emphasizes that privacy and user control are built in. Gemini will only use your Search history when you've explicitly enabled it and will always display a clear banner when personalization is active. Users can manage and delete their data at any time. These upgrades are rolling out starting today and will be available to users worldwide in over 40 languages.
Gemini's personalization feature is available on the web and will gradually expand to mobile.
[26]
Google Gemini Can Now Use Your Search History to Provide Personalized Responses
Google's Gemini AI product is now able to absorb a user's search history in order to provide more personalized information, Google announced today. The Gemini 2.0 Flash Thinking model is able to connect to Google apps and services to tailor responses based on past searches. Google says that this feature will save time and will provide users with "more precise answers." For now, Gemini is only able to read search history, but in the future, it will be able to connect with other Google apps and services. Gemini's access to search history is opt-in, and it is experimental at this time. In the Gemini app, users can use the model menu to select "Personalization" to connect their Google search history. When making a request, Gemini will then analyze search history to see if it is able to enhance a response. Google says that search history will only be used when its reasoning models decide that it's helpful, and that early testers have found the feature useful for brainstorming and personalized recommendations. Gemini with personalization is available to Gemini and Gemini Advanced subscribers on the web, and will be rolling out on mobile soon.
[27]
Gemini just became the ultimate collaborator -- everything you need to know about this huge new upgrade
Google's Gemini is getting another big upgrade, this time with a focus on collaboration, coding, and content creation. Starting today, Gemini users will get access to Canvas, an interactive workspace for refining documents and code, and Audio Overview, a feature that transforms everything from single files to Deep Research reports into podcast-style discussions. Here's a closer look at what these updates bring to the table.

Canvas for real-time editing and coding

Gemini Canvas is an interactive tool designed to streamline writing and coding. This tool allows users to draft, refine, and edit documents or code in real time. With this feature, Gemini essentially becomes an AI-powered collaborator to help bring stories and projects to life.

What you can do with Canvas:

Instant drafts & edits. Whether you're writing a blog post, speech, or business report, Canvas allows you to quickly generate a high-quality first draft and make instant refinements with Gemini's suggestions.

Custom formatting & tone adjustments. Need to make a paragraph more concise, professional, or informal? Simply highlight it and tweak the tone or length as needed.

Seamless export to Google Docs. Once you're happy with your content, export it with one click to Google Docs for further collaboration or sharing.

Coding in Canvas. Canvas is also a powerful tool for developers. The feature simplifies coding tasks such as:

Code generation and debugging. Whether you're creating a Python script, web app, or game, Gemini assists by generating and troubleshooting code within Canvas.

Live previews for web apps. If you're working with HTML or React, you can now preview your designs right within Gemini. For instance, you can build an email subscription form, tweak its elements, and instantly see how it looks.

Iterative collaboration. Need to modify input fields, buttons, or functionality? Make changes on the fly with Gemini's interactive suggestions.

Audio Overview for audible learning

For those who prefer to absorb information by listening rather than reading, meet Audio Overview. This feature turns your uploaded documents, slides, and research reports into engaging AI-generated conversations. Similar to the Audio Overview feature in NotebookLM, this feature is available in the Gemini app and on the web platform.

How it works:

AI hosts. Gemini creates a podcast-style discussion between two AI voices that summarize your documents, highlight key insights, and engage in back-and-forth analysis.

Enhanced learning & productivity. Whether you need to digest class notes, research papers, or lengthy email threads, Audio Overview helps break down complex information into digestible conversations.

On-the-go accessibility. Listen via the Gemini mobile app or web and download audio files for offline playback.

Gemini's expanding role as an AI collaborator

These new tools mark a significant step forward for Google's AI ecosystem, positioning Gemini as a productivity partner and useful collaborator. Whether writing, coding, or absorbing information, Gemini now offers more ways to assist in real time. Canvas and Audio Overview are rolling out globally today for Gemini and Gemini Advanced subscribers in all languages where Gemini Apps are available.
[28]
Gemini Deep Research just got even smarter and it's now free for everyone to try - here's why you should give it a go
Gemini Deep Research is now free to try, with Advanced users getting expanded access.

Google has today announced that its fantastic Gemini Deep Research tool is getting even smarter with a Gemini 2.0 Flash Thinking Experimental upgrade. Not only is Gemini's research tool going to be even better, but it's also becoming completely free to try, allowing users worldwide to tap into the smarter AI analysis software. Google says the upgrade "enhances Gemini's capabilities across all research stages -- from planning and searching to reasoning, analyzing and reporting -- creating higher-quality, multi-page reports that are more detailed and insightful." You'll also see Gemini show its thoughts while it browses the web, allowing you to monitor exactly what is being researched in real time.

Gemini users can try Deep Research for free, and Gemini Advanced users get expanded access to use the tool more frequently. The AI research agent is similar to ChatGPT's tool of the same name, and since launching in December it has proven to be one of the best AI agents on the market. We've thoroughly tested Deep Research since it launched a few months ago, and it has been seriously impressive to see the evolution of AI models capable of doing research without human supervision. Gemini's Deep Research has impressed thanks to its ease of use and is great at casual queries like buying-guide advice, such as "Find me the best affordable running shoes that aren't ugly, and explain why they're good."

ChatGPT's Deep Research is currently only available to Plus subscribers who pay $20 a month, so Google now offering Gemini's version for free is a huge win for the tech giant. You really don't have anything to lose; give Gemini Deep Research a try and see what the fuss is about. After all, it's now completely free.

The upgrade to Deep Research isn't the only new addition coming to Gemini today. Google has also announced that the AI chatbot will now have access to your Search history, should you choose to share it. All of these new features are rolling out from today, so keep an eye on your Gemini client and take advantage of all the newly announced features.
[29]
Google just gave Gemini a superpower by allowing it to access your Search history - here's why I'm excited and also a little terrified
Gemini can now access your Search history, which might be the chatbot's biggest advantage, completely overhauling Google's AI experience. Launching today as an experimental feature on the web and gradually rolling out on mobile, the new update, powered by the Gemini 2.0 Flash Thinking model, gives a whole new meaning to personal context. Gemini can now easily access your Google Search history and use that information to provide even better results than before.

Google showed multiple examples of the new feature in action, such as one prompt that asked Gemini, "Where should I go on vacation this summer?" to which the AI used the user's Search history and responded, "Considering your recent searches for places like Hawaii and the Maldives, you seem to enjoy tropical destinations. You also looked into family-friendly trips to Chicago, Seattle, and Kyoto, suggesting an interest in city and international travel with your family. Your searches for Yosemite and Antelope Canyon point towards an appreciation for nature and unique landscapes." Gemini then gave a full breakdown of vacation destinations that may be suitable.

Google is the world's most used search engine, and tapping into its user awareness and knowledge of how people use the web is an absolute game-changer for Gemini. No other AI chatbot has access to Google's extensive user data, and if it can incorporate Search queries efficiently into Gemini to personalize the AI experience, then this could be an absolute winner. Google says, "Gemini with personalization will be able to use your Google apps, starting with your Search history, to deliver contextually relevant responses that are adapted to your individual interests."

Now, as one of TechRadar's resident AI experts, this fills me with glee: the ability to get even better Gemini results and get even closer to the AI I've always dreamed of, a true personal assistant in my pocket. That said, I'm not naive, and I know that reading the headline of this article might strike fear into the average consumer. After all, we don't want AI to know even more about our lives, right?

Google knows the idea of incorporating your Search history into AI is going to set off some alarm bells, so the company has made it very easy to disconnect your history at any time, and there's a clear notice asking for permission before connecting your information to the chatbot. Gemini will also only access your information when you select the AI model that includes personalization, giving users an easy way to switch off Search history access whenever they choose.

I'm incredibly excited about Search history being incorporated into Gemini, and I think it gives Google's AI a real selling point over its competitors. I've also come to terms with the fact that the perfect AI personal assistant I crave requires more and more of my data; while I understand the fear of giving more of your data to companies, I accept that to achieve my dream I need to be more lenient about what I allow tech to access.

Make no mistake about it: this Gemini update is a massive deal, and it really could pave the way to a future where Google's offering is in a realm of its own as the personal-context king. While Apple delays Apple Intelligence-powered Siri, Google is flexing its muscles as the leader in smartphone AI, and adding Search history elevates Gemini even more.
[30]
Gemini 2.0 Flash Thinking now has memory and Google apps integration
A few months ago, Google added access to reasoning modes to its Gemini AI chatbot. Now, it's expanded the reach of Gemini 2.0 Flash Thinking Experimental to other features of the chat experience as it doubles down on context-filled responses. The company announced it's making Gemini more personal, connected, and helpful. It's also making its version of Deep Research, which searches the Internet for information, more widely available to Gemini users.

Deep Research will now be backed by Gemini 2.0 Flash Thinking Experimental. Google said in a blog post that, by adding the power of Flash Thinking, Deep Research can now give users "a real-time look into how it's going about solving your research tasks." The company said this combination will improve the quality of reports done through Deep Research by providing more details and insights. Before this update, Gemini 1.5 Pro powered Deep Research, and it was only available on the $20-a-month Google One AI Premium plan. However, VentureBeat's Carl Franzen found even this now less-powerful version to be a helpful research assistant.

A more personal Gemini

Gemini 2.0 Flash Thinking Experimental will also power a new capability called personalization. Personalization is precisely that: responses will be more tailored to the user by referencing previous conversations or searches. To enable this level of personalization, Gemini connects to users' Google apps and services, including Search and Photos. Google emphasized that it will use information from your Google apps only with permission.

"In the coming months, Gemini will expand its ability to understand you by connecting with other Google apps and services, including Photos and YouTube," Dave Citron, senior director, product management, Gemini app, said in a blog post. "This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you."

Since Gemini 2.0 Flash Thinking Experimental is built into the personalization feature, users can see an outline of which data sources the model is tapping to answer queries or to complete requests. Gemini Advanced users can toggle other preferences they want the chatbot to remember, such as instructing it to refer to past conversations or reminding it of dietary restrictions. This allows Gemini to offer more natural and relevant responses. Of course, Google is not the only company that recognizes the importance of personalized and relevant responses. In November, Anthropic launched its Styles feature, which allows people to customize how Claude speaks to them.

More connected apps

Because personalization requires access to more data about the user, you can think of it as RAG (retrieval-augmented generation), but scoped to a single Gemini user rather than an entire organization, with Google connecting more of its services to Gemini 2.0 Flash Thinking Experimental (a rough sketch of that retrieve-then-generate flow appears at the end of this article). The model can tap apps like Calendar, Notes, Tasks, and Photos. "With this thinking model, Gemini can better tackle complex requests like prompts that involve multiple apps, because the new model can better reason over the overall request, break it down into distinct steps, and assess its own progress as it goes," Citron said. Google said that in a couple of weeks, Gemini will be able to look at photos in Google Photos and answer questions based on users' images.
It can create travel itineraries based on pictures from recent trips, and recall information like the expiration date on a driver's license, or whether you happen to have taken a photo of milk in the store. Integrating applications to provide more context for chatbot responses has been a big trend for AI companies. In the enterprise space, this has translated to giving chatbots access to developer environments or emails. ChatGPT can connect to most IDEs, so developers can bring their code from VSCode and query ChatGPT about it. Google's coding helper, Code Assist, also connects to IDEs. Google's increasing app and service integration and its personalization of Gemini underscore the importance of context and data in making these chatbots more useful, even if the query is just asking for a restaurant recommendation.
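To make the RAG comparison above concrete, here is a minimal sketch of that retrieve-then-generate loop in plain JavaScript. Everything here -- the sample data, the keyword-matching heuristic, and the stubbed model call -- is an illustrative assumption, not Google's implementation:

// Hypothetical sketch of RAG-style personalization: retrieve user context,
// augment the prompt with it, then generate. All names are illustrative stubs.
const searchHistory = {
  alice: ['hiking trails near Lisbon', 'best espresso machines', 'Kyoto in autumn'],
};

// Retrieve: pick history entries loosely related to the prompt.
function fetchRelevantHistory(user, prompt) {
  const words = prompt.toLowerCase().split(/\W+/);
  return (searchHistory[user] || []).filter((entry) =>
    words.some((w) => w.length > 3 && entry.toLowerCase().includes(w)));
}

// Generate: stand-in for the model call; a real system would send the
// augmented prompt to an LLM instead of echoing it back.
function generateAnswer(augmentedPrompt) {
  return `Answer conditioned on:\n${augmentedPrompt}`;
}

function answerWithPersonalContext(prompt, user) {
  const context = fetchRelevantHistory(user, prompt);          // 1. retrieve
  const augmented =                                            // 2. augment
    `Known user interests: ${context.join('; ') || 'none found'}\n\nQuestion: ${prompt}`;
  return generateAnswer(augmented);                            // 3. generate
}

console.log(answerWithPersonalContext('Where should I travel in autumn?', 'alice'));

A real system would replace the keyword filter with semantic retrieval and the stub with an actual model call; the shape of the flow -- retrieve, augment, generate -- is the point of the analogy.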
[31]
Gemini just got a huge writing and coding upgrade - Google keeps making its AI better and ChatGPT should be worried
Google is today launching a new upgrade for Gemini called Canvas that allows you to refine documents and code straight from within its AI chatbot. Canvas is a 'new interactive space' that is 'designed to make creating, refining, and sharing work easy'. Think of Canvas as a writing tool akin to ChatGPT Canvas or Apple Intelligence Writing Tools, but built into Gemini with easy exporting to Google Docs. Canvas can generate written drafts, change the tone of voice, and suggest edits directly from within Gemini. The tool can also streamline the coding process by quickly 'transforming your coding ideas into working prototypes for web apps, Python scripts, games, simulations and other interactive apps.'

That might not sound like the most exciting AI upgrade for most of us, but it opens up even more possibilities with Gemini, which is only a good thing, and it comes not even a week on from Google's last major AI updates. Just last week, Google added Search history to Gemini, allowing users to get even more personalized AI responses based on how they've previously used Google Search. Additionally, Deep Research, Gemini's data analysis and reporting tool, was made free alongside Gems, a custom chatbot builder perfect for creating specific use cases, like a counseling bot.

Gemini updates are coming thick and fast, ChatGPT should be worried

Google continues to add huge Gemini upgrades almost weekly, with the AI chatbot quickly overtaking ChatGPT as my favorite. Last week's Deep Research upgrade to 2.0 Flash Thinking, which also included free access without a premium plan, is fantastic, and I've used Deep Research multiple times this week without paying a dime. It's an excellent tool for getting in-depth info, perfect for work or for the sports nerd like me who wants to know about the best fantasy football assets.

I don't use AI writing tools, so Canvas isn't that appealing to me, but I'm excited by the cadence of Gemini updates and how focused Google is on building the best AI chatbot possible. Last week's Search history upgrade could make Gemini the best AI tool on the market, and while it hasn't rolled out to me yet, I'm looking forward to seeing how it improves the Google AI experience.

Not only has Google announced Gemini Canvas today, but it's also upgrading Deep Research to add Audio Overview functionality from NotebookLM, allowing users to create podcasts from their research reports. While Google's Gemini updates might not always grab the headlines, the constant push to improve the AI tool is worth writing home about. Gemini is one of the best AI chatbots on the market, and it just keeps getting better.
[32]
Get ready for Audio Overview in Google Gemini - I've used it in NotebookLM and it's a complete game changer
Audio Overview is coming to Google's AI chatbot Gemini, and I think it will change the way we use it for good. You can use Audio Overview to turn documents, slides, and even Deep Research reports into easy-to-listen-to podcasts. The first time I tried Audio Overview, I was blown away by how good it was. The podcasts it creates are essentially 10-minute-long shows narrated by two AI hosts who talk about whatever subject you've fed them via Google documents, PDFs, or even YouTube videos. The point of Audio Overview is to speed up the learning process for students. So, instead of having to read all those books, or watch all those YouTube videos yourself, you can get AI to do it for you and then have it tell you all the important bits in a short information blast, as if you were listening to a podcast.

Getting in the mix

Audio Overview first appeared as part of Google's NotebookLM research tool. It was particularly favored by students who didn't like to read very much, but the technology for creating its AI podcasts worked way better than it had any right to and obviously had implications for projects far beyond the world of education. Rather than sounding like two boring AI robots discussing a subject academically, the podcast hosts sound as if they were two real humans talking about a subject they both really cared about, with a lot of dynamic back and forth. I quickly realized there was scope for creating podcasts about pretty much anything using Audio Overview, and I've been using it ever since. Now that we can use it with Deep Research reports, it will be even better.

Gemini integration

NotebookLM was already free to use, but having Audio Overviews integrated into Gemini just makes them easier to access. Audio Overview is starting to roll out today to Gemini and Gemini Advanced subscribers, globally in English, with more languages coming soon. To use them in Gemini, simply upload documents into the prompt bar and then choose Generate Audio Overview from the suggestion chip that pops up. Audio Overviews work in both the web and mobile app versions of Gemini. Go to gemini.google.com to see if they're available to you yet.
[33]
Google Gemini can now tap into your search history
Google has announced a wide range of upgrades for its Gemini assistant today. To start, the new Gemini 2.0 Flash Thinking Experimental model now allows file upload as an input, alongside getting a speed boost. The more notable update, however, is a new opt-in feature called Personalization. In a nutshell, when you put a query before Gemini, it takes a peek at your Google Search history and offers a tailored response. Down the road, Personalization will expand beyond Search. Google says Gemini will also tap into other ecosystem apps such as Photos and YouTube to offer more personalized responses. It's somewhat like Apple's delayed AI features for Siri, which even prompted the company to pull its ads.

Search history drives Gemini's answers

Starting with the Google Search integration, if you ask the AI assistant about a few nearby cafe recommendations, it will check whether you have previously searched for that information. If so, Gemini will try to include that information (and the names you came across) in its response. "This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you," says Google in a blog post.

The new Personalization feature is tied to the Gemini 2.0 Flash Thinking Experimental model and will be available to free users as well as paid Gemini Advanced subscribers. The rollout begins today, starting with the web version, and will soon reach the mobile client, too. Google says the Personalization facility currently supports more than 40 languages and will be expanded to users across the globe.

The feature certainly sounds like a privacy scare, but it's an opt-in facility with the following guardrails:

It will only work when users have connected Gemini with their Search history, enabled Personalization, and activated the Web & App Activity system.

When Personalization is active in Gemini, a banner in the chat window will let users quickly disconnect their Search history.

It will explicitly disclose the details of the user data, such as saved info, previous chats, or Search history, currently being used by Gemini.

To make the responses even more relevant, users can tell Gemini to reference their past chats, as well. This feature has been exclusive to Advanced subscribers so far, but it will be extended to free users worldwide in the coming weeks.

Integrating Gemini within more apps

Gemini has the ability to interact with other applications -- Google's as well as third-party ones -- using an "apps" system, previously known as extensions. It's a neat convenience, as it allows users to get work done across different apps without even launching them. Google is now bringing access to these apps within the Gemini 2.0 Flash Thinking Experimental model. Moreover, the pool of apps is being expanded to Google Photos and Notes. Gemini already has access to YouTube, Maps, Google Flights, Google Hotels, Keep, Drive, Docs, Calendar, and Gmail. Users can also enable the apps system for third-party services such as WhatsApp and Spotify by linking them with their Google account. Aside from pulling information and getting tasks done across different apps, it also lets users execute multi-step workflows.
For example, with a single voice command, users can ask Gemini to look up a recipe on YouTube, add the ingredients to their notes, and find a nearby grocery shop. In a few weeks, Google Photos will also be added to the list of apps that Gemini can access. "With this thinking model, Gemini can better tackle complex requests like prompts that involve multiple apps, because the new model can better reason over the overall request, break it down into distinct steps, and assess its own progress as it goes," explains Google.

Moreover, Google is also expanding the context window limit to 1 million tokens for the Gemini 2.0 Flash Thinking Experimental model. AI tools such as Gemini break down words into tokens, with an average English-language word translating to roughly 1.3 tokens. The larger the token context window, the bigger the allowed input. With the increased context window, Gemini 2.0 Flash Thinking Experimental can now process much bigger chunks of information and solve complex problems; the quick arithmetic sketch below shows roughly what that budget works out to in words.
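As a rough back-of-the-envelope illustration of that arithmetic (a hypothetical sketch using the article's ~1.3 tokens-per-word average; real tokenizers vary by model and text):

// Rough token arithmetic using the ~1.3 tokens-per-English-word heuristic.
const TOKENS_PER_WORD = 1.3;

// Estimate tokens consumed by a given word count.
const estimateTokens = (wordCount) => Math.round(wordCount * TOKENS_PER_WORD);

// Estimate how many words fit in a given context window.
const wordsThatFit = (contextTokens) => Math.floor(contextTokens / TOKENS_PER_WORD);

console.log(estimateTokens(80000));   // an 80,000-word novel is roughly 104,000 tokens
console.log(wordsThatFit(1000000));   // a 1M-token window fits roughly 769,230 words

In other words, by this heuristic, the 1 million token window corresponds to roughly three-quarters of a million words of input.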
[34]
Google is giving free access to two of Gemini's best AI features
Google's Gemini AI has steadily made its way into the best of its software suite, from native Android integrations to interoperability with Workspace apps such as Gmail and Docs. However, some of the most advanced Gemini features have remained locked behind a subscription paywall. That changes today. Google has announced that Gemini Deep Research will now be available for all users to try, alongside the ability to create custom Gem bots. You no longer need a Gemini Advanced (or Google One AI Premium) subscription to use the aforementioned tools.

The best of Gemini as an AI agent

Deep Research is an agentic tool that takes over the task of web research, saving users the hassle of visiting one web page after another looking for relevant information. With Deep Research, you can simply enter a natural language query as input, and also specify the source, if needed. Deep Research breaks the query down into numerous stages and then seeks final plan approval before it jumps into action. After completing the research work, which usually takes a few minutes, it presents a neatly formatted document, organized with headings, tables, bullet points, and other relevant stylistic elements. It's a fantastic tool for conducting research as a student, journalist, financial planner, academic, and more. I have extensively used this feature for digging into scientific papers, and it has been so helpful that I pay for a Gemini Advanced subscription solely to access Deep Research.

"Now anyone will be able to try it across the globe and in 45+ languages," writes Dave Citron, Senior Director of Product Management for the Gemini app. Aside from giving free access to all users, Google is also upgrading the underlying infrastructure to the more advanced Gemini 2.0 Flash Thinking Experimental AI model. Do keep in mind that you won't get unlimited access, since it's a very compute-intensive process. Google says free users can try Deep Research a "few times per month." The strategy is not too different from what Perplexity offers with its own Deep Research tool. OpenAI chief Sam Altman has also confirmed that free ChatGPT users will be able to launch Deep Research queries twice a month.

Creating custom versions of Gemini

Another freebie announced by Google today is Gems. These are essentially custom chatbots that can be trained to perform a specific task -- anything from drafting detailed email responses with a simple "yes" or "no" as input to acting as a coding assistant -- so users can create one that best suits their workflow. The best part is that you don't need any coding knowledge to create a personalized Gem for your daily use, as all the operational instructions can be given in natural language sentences. So far, the ability to create Gems has been limited to paying users. Now, Gems are rolling out widely to all Gemini users, without any subscription requirement. Gems are available for free in the Gemini mobile app, but to create them, you need to visit the Gemini desktop client. The behavior of Gems can also be customized later on. Just like the regular Gemini assistant, Gems can also process data based on files uploaded by users. I have created a handful of Gems, which take the drudgery out of boring tasks and save me a lot of time.
[35]
Google's AI chatbot will use your search history to get more personal
The tech giant announced on Thursday that it is launching an experimental version of Gemini with personalization, which can connect with other Google apps and services to "provide responses that are uniquely insightful and directly address your needs." Gemini can now use search history to inform its responses, Google said, adding that the chatbot will get access to other platforms such as YouTube and Photos in the coming months. Users have to give Gemini permission to reference history from Search, and can disconnect it anytime. "We'll only use your Search history when our advanced reasoning models determine that it's actually helpful," Google said, adding that it's continuing to get feedback from early testers.

The personalization feature can be found through Gemini's model drop-down menu. After prompting the chatbot, it "will analyze it and determine if your Search history can enhance the response," Google said. Gemini and Gemini Advanced subscribers can access the experimental personalization feature via the web starting on Thursday, and it is rolling out on mobile, too. Gemini with personalization will be available in more than 40 languages across most countries where Google operates.

The chatbot is powered by Google's Gemini 2.0 Flash Thinking Experimental model, which it launched in its Gemini app last month. The model is trained to "strengthen its reasoning capabilities" by breaking down prompts step-by-step and showing users its "thought process" to see how it came to its response. In December, Google introduced Gemini 2.0, which "will enable us to build new AI agents that bring us closer to our vision of a universal assistant," Google chief executive Sundar Pichai said.

Google also said on Thursday that it was giving the 2.0 Flash Thinking Experimental model a longer context window to process larger amounts of information, and new features such as uploading files. The company added that it is upgrading its Deep Research feature with the model. The feature, which Google is making available to regular Gemini users at no cost, can compile multi-page research reports in minutes after browsing hundreds of webpages. The Gemini app is also getting Google's Gems feature, which now allows all users to build their own personal AI expert on topics such as languages and math at no cost.
[36]
Gemini's new feature might make it your new favorite group project partner
Google has released a new feature for its Gemini assistant called Canvas -- a split-screen experience that lets you chat with Gemini on the left and see your changes appear in real time on the right. The idea is to make editing and iteration a smoother experience -- instead of scrolling up and down the chat to copy sections of output you're not happy with, you can just highlight the text in question on the right and tell Gemini what to change. The assistant will then edit the specified section and update the document, rather than generating a whole new version or spitting out additional paragraphs you need to splice together yourself. Asking an LLM like Gemini to make revisions to its responses can be a bit of a chore, so this will hopefully make the process less painful.

Canvas also works with programming projects, allowing you to view code on the right and chat with Gemini to explain, revise, and debug it on the left. It can also display your HTML or React code as a visual representation of your software, allowing you to preview what your email subscription form might look like, for example. When you request changes, the preview will update, allowing you to try out different ideas quickly and efficiently. To try out these new features, you'll need to be a Gemini or Gemini Advanced subscriber and click the Canvas button in the prompt bar.

Google is marketing these updates as features for "collaboration," but just to be clear -- it doesn't mean collaboration with other people. The features are designed for you to collaborate with Gemini and, according to Google, "if you want to collaborate with others on the content you just made, you can export it to Google Docs with a click."

The update also includes Audio Overview, a feature from NotebookLM that essentially transforms documents into podcasts. It's similar to any summary and analysis generation tool in its purpose, but it presents the information in an audio format, with two AI hosts holding a podcast-style discussion. The feature has been popular with NotebookLM users who want to multitask while consuming information. To use it, upload your documents to Gemini and click the suggestion chip that appears.
[37]
Gemini gets personal, with tailored help from your Google apps
With Gemini, we're creating a personal AI assistant -- one that doesn't just answer general questions, but understands you. Today, we're taking another step toward this goal with the launch of Gemini with personalization, a new experimental capability. Powered by our experimental Gemini 2.0 Flash Thinking model, personalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs. With your permission, Gemini can now tailor its responses based on your past searches, saving you time and delivering more precise answers.

In the coming months, Gemini will expand its ability to understand you by connecting with other Google apps and services, including Photos and YouTube. This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you.

Here's how it works: In Gemini Apps, you can select "Personalization (experimental)" from the model drop-down menu to connect to your Search history. Then, when you enter a prompt, Gemini will analyze it and determine if your Search history can enhance the response. You're in control of your info and can easily disconnect Gemini from your Search history.
[38]
New Gemini app features, available to try at no cost
Today, we're making big upgrades to the performance and availability of our most popular Gemini features, and adding a new feature that will make Gemini even more personal and helpful.

Today, we're starting to roll out an upgraded version of our 2.0 Flash Thinking Experimental model that supports added features like file upload. This model, which is trained to break down prompts into a series of steps to strengthen its reasoning capabilities and deliver better responses, now has better efficiency and speed. And now, Gemini Advanced users will have access to a 1M token context window with 2.0 Flash Thinking Experimental, enabling you to solve more complex problems than ever before by exploring and analyzing large amounts of information.

In December, we pioneered a new product category in Gemini with Deep Research. It saves you hours of time as your personal AI research assistant, searching and synthesizing information from across the web in just minutes, and helps you discover sources from across the web you may not have otherwise found. Today, we're upgrading Deep Research with Gemini 2.0 Flash Thinking Experimental. This enhances Gemini's capabilities across all research stages -- from planning and searching to reasoning, analyzing and reporting -- creating higher-quality, multi-page reports that are more detailed and insightful. Gemini now shows its thoughts while it browses the web, giving you a real-time look into how it's going about solving your research task. By pairing Deep Research with this new model, we expect the quality of reports to keep getting even better.
[39]
Google Has Dropped the Paywall for These Gemini Features
Google is announcing a slew of new Gemini features today, this time aimed squarely at its free users. Features that were previously only available with the $20 per month Advanced plan will now be accessible to the public.

Gems

Gems are Gemini's little AI helpers that you can create for any task. You can start with pre-made ones like Google's Career guide, but you can create a Gem for any purpose. You can create one for repeated tasks, or to help you research a topic with very specific prompts. Previously, the Gems feature was exclusive to Gemini Advanced users, but now Google is making them available to all users. You'll find Gems in the sidebar, where you can easily get started with premade Gems.

Deep Research

Google, too, has a Deep Research feature. But unlike with Perplexity, it was previously behind a paywall. Now, Google is making its Deep Research feature available for free to all users. Deep Research is an AI feature where the model takes some time to think through your question using a reasoning model, then goes out onto the open web, collating sources and figuring issues out in depth, and finally presents you with a detailed report instead of the simple bullet-point answers that regular AI chatbots provide. In addition to making Deep Research free, Google is also adding its Gemini 2.0 Flash Thinking Experimental model to Deep Research. The new thinking model will help with every step of research, including planning, reasoning, analyzing, and reporting. Deep Research will be available in more than 45 languages and can be accessed using the drop-down menu in the prompt box. In material shared with the press, Google wasn't clear how many Deep Research queries free users get per day, though the company did promise expanded access for Gemini Advanced users.

Updates to the Gemini 2.0 Flash model

The Gemini 2.0 Flash Thinking (Experimental) model is also getting an upgrade. You can now upload files for it to use while answering prompts, and Google says it has improved the model's performance and introduced advanced reasoning capabilities. Gemini Advanced users will also now have access to a 1 million token context window, enabling users to solve more complex problems.

Your Google Search history comes to Gemini

Google is adding a new experimental feature called Gemini with personalization, powered by the Gemini 2.0 Flash Thinking model. This allows Gemini to connect with your Google apps and services. The company is starting with Google Search, but will expand to Photos and YouTube in the coming months. This means that Gemini will have more context about you, based on your Google Searches, but only if you choose to enable the Personalization (experimental) model from the model picker drop-down menu. Google also says that the new feature will only use your search history when Gemini's advanced models deem it's needed.

More powerful connections with Google apps

With the 2.0 Flash Thinking model, Gemini is now better able to tackle complex requests that involve multiple Google services, including Calendar, Notes, Tasks, and now Photos. According to Google, you can use a single prompt like "check my Calendar to find that gelato place Ezra and I went to back in May, save its address to my notes, and text it to Lauren and suggest we go there" instead of jumping between multiple apps or asking Gemini three different questions.
[40]
Why Google Gemini Wants Your Search History (and Why I Won't Be Sharing Mine)
On Thursday, Google rolled out a number of previously paywalled Gemini features to free users. You can now use Gemini custom chatbots, which the company calls "Gems"; use Deep Research, which runs AI models that "think" through each step of a problem; and upload files to Gemini 2.0 Flash Thinking, whether you pay for Google's AI services or not. But that's not all: the company also introduced a new experimental feature for Gemini -- Gemini with personalization. This feature, which runs on Google's Gemini 2.0 Flash Thinking model, connects Gemini to your Google apps and services, with the goal of offering you a more personal AI assistant. The idea is, by connecting your Google Account's information to Gemini, it'll know more about you and will be able to deliver more informed results tailored to your personal tastes. It's certainly a step toward what big tech companies are advertising AI to be. But in order for it to work, you need to connect your search history to Gemini. That's a lot of trust to put in Google's AI service, and, I imagine, a tricky decision for anyone who is concerned about the amount of data we're feeding these AI tools.

What can Gemini's Personalization model do?

Google offers a few examples of how this new service might improve your experience with Gemini. You might ask the bot where you should go on vacation, and rather than pull from a series of sources about where other people like to go on vacation, the bot could, theoretically, use your past search queries to focus on a trip it thinks you would like. Maybe you've put together a bit of a vision board about heading to the Bahamas or Saint Lucia, and the bot would gather searches related to tropical vacations. Or maybe you'd ask the bot for suggestions for a new hobby and see results based on the types of things you searched for in the past.

I understand the vision Google's going for here: rather than have a bot that answers queries the same for everyone, why not have each user's bot provide answers tailored to their likes and dislikes? That said, it does make me wonder: if the user is already searching for things like vacation spots and new hobbies, wouldn't they be able to choose for themselves where they'd like to go, or what activity they'd like to take up? If I'm searching a lot about jogging, and I ask the bot what hobbies I should take up, I'm not going to be surprised when Gemini returns results for On sneakers and a local running club.

For Google's part, this isn't necessarily some sneaky tactic. In order for you to use the feature, you'll need to opt in to connecting your search history to Gemini. That's actually surprising to me, and mildly refreshing. At least Google isn't making opt-out the default here. Because the model is a "thinking" model, you'll see the entire train of thought as part of the results. As such, Google says you'll be able to see the personal information Gemini used to generate its answer, including saved info, past conversations, or your search history. In addition, Gemini won't look at your search history unless you're specifically using this experimental personalization feature. (You also need to have Google's Web & App Activity setting turned on.) All that to say, it's not like using this feature means Gemini will scan your search history every time you use it. If you use the standard Gemini 2.0 Flash model, it won't pull from this personal data in its answers -- only if you switch back to "Personalization."

Should you connect your search history to Gemini?
Here's what I'll say: I'm not connecting my search history to Gemini -- not yet, anyway. At this time, the feature is experimental, so it isn't the complete vision that Google has in store for it. (The company has plans to connect Photos and YouTube data in the future, for example.) But even if the feature were fully realized, I'm just not comfortable connecting my personal search history to Google's AI. Don't get me wrong: I know Google already has access to my search history (though disabling Web & App Activity should mitigate some of that data leaking). It's not really about that. Personally, I don't feel the need to train Google's AI on my search history, which is effectively what is happening here. It's a neat idea to give users more personalized results from AI bots, but by opting in to this feature, I'm providing Google free training for Gemini using my personal information. In fact, by requiring Web & App Activity to be enabled, Google is asking you to share this data with both Gemini and Google as a whole. Google might have the best of privacy intentions here for all we know, but even so, I'm living by another AI tenet with this decision: don't share private information with AI. If you wouldn't want a human reviewer at Google seeing what you're sharing with Gemini, you probably shouldn't share it in the first place. Traditionally, I've referenced this rule for things like proprietary company information or deeply personal information, but search history can also be deeply private. Do you really need Gemini (or a human reviewer) seeing everything you've searched for, just to make your Gemini results a bit more personal? Those results might be totally inaccurate, anyway. How to use Gemini's Personalization model If you think those tradeoffs are worth the potential benefits of Gemini's Personalization model, here's how to give it a try. Open up Gemini, then choose "Personalization (Experimental)" from the model drop-down. Google will present you with a pop-up, where you'll need to connect your search history to Gemini. If you're good with that, choose Connect now.
[41]
Gemini Continues Integration of More Google Apps
In addition to news that Gemini can now be connected to your Google Search history for more personalized responses, Google is moving to connect more apps and services to the AI. This ultimately means more tasks can be accomplished with a single query, which isn't a bad thing. Google announced that Gemini is getting connected to apps like Calendar, Notes, and Tasks inside of 2.0 Flash Thinking (Experimental), allowing you, the user, to pose more complex questions and requests to Gemini. For example, you can ask Gemini to, "Look up an easy cookie recipe on YouTube, add the ingredients to my shopping list and find me grocery stores that are still open nearby." While those apps are being connected this week, Google says that Photos support will arrive in the coming weeks. When it is available, you can ask questions based on your library of photos. For example, you can ask Gemini to recall information, such as when your driver's license expires. That could be helpful! Also announced, Google is opening up Gems to everyone. If you aren't familiar, Gems let you customize your own AI expert on any given topic. You can create a Gem for translating languages, planning your meals, or coaching you at math. On desktop, go to the Gems Manager, write instructions, give it a name, then chat with it whenever you want. You can also upload files to a Gem, ensuring it has all of the relevant information it might need when helping you. For those who utilize Deep Research, you'll be pleased to learn that it is being upgraded with Gemini 2.0 Flash Thinking Experimental. This means enhancements across all stages of the research process. In addition, "Gemini now shows its thoughts while it browses the web, giving you a real-time look into how it's going about solving your research task." For a full breakdown of what Google is adding to Gemini, follow the link below.
[42]
Google Gemini introduces collaborative canvas and podcast-like audio overviews
Google LLC is adding features to its artificial intelligence Gemini chatbot today that will allow it to collaborate with users in a new interactive canvas for documents and code. "Canvas simplifies the entire coding process, allowing you to focus on creating, editing and sharing your code and design in one place, without the hassle of switching between multiple applications," said Dave Citron, senior director of product management for Gemini apps at Google. Once Canvas is activated and a user prompts Gemini to begin writing an app or document, it will display changes in a sidebar in real time. For document editing, this means users can quickly and easily adjust their text directly in the document without needing to "talk" to the chatbot conversationally each time they want a change. That conversational style of editing can be tedious and slow, making iterating through a document plodding when every change requires a request like, "Please change the sentence in the second paragraph." With Canvas, a user can instead adjust the sentence directly. Sentences and paragraphs can also be highlighted, allowing users to trigger Gemini with a prompt to modify them. According to Google, this will let users collaborate more deeply with Gemini on nuanced editing decisions, covering both research and fine-tuning of their documents. Once a user is done editing a document in Canvas, they can easily export it into Google Docs, where they can continue collaborating on their text with their coworkers. For code, developers can ask Gemini to write code that will also appear in the sidebar, where they can edit alongside Gemini. For web apps, it can visualize changes in HTML, CSS and React JavaScript, allowing software engineers to collaborate with the chatbot. As the chatbot changes the web app, the user can interact with the app in the canvas, including clicking buttons, watching animations and playing around with the user interface. For example, it could be used to make a simple simulation of the solar system and include statistics for each of the planets. Web apps generated by the Gemini app can range from relatively simple to extremely complex, depending on the amount of time and fine-tuning the developer wants to spend talking to the AI. Advanced developers and beginners with little or no experience alike can create complicated web applications in only a few minutes of conversation with the chatbot. During a presentation led by Google, Citron said that this is just another step in the company's work toward making the Gemini app more "agentic." "In general, this Canvas ability is yet another part of making the Gemini app increasingly agentic," Citron explained. "And by agentic, I mean this idea that AI is working on your behalf to get things done and not just simple things like turning on the lights, but more and more incredible things that I didn't even think was possible with AI until a couple of years ago." Canvas is available today to all users of Gemini globally. Bringing audio overviews to Gemini: Audio Overview transforms documents, text and extensive notes or reports from Gemini's Deep Research tool into an engaging, podcast-like audio discussion featuring two AI-voiced speakers. In this format, the speakers engage in a witty back-and-forth, reminiscent of a talk show, as they summarize the topic, connect key points and share light-hearted banter.
This capability was introduced by Google in 2024 as part of NotebookLM, the company's AI-powered note-taking and research assistant. It works much the same way in Gemini, letting users listen to the audio while their hands and eyes are occupied with other tasks, such as driving or doing chores, so they can stay informed on the go. Google announced that the Audio Overview feature is now available globally to Gemini and Gemini Advanced subscribers, initially in English, with plans to roll out additional languages in the coming weeks.
[43]
Google's Gemini Gets Two of Its Coolest New Features: Canvas and...
The list of Gemini features expanded this week with the introduction of Canvas, a collaborative space where Gemini helps you with documents in real time, as well as Audio Overviews, which can turn documents or research into podcast-style conversations to listen to. For Canvas, Google has added this tool to Gemini as a collaborative feature, where you and Gemini work together to create or edit something. You could take notes during a class, for example, and then upload that document to Gemini while asking it to write a draft of a speech based on those notes. Once that speech is ready for you to view, you'll be able to highlight sections to adjust length, change tone, or have Gemini suggest further edits. Google also offered an example of someone learning to code and having Canvas create a simple tic-tac-toe game that you could build on or learn from (a rough sketch of what that might look like appears below). In this Canvas project, you could not only view the code, but also preview the game that Gemini made, with explanations along the way to help you continue learning. To get started, you'll open Gemini on the web and look for the "Canvas" button within the "Ask Gemini" box. Click on that to activate Canvas before you make your query. For the new Audio Overviews, Google is giving you the power to upload documents, slides, or Deep Research reports and have them turned into podcast-like audio clips. This idea was introduced through NotebookLM almost a year ago, with Google showcasing the true magic of AI in some instances by taking massive PDF files and almost instantly turning them into podcasts with two hosts discussing the subject. Now, this appears to be built into Gemini both on the web and mobile. You could upload those same school notes we talked about above and have them turned into an audio overview for later listening. You could also just ask Gemini to create an audio overview of a Deep Research subject you may be working on to get insights from Gemini in audio format. Canvas and Audio Overviews are rolling out globally today for Gemini and Gemini Advanced subscribers.
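To make that tic-tac-toe example concrete, here is a minimal sketch of the kind of self-contained starter script Canvas might draft from a one-line prompt. It is illustrative only (written here in Python, one of the languages Canvas can generate), not actual Gemini output:

```python
# Illustrative only: the kind of simple starter script Canvas might
# draft from a prompt like "make me a tic-tac-toe game" -- not
# actual Gemini output.

def winner(board):
    """Return 'X' or 'O' if a player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def show(board):
    """Print the 3x3 grid, squares numbered 1-9 left to right."""
    for row in range(3):
        print(" " + " | ".join(board[row * 3:row * 3 + 3]))
        if row < 2:
            print("---+---+---")

def play():
    board = [" "] * 9
    player = "X"
    while True:
        show(board)
        choice = input(f"Player {player}, pick a square (1-9): ")
        if (not choice.isdigit() or not 1 <= int(choice) <= 9
                or board[int(choice) - 1] != " "):
            print("Invalid move, try again.")
            continue
        board[int(choice) - 1] = player
        if winner(board):
            show(board)
            print(f"Player {player} wins!")
            return
        if " " not in board:
            show(board)
            print("It's a draw!")
            return
        player = "O" if player == "X" else "X"

if __name__ == "__main__":
    play()
```

In a Canvas session, you could then highlight any function, ask Gemini to explain or extend it, and preview the result as you go.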
[44]
Google Brings New Features to NotebookLM and Gemini
Google has been announcing major updates across all its AI products. The tech giant rolled out NotebookLM's new mind map feature on Wednesday, allowing users to see a visual summary of any source document. The topics and their related ideas are represented as a branching diagram. Simon Tokumine, director of product management at Google Labs, took to X to make this announcement. As per the official documentation, the mind maps feature is useful when trying to understand the big picture of the source material, explore unfamiliar information, connect the dots, and get a structure of the information. To generate a mind map, one needs to open an existing notebook (or create a new one). Once the source is analysed, a "Mind Map" button will appear, generating a mind map note with the visual summary. Furthermore, one can interact with the Mind Map by zooming in and out, scrolling, expanding or collapsing branches, and clicking on nodes to ask questions in NotebookLM chat. The Mind Map can also be downloaded and shared as an image file. The feature will roll out globally over the next seven days and is already available in some countries. AIM was able to access the feature, and it worked as expected. Besides this, the company also added a new feature to Gemini, Canvas, which provides an interactive space for refining documents and code. It also enabled support for Audio Overview in Gemini. A user needs to select 'Canvas' in the prompt bar and start writing or editing documents or code, with changes appearing in real time. The feature also supports previews for HTML/React code. To collaborate, one can export the output to Google Docs and share it with others.
[45]
Google's Gemini Wants Your Search History for One Major Reason
If AI is ever going to become the assistant of the future, it probably needs to be more personal, which means it'll need to know almost everything about you, with access to everything you do and see on the web. Google will eventually get to the point where it asks if you are cool with Gemini accessing more of your data through its services, but to start, it wants you to allow Gemini to access your Google Search history. You don't have to; it's just that this is the first step toward the company's new "personalization" initiative. Google announced today that its Gemini 2.0 Flash Thinking model (which is experimental) can get more personal if you give it access to your Search history. This should allow for responses that are "uniquely insightful and directly address your needs." What exactly does that mean? Well, Google really wants you to use Gemini to research topics or brainstorm ideas. By giving it access to your Search history, Google thinks it can bring you truly personal results. The company offered a few example prompts as ideas to try first, and you can probably imagine that Gemini could come up with quite a bit of personal info for summer vacations if it knows about all of the things you've been searching for. Same goes for hobbies and jobs, assuming you constantly look up stuff through Google Search. Following this experimental access to your Search history, the next step is to allow access to more Google apps, like Google Photos and YouTube. That expansion will happen in the coming months. Worried about privacy? You probably should be, since you are giving AI access to your world. And to be honest, Google didn't necessarily say that this is the most private thing it has ever introduced, because it isn't. You are giving Google's AI models access to your history to get more personal responses. This is just the future, assuming you are down for it. Google did attempt to lay out its "robust privacy safeguards" with this personalization thing, but it's basically Google explaining that you can always revoke permissions and that it will constantly remind you that you've given it this permission. If you'd like to use Gemini with personalization, you can connect it with your Search history at this link. Once you've connected, you'll head to the Gemini portal on the web and set the model drop-down at the top to the "Personalization (experimental)" option. This will go live for all Gemini and Gemini Advanced users on the web today. It will gradually roll out to the mobile apps.
[46]
Gemini will reportedly see your Google search history
Google's AI chatbot, Gemini, may soon undergo a significant upgrade that enables it to access users' search history for more personalized interactions. This feature, known as 'Gemini Personalization,' was uncovered in an APK teardown of the Google app beta (version 16.8.31) by Android Authority. The Gemini Personalization model is designed to tailor assistance based on previous user searches. Upon opting in, it would access Google Search history to provide responses better aligned with individual interests. This feature requires users to have the 'Web & App Activity' setting enabled. For users concerned about privacy, a confirmation pop-up indicates that data will not be shared across different Gemini models unless explicitly allowed. Access will be restricted to the Personalization model, and users will have the option to opt out. According to Android Authority, Gemini's beta version could summarize recent searches and highlight specific queries from the past two months. Although still in an experimental stage, these capabilities could benefit frequent Google Search users. Anticipation surrounds the rollout of the Personalization model, which is expected to be available exclusively to users of Gemini Advanced, requiring a Google One subscription priced at £18.99/$20 per month. An official announcement regarding the launch date has not yet been made. Google has not officially acknowledged Gemini Personalization, but Android Authority reported accessing its features through the latest app code. Gemini specifies that only the Personalization model will link to search history, ensuring that interactions in this model are not used to enhance other Gemini versions, do not leave digital traces beyond the chat history, and will be deleted within 60 days. To utilize this feature, users will need to grant permission for Gemini to view their search history and enable Web & App Activity in their Google settings. This lets users ask specific questions about prior searches, or have queries answered based on their search history. For example, users could ask Gemini about a restaurant searched for recently or request recommendations for tourist sites in NYC that reference previous searches. The chatbot may respond with tailored suggestions, such as recommending locally owned Italian restaurants based on the user's previous interests. The extent of search history accessible to Gemini is not yet fully determined. The APK teardown indicated that Gemini could reference searches dating back several months, potentially as far back as January. While the current version may be limited, expanded access to more extensive historical data could enhance its capabilities. Users can currently find over a decade of their Google Search history on their Google Activity page, suggesting that if broader access is allowed, it would significantly transform the user experience with Gemini.
[47]
You can now personalize Google Gemini
Google has introduced new personalization features for its Gemini chatbot, rolled out in experimental mode on Thursday. The updates include the ability for Gemini to reference users' Google Search histories to provide more relevant recommendations. This opt-in feature allows Gemini to utilize search history data within its conversational AI, enhancing the user experience. Additionally, users can now connect various Google apps to Gemini, including Calendar, Notes, Tasks, and Photos. The company announced that it will make Gems, a custom AI helper for tasks, available to all users. Gems enables users to customize Gemini to act as a personal AI expert on any topic. These developments align with Google executives' efforts to strengthen their position in the competitive AI industry. DeepMind co-founder Demis Hassabis indicated in December that he aims to "turbocharge" the Gemini app this year, highlighting scaling Gemini for consumers as a primary focus for the upcoming year. On Wednesday, Google also launched its open-source Gemma 3 models for developers, which can analyze text, images, and short videos, claiming it to be "the world's best single-accelerator model" operable on one GPU. Additionally, Google introduced two new AI models, Gemini Robotics and Gemini Robotics-ER (extended reasoning), both built on Gemini 2.0, which Google describes as its "most capable" AI to date. Leveraging its advantage in Search, Google aims to make Gemini distinctively relevant. With the new personalization features, Gemini can analyze user queries to determine if referencing Search history will provide enhanced responses. This feature operates under the Gemini 2.0 Flash Thinking Experimental model and will refer to search results only when deemed helpful by the AI. For example, if users inquire about restaurant recommendations, Gemini may consider recent food-related searches. The broader personalization rollout will eventually connect Gemini to additional apps like YouTube and Google Photos, enabling the chatbot to offer deeper insights based on user activities and preferences. Users have the option to disconnect their search history from Gemini at any time, and responses will outline how the information was obtained, including references to saved information, past conversations, or Search history. A "clear banner" will facilitate the disconnection process. Gemini and Gemini Advanced subscribers on the web can enable the experimental personalization feature through the model drop-down menu. This feature is currently being gradually rolled out on mobile and is accessible in over 40 languages across a majority of countries. Other updates include a free option for all Gemini users to create personal AI assistants known as Gems. Furthermore, the Gemini 2.0 Flash Thinking Experimental model is being integrated into the Deep Research feature, enhancing the chatbot's capabilities across all research stages. Integration with Calendar, Notes, Tasks, and Photos is being upgraded to the Gemini 2.0 Flash Thinking Experimental model, joining existing integration with YouTube, Search, and Google Maps.
[48]
Gemini Can Now Connect With Your Google Search History to Better Personalize Results
Google's Gemini AI is getting more personal, using a not-so-surprising source. The popular AI chatbot can now use your Google search history to provide responses more tailored to you. Gemini Uses Search Results for Better Answers: To tie your Google search history to Gemini, you can select Personalization (experimental) from the model drop-down menu. You remain in control of your search history and can disconnect it from Gemini at any time. By being able to see recent Google searches, Gemini can provide even better answers. For example, Google says when you ask Gemini for restaurant recommendations, it will reference recent food-related searches. Or, when you ask for travel advice, the AI will respond based on the destinations you've searched for. Gemini will only use your search history when it's determined that the data is actually helpful to a conversation. The feature is rolling out to all Gemini users on the web today and will "gradually" be rolling out to the Gemini mobile apps. In the coming months, the Personalization feature will expand to connect to other Google services, most notably Photos and YouTube. With stiff competition, it makes a lot of sense for Google to bring search results to Gemini. I'm interested to try it out and see how truly personalized the results are. Anyone Can Now Use Gemini's Deep Research: Gemini's Deep Research feature originally rolled out in December 2024, but only for subscribers to Gemini Advanced. Now Google has announced that the research tool is free for everyone to use and has been upgraded with Gemini 2.0 Flash Thinking Experimental. While Gemini is often used to answer simple questions, Deep Research is different. Think of it as a research assistant that can search and find information across the web on a subject, providing a detailed report with citations. Deep Research will even show its thoughts while browsing to give you a real-time look at how it's solving the research task. You can try it out by selecting Deep Research in the new prompt bar or the model drop-down.
[49]
Gemini Can Now Turn Your Documents Into Podcast-Style Discussions
Audio Overview was first released as a feature for NotebookLM. Gemini is getting two new artificial intelligence (AI) features, Google announced on Tuesday. The Mountain View-based tech giant is adding Canvas, an interactive space that lets human users and AI collaborate on projects involving documents and coding-related tasks. Another feature making its way to Gemini is Audio Overview, which was previously exclusive to NotebookLM and lets users generate an engaging podcast-like audio discussion based on documents, slides, and Deep Research reports. These features are currently being rolled out globally to both Gemini Advanced subscribers and those on the free tier. In a blog post, the tech giant announced the two new features that are being added to Gemini. This follows the Deep Research feature that can generate a detailed report on complex topics, and the iOS-exclusive lockscreen widgets. The new features -- Canvas and Audio Overview -- will be available on both Gemini on the web and the mobile apps. Canvas is a new interactive space on Gemini aimed at letting users collaborate with the AI on certain projects. Users can now see a new Canvas button next to Deep Research in the text box on Gemini's interface. Selecting the feature and adding a document or lines of code will open a sandbox where the AI creates a first draft based on the user's prompt; the user can then take over to make edits and further refine the output with the help of the chatbot. Currently, Canvas only works with documents and coding-related tasks. For documents, users will have to upload a file and then write a prompt while the Canvas button is selected. The user can say something like "Create a speech based on these classroom notes" and the AI will open a sandbox-style interface and write the draft. Then users can make manual edits or highlight portions of the text and ask Gemini to change the tone or regenerate content with specific feedback. Users can also ask the AI to write code based on prompts. Then, with Canvas, they can ask Gemini to generate and preview the code and other web app prototypes to see a visual representation. This only works with HTML and React code currently. After the preview, the user can also request changes to input fields or call-to-action buttons, and see the updated preview. Notably, the feature is similar to OpenAI's Canvas feature, although ChatGPT only offers it on the web. Google said that after witnessing the popularity of the Audio Overview feature in NotebookLM, it is now bringing it to Gemini. The feature works with documents, slides, and even reports created using Deep Research. Whenever a file or response fits the criteria, the Gemini platform will show a floating action button (FAB) for the feature. Once a user taps the button, Gemini will begin generating a podcast-style audio discussion featuring two AI hosts, a male and a female voice, who will discuss the topic, draw connections between key points, and engage in a dynamic back-and-forth to provide unique perspectives. Notably, it can take a few minutes to generate an Audio Overview. Gadgets 360 staff members spotted both features on the web interface of Gemini, but not on the apps. Since Google is rolling out the features globally, it may take a few days before all users gain access to them.
[50]
Google expands Gemini features, brings advanced AI tools to more users
Google has announced a set of new updates to its Gemini AI features, aiming to make advanced tools more accessible. The updates, detailed in a company blog post, are designed to improve how users interact with artificial intelligence (AI) in everyday tasks, including research, planning and personal productivity. Flash Thinking model gets upgrades: Gemini's 2.0 Flash Thinking Experimental model now supports file uploads and offers a longer context window of up to 1 million tokens for advanced users, allowing it to handle more complex and detailed queries efficiently. Deep Research expands globally: The Deep Research tool is now rolling out in over 45 languages. It helps users gather and analyse web data, and now generates more detailed reports with improved reasoning. Previously available to select users, it is now accessible to free-tier users a few times a month, while Advanced users get extended access. More personalised responses: Gemini will now offer personalised replies based on a user's Google Search history -- helping with tasks like restaurant suggestions or travel planning. Users can turn off this feature at any time in their settings. More Google app integrations: Gemini now works more closely with Google Calendar, Tasks, Notes and, soon, Google Photos. Users can perform multi-step actions in one prompt -- for example, finding a recipe, making a shopping list, and locating nearby grocery stores. Soon, it will also create travel itineraries from past trips or extract details from stored documents. Gems now free for all users: Google is also making Gems, its custom AI assistant feature, free for all users. Users can build their own assistants -- like a translator or fitness planner -- through a simple setup process.
[51]
Google Makes Gemini Deep Research and Gems Free for All Users
Custom Gemini Gems are also available to free users. And the new 'Personalization' feature connects your search history to Gemini. Google is absolutely back in the AI race. After releasing the phenomenal Gemini 2.0 Flash Experimental model with native image generation capability, Google has now made the Deep Research agent free for all users. In case you are unaware, Deep Research browses the web and examines all the information to generate a comprehensive report within minutes. It's just like ChatGPT's Deep Research agent. The best part is that Gemini's Deep Research agent is now powered by the new Gemini 2.0 Flash Thinking Experimental model. It's an improved reasoning model from Google that can plan, search, reason, analyze, and create reports with insightful information. Note that Google says it's free for "a few times a month" and Gemini Advanced subscribers get expanded access. Apart from that, the new Gemini 2.0 Flash Thinking Experimental model supports file uploads, and it's also available to free users. Gemini Advanced subscribers will have access to a larger 1 million-token context window, which is great for analyzing large coding repositories. Next, Gemini Gems, which lets you create custom Gemini chatbots, is now available to free users as well. Just like custom GPTs for ChatGPT, you can create a personalized AI chatbot for your needs. You can set a custom instruction and upload your local files too. After that, Google has brought a new feature called 'personalization' to Gemini. It's powered by the Gemini 2.0 Flash Thinking Experimental model. With personalization turned on, Gemini can connect to your Google apps and services to deliver personalized responses. Currently, Gemini can access your Google search history to infer your preferences. This feature is also available to free users. Finally, to make Gemini more personalized, Google is bringing deep integration with apps. You can now connect Calendar, Notes, Tasks, and Google Photos to Gemini, and use these apps with the Gemini 2.0 Flash Thinking Experimental model. Basically, you will be able to perform multiple actions with Gemini on your Android smartphone.
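The consumer app exposes file uploads and the long context window through its UI, but the same model family is also reachable programmatically. Here is a minimal sketch using Google's public google-generativeai Python SDK; the experimental model ID, the API key placeholder, and the file name are assumptions for illustration, since model names and quotas change frequently:

```python
# Minimal sketch with Google's public google-generativeai SDK
# (pip install google-generativeai). The model ID, API key, and
# file name below are illustrative assumptions, not confirmed values.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

# Assumed ID for the experimental "thinking" model line.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

# File upload: attach a document and let the model reason over it.
notes = genai.upload_file(path="class_notes.pdf")  # hypothetical file
response = model.generate_content(
    [notes, "Summarize these notes and draft a five-minute speech."]
)
print(response.text)
```

The 1-million-token window matters here: a whole code repository or a long report can be passed as context in a single request instead of being chunked.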
[52]
Gemini gets experimental personalization, connects search and more
Google on Thursday introduced "Gemini with personalization," an experimental feature designed to provide tailored responses by linking to users' Google apps and services. Built on the experimental Gemini 2.0 Flash Thinking model, this feature begins with Google Search integration. With user permission, it leverages past search history to deliver "uniquely insightful" answers that save time, said Dave Citron, Senior Director of Product Management for the Gemini app. In the coming months, Gemini will connect to Photos, YouTube, and other Google services to offer deeper, preference-based insights. Citron explained that users can enable "Personalization (experimental)" from the Gemini app's model drop-down menu to link their Search history. When a prompt is entered, Gemini assesses whether past searches can improve the response. Users have full control and can disconnect this feature at any time. Early testers have praised its help with brainstorming and personalized suggestions, and Google plans to refine it based on user feedback. Google also suggests several example prompts to try. The feature is available now for Gemini and Gemini Advanced subscribers on the web, with a gradual mobile rollout, in over 45 languages across most countries. Google emphasized that users remain in control of their information, with robust privacy safeguards. Users can already share specific interests -- like dietary restrictions or partner names -- for more natural conversations. Additionally, Advanced subscribers can reference past conversations for contextually richer responses. Citron also highlighted several other updates to the Gemini app. Gemini now links to Calendar, Notes, Tasks, and soon Photos. The 2.0 model can tackle multi-app prompts like finding a YouTube cookie recipe, adding ingredients to a shopping list, and locating nearby stores -- all in one request. In the coming weeks, Photos integration will allow users to create itineraries from their trip photos or retrieve details like driver's license expiration dates. Gems allow users to create custom AI experts -- for example, a translator, math coach, or meal planner -- at no cost. Create them in the "Gems manager" on desktop, upload files, and interact with them anytime. Premade options are also available. This feature is rolling out globally in the coming weeks across 45+ languages. Most features launch today for Gemini and Advanced users, with mobile access and expanded language support coming soon. Deep Research and Gems are free to try, while personalization, file uploads, and the 1M context window are subscriber benefits.
[53]
Gemini gets Canvas and Audio Overview features in the latest update
Google on Tuesday introduced enhanced features for Gemini aimed at promoting effective collaboration and creativity. The updates bring two new tools, Canvas and Audio Overview, designed to elevate user experiences with document creation, coding, and audio-based learning. Canvas, a new feature in Gemini, serves as an interactive space where users can create, refine, and collaborate on their work seamlessly. By selecting 'Canvas' in the prompt bar, users can edit and update documents or code with real-time changes visible instantly. The space helps generate polished drafts, improve tone, adjust length, and refine formatting with ease. Gemini's AI offers tailored suggestions to enhance sections, such as making a paragraph more concise or adjusting the style to be more professional or informal. For users working on speeches, essays, blog posts, or reports, Canvas becomes a valuable ally in maximizing creative output. Additionally, it provides a one-click export option to Google Docs, enabling smooth collaboration with others. Another significant update is Audio Overview, which transforms documents, slides, and research reports into podcast-style audio discussions. Dave Citron, Senior Director of Product Management for Gemini, highlighted that Audio Overview has generated significant enthusiasm in NotebookLM and is now being rolled out in Gemini. Audio Overview creates lively audio conversations between AI hosts, providing dynamic summaries, highlighting connections between topics, and offering fresh perspectives on uploaded files. Users can transform class notes, research papers, or lengthy email threads into engaging audio content to stay informed while multitasking. The feature is accessible via a suggestion chip above the prompt bar, making it easy to listen on the go.
[54]
Google unveils upgraded Gemini app features, free trials available
Investing.com -- Google (NASDAQ:GOOGL) has announced major upgrades to its Gemini app, introducing new features and enhancements. The tech giant is rolling out an advanced version of its 2.0 Flash Thinking Experimental model, which now includes additional features such as file upload. The model, designed to break down prompts into a sequence of steps to improve reasoning capabilities and provide enhanced responses, now offers increased efficiency and speed. Gemini Advanced users will now have access to a 1M token context window with 2.0 Flash Thinking Experimental. This feature allows users to tackle more complex problems by analyzing extensive amounts of data. Additionally, Google is expanding the availability of its Deep Research tool, which was launched in December. This tool acts as a personal AI research assistant, capable of searching and synthesizing information from across the web in minutes. The Deep Research tool is now integrated with Gemini 2.0 Flash Thinking Experimental, enhancing its capabilities at all stages of research. This integration allows Gemini to create high-quality, multi-page reports that are more detailed and insightful. Starting today, Google is making Deep Research available to all users in over 45 languages. Gemini users can use Deep Research a few times a month at no cost, while Gemini Advanced users receive expanded access to the tool. Google is also introducing a new experimental feature called personalization, powered by Gemini 2.0 Flash Thinking Experimental. This feature enables Gemini to connect with a user's Google apps and services, starting with Search, to deliver responses that are more tailored to the individual's needs. Users can disconnect their Search history from Gemini at any time. Google is also enhancing the connectivity of Gemini with additional apps including Calendar, Notes, Tasks, and Photos. These apps will be available on 2.0 Flash Thinking Experimental, enabling Gemini to handle complex requests involving multiple apps. In the coming weeks, Google Photos will be added to the list of apps that Gemini can interact with. Lastly, Google is rolling out Gems, a feature that allows users to customize Gemini, creating a personal AI expert on any topic. This feature is now available to all users at no cost in the Gemini app. Users can get started with one of the pre-made Gems or create their own custom Gems, such as a translator, meal planner, or math coach.
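Gems themselves are configured in the Gemini app's UI, but conceptually a Gem is the model plus a persistent custom instruction. As a loose, hedged analogue, the public google-generativeai Python SDK accepts a system instruction at model construction time; the model ID and instruction text below are illustrative assumptions, not how Gems are actually built:

```python
# Loose analogue of a "Gem": a fixed system instruction turns the
# general model into a topic-specific assistant. Illustrative only;
# real Gems are created in the Gemini app's Gems manager, not via API.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

meal_planner = genai.GenerativeModel(
    "gemini-2.0-flash",  # assumed generally available model ID
    system_instruction=(
        "You are a meal-planning assistant. Suggest simple weeknight "
        "dinners, list ingredients, and keep every recipe under 30 minutes."
    ),
)

print(meal_planner.generate_content("Plan three dinners for this week.").text)
```

The pre-made translator, meal planner, and math coach Gems the articles mention differ mainly in that standing instruction, plus any files attached to the Gem.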
Google introduces new personalization features for Gemini AI, allowing it to access users' search history for more tailored responses, alongside improvements in research capabilities and content creation tools.
Google has unveiled a significant update to its Gemini AI platform, introducing personalization features that allow the chatbot to leverage users' search history for more tailored responses. This move marks a notable shift in Google's AI strategy, aiming to create a more individualized and context-aware user experience [1][3].
The new feature, powered by the experimental Gemini 2.0 Flash Thinking model, enables Gemini to access and analyze users' search history to provide more relevant and personalized answers. Google emphasizes that this feature is opt-in and can be disabled at any time, with clear indicators when the AI is accessing personal data [1][3].
Dave Citron, Gemini product director, stated, "These updates are all designed to make Gemini feel less like a tool and more like a natural extension of you, anticipating your needs with truly personalized assistance" [3].
Alongside personalization, Google has introduced several other improvements to Gemini, including the Canvas workspace for documents and code, podcast-style Audio Overviews, and free access to Gems.
Google has also enhanced Gemini's research capabilities: Deep Research is now free for all users and has been upgraded with the Gemini 2.0 Flash Thinking Experimental model.
This update positions Google competitively in the AI chatbot market, addressing the growing demand for more personalized and capable AI assistants. However, it also raises potential privacy concerns, which Google aims to mitigate through opt-in mechanisms and transparent data usage [1][3].
As the AI landscape continues to evolve, Google's focus on personalization and enhanced capabilities suggests a future where AI assistants become increasingly integrated into users' daily lives and workflows [3][5].