The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Fri, 2 May, 8:02 AM UTC
5 Sources
[1]
Gemini's next trick makes ChatGPT's memory look basic
The post also hinted that Gemini would become more proactive. What exactly that would look like remains unclear, but the possibilities are wide open. In theory, Gemini might eventually surface helpful suggestions by understanding your schedule, recent searches, or inbox activity. Of course, this kind of deep integration naturally brings up privacy concerns. While Google already holds all this information from our lives across its services, letting an AI assistant access and synthesize it into one continuous profile may feel like a bigger leap. The company says it will require explicit permission before Gemini can access any of this data. However, users may still wonder how that information is used, stored, or potentially leveraged beyond their immediate interaction.
[2]
Gemini will soon tap into your Google account data to become a more 'personal' AI
Summary: Gemini will begin using personal data from your Google account to provide a more personalized experience, according to a post on X from Google's VP of AI. Gemini will access your Gmail, Calendar, YouTube history, Photos, and other Google services.

It may come as a shock to learn that Google's Gemini AI isn't already tapping into everyone's emails and calendars. Apparently, that's true, but not for long. The powerful AI is about to turn your Android device into a super-personalized assistant.

Google VP Josh Woodward, who oversees the Gemini app and leads Google Labs, laid out the plans for Gemini to become the most "personal, proactive, and powerful" AI assistant yet. The biggest change will come from Gemini using your own Google account data to help you more effectively.

Personalized AI thanks to your online life

Woodward took to X to outline what's internally referred to as 'pcontext,' or personalized context. It would allow Gemini to access most of your Google stack, including Gmail, Photos, Calendar, and YouTube history. There was no mention of whether Gemini will be able to access your Google searches, chats, Keep notes, or docs. The post also didn't clarify whether this will apply to Google Workspace accounts or only to personal accounts.

The idea is that Gemini could draw from this information to offer more personalized responses. But it goes beyond simply knowing your past chats. The goal is for Gemini to proactively anticipate your needs before you even ask. That could look like surfacing calendar reminders or pulling up relevant documents without a prompt. Woodward calls this 'anticipatory AI.'

Gemini 2.5 Pro and Flash already support advanced multimodal capabilities. These AI models can generate code, video, and images. Google wants to build on that by combining all this personalized context with the broader ecosystem.

Free for students, and other features

Google has been all in on Gemini lately. In just the last few weeks, the company launched Gemini 2.5 Flash, enabled image upload and editing, added LaTeX support, and announced free Gemini plans for all US students. Woodward said similar access will expand to more countries soon.

Naturally, there are some privacy concerns around this. Google says all access will require explicit user permission. Woodward mentioned we'll all learn more when Google I/O kicks off at the end of this month.
[3]
Gemini Live could soon gain one of Gemini's best tricks
Summary: Google appears to be working on expanding Gemini Apps support to Gemini Live. Apps in Gemini extend its functionality by connecting it to services like Google Drive, Spotify, and others, and bringing that support to Gemini Live could unlock new capabilities.

Gemini Live turns Google's Project Astra into a reality. It lets you share your screen or camera feed with Google's AI chatbot for direct analysis. The feature got a wide release in April, with Google soon after making it free for everyone. Now, Google seems to be planning to bring one of the best aspects of Gemini to Gemini Live: Apps (formerly Extensions).

Apps in Gemini let you connect the AI chatbot to other services and apps, like Google Drive, Gmail, Google Home, Spotify, and more. They extend Gemini's functionality, enabling you to directly search for or summarize files on Drive, control your smart home via natural language commands, set alarms, make calls, and so on.

A teardown of the latest Google app beta for Android (v16.17.38.sa.arm64 beta) from Android Authority points to Google working on expanding Gemini Apps support to Gemini Live, which should dramatically increase its usefulness. There appear to be only early references to the feature in the current beta, so it may be some time before app access officially rolls out in Gemini Live. It also seems Google will expand support for apps to Live in phases, much like it did with Gemini.

Currently, you can use Gemini Live with a camera feed or screen sharing to ask questions, find new things, and brainstorm ideas. With app support, you could point your phone's camera at a smart home light and ask Gemini Live to control it. Likewise, you could point to your smart speaker through Gemini Live and ask it to play music through Spotify.

Gemini Apps could make Gemini Live more powerful

App support in Gemini Live could unlock a range of new capabilities that aren't currently possible. Essentially, Gemini Live may eventually deliver the same functionality as regular Gemini. This is speculation, though, as the early references in the Google app do not reveal how Gemini Live will work with apps. While useful on phones, this upcoming functionality would be even more handy on Google's Android XR-powered smart glasses, which it is making in collaboration with Samsung.

Besides the above, Google is working on bringing Gemini Live's camera feature to Google Search's new AI mode. Unlike app support, that feature seems almost ready for rollout.
[4]
Google's prepping to make Gemini Live the assistant you've been waiting for
Summary: Gemini Live currently lacks access to Gemini Apps for integration with external services. Google is working on bringing Gemini Apps to Gemini Live, but it's unclear when the rollout will begin or which apps will be supported first.

Gemini Live is one of Gemini's more compelling features, letting you interact with Google's AI chatbot much the way you would a real person: a back-and-forth exchange without awkward pauses or the need to interject "hey Google" before each sentence. It's a cool feature, but it's also limited. Currently, Gemini Live is unable to access Gemini Apps, the integrations (formerly known as Extensions) that connect Gemini to other applications and services. According to an APK teardown from Android Authority, though, that could be changing soon.

Gemini Apps allow Gemini to interact with external services -- for example, to control smart home devices in your house, or to play a song on Spotify. At present, this capability is missing from the conversational Gemini Live interface. But according to Android Authority's teardown of the latest beta version of the Google app (16.17.38.sa.arm64 beta), Google is working to support Apps when using Gemini Live. Code in that release, reviewed by Android Authority, contains the string "EXTENSIONS_ON_LIVE_PHASE_ONE," which seems to point to Gemini Apps coming to the Live interface, though the reference still uses the older Extensions language. The "phase one" part makes it seem as though the rollout of Gemini Apps within Gemini Live will be a staged process, with some Apps becoming available right away and others to come later.

Yet another Google I/O announcement to look forward to?

Android Authority's APK teardown doesn't provide much context, only hinting that Google is working to bring Gemini Apps to Gemini Live. We don't know which apps are coming to the conversational interface, much less when to expect the upgraded experience to roll out. Google I/O kicks off in just a couple of weeks, though, so it's possible we'll hear more then.
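For readers curious how flag strings like this surface in an APK teardown, the usual approach is to decode the beta APK (with a tool such as apktool) and search the decoded resources and smali output for telling strings. The sketch below is purely illustrative: the decoded-directory name, file types, and search pattern are assumptions, and it does not represent Android Authority's actual tooling or Google's code.

```python
import re
from pathlib import Path

# Hypothetical output directory of a decoded APK (e.g. produced by apktool);
# the name is an assumption for illustration only.
DECODED_APK = Path("google-app-16.17.38-decoded")

# Pattern for rollout-flag strings of the kind reported in the teardown,
# e.g. "EXTENSIONS_ON_LIVE_PHASE_ONE".
FLAG_PATTERN = re.compile(r"EXTENSIONS_ON_LIVE\w*", re.IGNORECASE)


def find_flag_strings(root: Path) -> list[tuple[str, str]]:
    """Scan decoded resource and smali text files for matching flag strings."""
    hits = []
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix not in {".xml", ".smali", ".txt"}:
            continue
        text = path.read_text(errors="ignore")
        for match in FLAG_PATTERN.findall(text):
            hits.append((str(path.relative_to(root)), match))
    return hits


if __name__ == "__main__":
    for rel_path, flag in find_flag_strings(DECODED_APK):
        print(f"{rel_path}: {flag}")
```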
[5]
Gemini Live Might Soon Perform App-Based Tasks
Google recently rolled out the Gemini Live with Camera feature.

Google is reportedly working on an expansion of Gemini Live that would allow it to connect to apps. As per a new report, the Mountain View-based tech giant is working on a way to let Gemini Live perform certain app-based tasks without requiring intervention from the user. Gemini Live is a two-way, real-time voice conversation feature where the user can verbally ask the artificial intelligence (AI) chatbot queries, and it can respond in a human-like manner. Recently, Google rolled out a new feature that lets Gemini Live access the device's camera to answer queries about the user's surroundings.

According to an Android Authority report, the tech giant might be developing a new capability in Gemini Live to let it connect with different apps. The publication unearthed evidence of the feature while conducting an Android application package (APK) teardown. The strings of code relating to the capability were reportedly spotted in the Google app for Android beta (version 16.17.38.sa.arm64). One particular string in the latest version of the app reportedly contains the phrase "Extensions_on_Live_Phase_One."

While Google has replaced the "extensions" branding with "apps" in the Gemini app, the term might still be used internally, or the code could be older. Similarly, the word "Live" suggests the feature is meant for Gemini Live, since the tech giant is widely marketing all similar features under that branding. Finally, "Phase One" could hint at Google's plans to integrate the AI feature with apps in multiple steps. The company took a similar approach with the Gemini AI assistant, which was integrated with different first-party and third-party apps over a few months.

Interestingly, this is the only piece of information the publication was able to extract. This means there is no information on how Gemini Live might connect to apps, whether it will connect only to first-party apps or to third-party apps as well, or the kinds of tasks it might be able to perform. However, if the speculation is correct, it would corroborate an email Gemini Advanced users received from Google. In the email, the company teased that at Google I/O 2025, set to be held on May 20-21, it will announce AI features that will "open up new possibilities for interacting with and leveraging Gemini."
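As a rough illustration of what a "Phase One" staged rollout can look like in practice, here is a hypothetical sketch of gating app integrations behind rollout phases. The app names, phase values, and structure are invented for illustration and do not reflect Google's actual implementation.

```python
# Illustrative sketch of a staged feature-flag rollout; all names are invented.
from enum import IntEnum


class RolloutPhase(IntEnum):
    NONE = 0
    PHASE_ONE = 1   # e.g. a first batch of integrations
    PHASE_TWO = 2   # e.g. remaining first- and third-party apps


# Hypothetical mapping of app integrations to the phase that unlocks them.
APP_MIN_PHASE = {
    "google_home": RolloutPhase.PHASE_ONE,
    "google_drive": RolloutPhase.PHASE_ONE,
    "spotify": RolloutPhase.PHASE_TWO,
}


def enabled_apps(current_phase: RolloutPhase) -> list[str]:
    """Return the app integrations available at the given rollout phase."""
    return [app for app, phase in APP_MIN_PHASE.items() if phase <= current_phase]


print(enabled_apps(RolloutPhase.PHASE_ONE))  # ['google_home', 'google_drive']
```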
Google is preparing to significantly upgrade its Gemini AI assistant by integrating personal user data and expanding app connectivity, promising a more personalized and proactive AI experience.
Google is set to take a significant leap forward in AI technology with its plans to enhance Gemini, its advanced AI assistant. The upcoming changes promise to transform Gemini into a more personalized, proactive, and powerful tool, potentially surpassing the capabilities of competitors like ChatGPT [1][2].
At the heart of this transformation is 'pcontext,' or personalized context. Google VP Josh Woodward revealed that Gemini will soon be able to access users' personal data from various Google services, including Gmail, Photos, Calendar, and YouTube history [2]. This integration aims to provide more tailored and contextually relevant responses to user queries.
The enhanced Gemini is expected to become more anticipatory, potentially offering suggestions based on a user's schedule, recent searches, or inbox activity without explicit prompts [1]. This proactive approach could revolutionize how users interact with their digital assistants, making them more intuitive and helpful in daily life.
Google is also working on expanding the capabilities of Gemini Live, a feature that allows real-time voice conversations with the AI. Plans are underway to integrate Gemini Apps (formerly known as Extensions) into the Live interface, which would enable users to interact with external services and apps through voice commands [3][4].
With the increased access to personal data, Google has emphasized that explicit user permission will be required before Gemini can access this information [1][2]. However, this deep integration has naturally raised privacy concerns among users, who may wonder about the extent of data usage and storage.
Evidence suggests that Google is planning a phased approach to implementing these new features, similar to how it rolled out integrations for the Gemini assistant [5]. The company is expected to reveal more details at the upcoming Google I/O event, scheduled for May 20-21 [2][5].
These advancements could significantly alter the landscape of AI assistants, potentially setting a new standard for personalization and functionality. By leveraging the vast amount of data Google has access to, Gemini could offer a level of personal assistance previously unseen in consumer AI products [1][2].
The enhanced Gemini is likely to have far-reaching effects across Google's product ecosystem. It could eventually extend to Google's Android XR-powered smart glasses, being developed with Samsung, further expanding its utility in various contexts [3]. Additionally, Google has announced free Gemini plans for US students and says similar access will expand to more countries [2].
As Google continues to push the boundaries of AI technology, these developments in Gemini showcase the company's commitment to creating more intelligent, context-aware, and user-centric AI assistants. The tech world eagerly awaits the official unveiling of these features and their potential impact on how we interact with AI in our daily lives.
Google is developing new features for Gemini Live, including conversational interactions with uploaded files and enhanced video query capabilities, aiming to create a more intuitive and versatile AI assistant experience.
6 Sources
Google hints at upcoming features for Gemini Advanced, including video generation tools, AI agents, and improved language models, signaling a significant leap in AI capabilities and user experience.
13 Sources
Google is rolling out a new Utilities extension for Gemini, enhancing its functionality to match and potentially surpass Google Assistant on Android devices. This update brings Gemini closer to becoming a comprehensive virtual assistant.
10 Sources
Google introduces Gemini Live, a premium AI-powered chatbot to rival OpenAI's ChatGPT. The new service offers advanced features but faces scrutiny over its pricing and rollout strategy.
6 Sources
Google is testing a new interface for Gemini Live that makes interacting with the AI assistant feel more like a phone call, potentially changing how users perceive and interact with AI.
4 Sources