10 Sources
[1]
Google Announces AR Glasses, More Gemini in Chrome, 3D Conferencing and Tons More at Google I/O
As you'd expect, this year's Google I/O developers conference focused almost exclusively on AI -- where the company's Gemini AI platform stands, where it's going and how much it's going to cost you now for its new AI Ultra subscription plan (spoiler: $250 per month). Meanwhile, a new Flow app expands the company's video-generation toolset, and its Android XR glasses make their debut. Plus, all AI usage and performance numbers are up! (Given that a new 42.5-exaflop Ironwood Tensor Processing Unit is coming to Google Cloud later this year, they'll continue to rise.)

Google's Project Aura, a developer kit for Android XR that includes new AR glasses from Xreal, is the next step in the company's roadmap toward glasses-based, AI-driven extended reality. CNET's Scott Stein goes in-depth in an exclusive interview with Shahram Izadi, Google's VP and GM for Android XR, about that future. And headset-based Project Moohan, developed in conjunction with Samsung, is expected later this year; Google's working with Samsung to extend beyond headsets. For a play-by-play of the event, you can read the archive of our live blog.

Google already held a separate event for Android, where it launched Android 16, debuting its new Material 3 Expressive interface, security updates and news on Gemini integration and features.

A lot of the whizzy new AI features are only available via one of its subscription levels. AI Pro is just a rebranding of Google's $20-per-month Gemini Advanced plan (adding some new features), but Google AI Ultra is a pricier new option -- $250 per month, with half off the first three months for the moment -- that provides access to the latest, spiffiest and least usage-limited of all its tools and models, as well as a prototype for managing AI agents and the 30 terabytes of storage you're going to need to store it all. They're both available today.

Google also wants to make your automation sound smarter with Personalized Smart Replies, which makes your generated answers sound more like you and plows through information on your device to provide relevant context. It'll be in Gmail this summer for subscribers. Eventually, it'll be everywhere.

One of Google's boasts was Gemini's victory in Pokémon Blue. Staff writer Zach McAuliffe has questions, because you do NOT mess with his childhood memories. The event also brought lots of better models, better coding tools and other developer-friendly details you'd expect from a developer conference. Among the announcements: conversational Gemini Live now folds in features formerly part of Project Astra, Google's interactive, agentic, kitchen-sink voice AI app. (As Managing Editor Patrick Holland says, "Astra is a rehearsal of features that, when they're ready for the spotlight, get added to Gemini Live.") And for researchers, NotebookLM incorporates Gemini Live to improve its... everything.

People (that is, those over 18) who pony up for the subscriptions, plus people on the Chrome Beta, Dev and Canary tracks, will be able to try out the company's expanded Gemini integration with Chrome -- summary, research and agentic chat based on the contents of your screen, somewhat like Gemini Live does for phones (which, by the way, is available for free on Android and iOS as of today). But the Chrome version is more suited to the types of things you do at a computer rather than on a phone. (Microsoft already does this with Copilot in its own Edge browser.)
Eventually, Google plans for Gemini in Chrome to synthesize information across multiple tabs and support voice navigation.

The company is also expanding how you can interact with AI Overviews in Google Search as part of AI Mode, which adds more agentic shopping help. AI Mode appears as a new tab in Search, or on the search bar, and it's available now. It includes deeper searches and Personal Context -- which uses all the information Google knows about you, and that's a lot -- to make suggestions and customize replies. The company detailed its new AI Mode for shopping, which has an improved conversational shopping experience, a checkout that monitors for the best pricing, and an updated "try on" interface that lets you upload a photo of yourself rather than modeling it on a generic body. We have reservations about the feature -- it sounds like a privacy nightmare, for one thing, and I don't really want to see clothes on the "real" me for another. Google plans to launch it soon, though the updated "try on" feature is now available in the US via Search Labs.

Google Beam
Formerly known as Project Starline, Google Beam is the updated version of the company's 3D videoconferencing, now with AI. It uses a six-camera array to capture you from all angles; the AI stitches the feeds together, tracks your head movements, and streams the result at up to 60 frames per second. The platform uses a light field display that doesn't require wearing any special equipment, but that technology also tends to be sensitive to off-angle viewing. HP is an old hand in the large-scale scanning biz, including 3D scanning, so the partnership with Google isn't a big surprise.

Flow and other generative creative tools
Google Flow is a new tool that builds on Imagen 4 and Veo 3 to perform tasks like creating AI video clips and stitching them into longer sequences, or extending them, with a single prompt while keeping them consistent from scene to scene. It also provides editing tools like camera controls. It's available as part of Gemini AI Ultra. Imagen 4 image generation is more detailed, with improved tonality and better text and typography. And it's faster. Meanwhile, Veo 3, also available today, has a better understanding of physics and native audio generation -- sound effects, background sounds and dialogue. All this is available under the AI Pro plan. Google's SynthID gen-AI detection tool is also available today.
[2]
Your Google Gemini assistant is getting 8 useful features - here's the update log
At Google I/O 2025, the company teased upcoming features coming to its latest AI assistant. Here's the rundown. Google Gemini already offers a host of useful capabilities. From generating text and creating images to live conversations, deep research, and analyzing files, Google's AI has proven itself a strong contender in the AI field. At Google I/O 2025 on Tuesday, the company revealed a slew of new and improved features now available with its AI assistant.

1. New Google AI Pro and Ultra plans
First up are two new Google AI subscriptions that offer more features but naturally come with their own price tags. The first plan is known as Google AI Pro, which is actually the same AI Premium plan that's been around for a while, just with a new name. Still priced at $20 per month, AI Pro offers the same AI features available with the free version of Gemini but adds higher rate limits and special features. AI Pro also includes the Gemini app formerly known as Gemini Advanced, along with products like NotebookLM and the new Flow AI video editor. Those two features will reach AI Pro subscribers in the US first and then expand to other countries. College students in the US, the UK, Brazil, Indonesia, and Japan can get a free school year of Google AI Pro.

If you need more power and features and are willing to spend the big bucks, there's also a Google AI Ultra plan. This one offers the most powerful models, the highest rate limits, and early access to experimental AI features. As one example, the Ultra plan will grant you early access to Agent Mode, a new desktop-based agentic tool that will carry out tasks for you. Just describe your request or question; in response, the agent browses the web, conducts its own research, and integrates with your Google apps to tackle complex, multi-step tasks from start to finish. The Ultra plan costs a hefty $250 a month, though first-time subscribers get 50% off for the first three months.

2. Gemini Live
Next is Gemini Live, the handy chat mode in which you carry on a back-and-forth voice conversation with the AI. Previously, only Android users could share their screen or camera view and ask Gemini questions about it. Now, Google is expanding this feature so that Android and iOS users alike will be able to use the camera and screen sharing. To try this, open the Gemini app on your iPhone or Android device and tap the Gemini Live icon to the right of the prompt. The camera icon at the bottom lets you aim your phone at any object or scene and ask Gemini to describe it or answer questions about it. The second icon allows you to share any screen on your device for Gemini to analyze. There's more: In the coming weeks, Gemini Live will work with other Google apps and services, including Google Maps, Calendar, Tasks, and Keep. This means you'll be able to ask Gemini Live to perform such tasks as creating a calendar appointment or providing directions to your next destination.

3. Imagen 4 image generation
Previously, Google used its Imagen 3 model to generate images based on your descriptions. Now, the company has upgraded to Imagen 4, which it claims will offer faster performance, more lifelike details, and better text output. Anyone will now be able to try Imagen 4 via the Gemini mobile app.
4. Veo 3 video generation
Also getting an upgrade is Gemini's Veo video generator. Moving up from Veo version 2, Veo 3 offers native audio generation with support for dialogue between characters, background noises, and sound effects. As Google describes it, you can now add anything from bustling city sounds to the rustle of leaves to character dialogue just from your text descriptions. The main barrier here is that Veo 3 will be available only to Google AI Ultra subscribers in the US.

5. Canvas enhancements
Google's Canvas tool offers an interactive and collaborative workspace in which you can create code, design web pages, and devise other visual content, with the results appearing side by side in real time. Using the latest Gemini 2.5 model, Canvas promises to be more intuitive and powerful, according to Google. You can create interactive infographics, quizzes, and podcast-style Audio Overviews in any one of 45 languages. With Gemini 2.5 Pro's coding skills, Canvas is now more adept at converting your ideas into actual code, thereby helping you develop full applications.

6. Interactive quizzes
Trying to learn a complicated new subject? Gemini may be able to help. You can now ask the AI to create a quiz on your topic of interest. In response, Gemini challenges you with a series of questions designed to expand your knowledge. As you answer each question, the AI will tell you how you're doing and focus on any areas that need special attention. This feature is now rolling out to all Gemini users on desktop and mobile devices.

7. Gemini-in-Chrome
As of Wednesday, Gemini will begin popping up in Chrome on the desktop in both Windows and MacOS. Here, you'll be able to ask Gemini to analyze or answer questions about your current web page. Down the road, the AI will also work across multiple tabs and even launch different websites for you. Sounds helpful, but access will be limited: Gemini-in-Chrome will be available only to Google AI Pro and Google AI Ultra subscribers in the US who use English as their language in the browser.

8. Deep Research
Finally, Gemini's Deep Research mode is an agentic tool that can conduct online research for you and present the results in a detailed report, all on its own. Previously, Deep Research was only able to consult websites for the information you needed. Now, it can also check out your own PDFs and images. This means you could tell Gemini to include trends and topics that have already been captured in your own personal or work files. In one example cited by Google, a market researcher could upload internal sales figures stored in a PDF to cross-reference with public market trends. In another, an academic researcher could tell Gemini to consult downloaded journal articles to add to a review of online literature. As one more item, Google said it plans to integrate Deep Research with Google Drive and Gmail to expand the number of sources available.

Whew, that's a lot to unpack. But with AI increasingly impacting both individuals and organizations, Google is showing that it's trying to stay competitive. And even with the pricey new Ultra subscription, there's enough here for free Gemini users and AI Pro subscribers to try, to see if and how they can take advantage of the latest developments.
[3]
Everything you need to know from Google I/O 2025
Google imagines a world powered by artificial intelligence. Credit: Google

From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long. During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example that Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago. But, we know what you're really here for: product updates and new product announcements. Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming out next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year. We're going to break down what product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending the time it takes to watch a major motion picture to learn about them.

Before we dive in, though, here's the most shocking news out of Google I/O: the subscription pricing for the Google AI Ultra plan. While Google provides a base subscription at $19.99 per month, the Ultra plan comes in at a whopping $249.99 per month for its entire suite of products with the highest rate limits available.

Google tucked away what will easily be its most visible feature way too far back into the event, but we'll surface it to the top. At Google I/O, Google announced that the new AI Mode feature for Google Search is launching today to everyone in the United States. Basically, it will allow users to use Google's search feature but with longer, more complex queries. Using a "query fan-out technique," AI Mode breaks a search into multiple parts, processes each part of the query, then pulls all the information together to present to the user (a rough sketch of the idea appears below). Google says AI Mode "checks its work" too, but it's unclear at this time exactly what that means. AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail. In addition, other new features will soon come to AI Mode, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can take the search results and present them in a visual graph when applicable. According to Google, its AI Overviews in Search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base out of all of Google's announcements today.

Out of all the announcements at the event, the AI shopping features seemed to spark the biggest reaction from Google I/O live attendees. Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can just describe the type of product they are looking for -- say, a specific type of couch -- and Google will present options that match that description.
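To make the "query fan-out" idea above concrete, here is a minimal orchestration sketch. It is illustrative only: Google hasn't published AI Mode's internals, so decompose_query, web_search, and synthesize are hypothetical stand-ins, stubbed just enough that the fan-out/fan-in structure runs end to end.

from concurrent.futures import ThreadPoolExecutor

def decompose_query(query: str) -> list[str]:
    # Hypothetical: a real system would ask an LLM to split a complex
    # query into independent sub-queries.
    return [f"{query} (aspect {i})" for i in range(1, 4)]

def web_search(subquery: str) -> str:
    # Hypothetical: a real system would return ranked documents here.
    return f"results for: {subquery}"

def synthesize(query: str, results: list[str]) -> str:
    # Hypothetical: a real system would have an LLM merge the per-subquery
    # results, then "check its work" against the retrieved sources.
    return f"answer to '{query}' built from {len(results)} result sets"

def ai_mode(query: str) -> str:
    subqueries = decompose_query(query)       # fan out into sub-queries
    with ThreadPoolExecutor() as pool:        # run the searches in parallel
        results = list(pool.map(web_search, subqueries))
    return synthesize(query, results)         # fan back in to one answer

print(ai_mode("plan a budget weekend trip to Austin with live music"))

The point of the pattern is that each sub-query is cheap and parallel, so a complex question costs roughly one search round-trip plus a synthesis pass.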
Google also had a significant presentation that showed a presenter uploading a photo of herself so that AI could create a visual of what she'd look like in a dress. This virtual try-on feature will be available in Google Labs, and it's the IRL version of Cher's Clueless closet. The presenter was then able to use an AI shopping agent to keep tabs on the item's availability and track its price. When the price dropped, the user received a notification of the pricing change. Google said users will be able to try on different looks via AI in Google Labs starting today.

Google's long-awaited post-Google Glass AR/VR plans were finally presented at Google I/O. The company also unveiled a number of wearable products utilizing its AR/VR operating system, Android XR. One important part of the Android XR announcement is that Google seems to understand the different use cases for an immersive headset and an on-the-go pair of smartglasses, and has built Android XR to accommodate both. While Samsung has previously teased its Project Moohan XR headset, Google I/O marked the first time that Google revealed the product, which is being built in partnership with the mobile giant and chipmaker Qualcomm. Google shared that the Project Moohan headset should be available later this year. In addition to the XR headset, Google announced Glasses with Android XR, smartglasses that incorporate a camera, speakers, and an in-lens display, and that connect with a user's smartphone. Unlike Google Glass, these smartglasses will incorporate more fashionable looks thanks to partnerships with Gentle Monster and Warby Parker. Google shared that developers will be able to start building for Glasses next year, so a release date for the smartglasses will likely follow after that.

Easily the star of Google I/O 2025 was the company's AI model, Gemini. Google announced an updated Gemini 2.5 Pro, which it says is its most powerful model yet. The company showed Gemini 2.5 Pro being used to turn sketches into full applications in a demo. Along with that, Google introduced Gemini 2.5 Flash, a more affordable version of the powerful Pro model. Gemini 2.5 Flash will be released in early June, with 2.5 Pro coming out soon after. Google also revealed Gemini 2.5 Pro Deep Think for complex math and coding, which will only be available to "trusted testers" at first.

Speaking of coding, Google shared its asynchronous coding agent Jules, which is currently in public beta. Developers will be able to use Jules to tackle codebase tasks and modify files. Developers will also have access to a new Native Audio Output text-to-speech model, which can replicate the same voice in different languages.

The Gemini app will soon see a new Agent Mode, bringing users an AI agent that can research and complete tasks based on a user's prompts. Gemini will also be deeply integrated into Google products like Workspace with Personalized Smart Replies. Gemini will use personal context from documents, emails, and more across a user's Google apps to match their tone, voice, and style when generating automatic replies. Workspace users will find the feature available in Gmail this summer. Other features announced for Gemini include Deep Research, which lets users upload their own files to guide the AI agent when asking questions, and Gemini in Chrome, an AI assistant that answers queries using the context of the web page a user is on. The latter feature is rolling out this week for Gemini subscribers in the U.S.
Google intends to bring Gemini to all of its devices, including smartwatches, smart cars, and smart TVs.

Gemini's AI assistant capabilities and language model updates were only a small piece of Google's broader AI puzzle. The company had a slew of generative AI announcements to make too. Google announced Imagen 4, its latest image generation model. According to Google, Imagen 4 provides richer details and better visuals. In addition, Imagen 4 is apparently much better at generating text and typography in its graphics. This is an area in which AI models are notoriously bad, so Imagen 4 appears to be a big step forward. A new video generation model, Veo 3, was also unveiled with a video generation tool called Flow. Google claims Veo 3 has a stronger understanding of physics when generating scenes and can also create accompanying sound effects, background noise, and dialogue. Both Veo 3 and Flow are available today alongside a new generative music model called Lyria 2. Google I/O also saw the debut of Gemini Canvas, which Google describes as a co-creation platform.

Another big announcement out of Google I/O: Project Starline is no more. Google's immersive communication project will now be known as Google Beam, an AI-first communication platform. As part of Google Beam, Google announced Google Meet translations, which provide real-time speech translation during meetings on the platform. AI will be able to match a speaker's voice and tone, so it sounds like the translation is coming directly from them. Google Meet translations are available in English and Spanish starting today, with more languages on the way in the coming weeks. Google also had another work-in-progress project to tease under Google Beam: a 3D conferencing platform that uses multiple cameras to capture a user from different angles in order to render the individual on a 3D light-field display.

While Project Starline may have undergone a name change, it appears Project Astra is still kicking at Google, at least for now. Project Astra is Google's real-world universal AI assistant, and Google had plenty to announce as part of it. Gemini Live is a new AI assistant feature that can interact with a user's surroundings via their mobile device's camera and audio input. Users can ask Gemini Live questions about what they're capturing on camera, and the AI assistant will be able to answer queries based on those visuals. According to Google, Gemini Live is rolling out today to Gemini users. It appears Google has plans to implement Project Astra's live AI capabilities into Google Search's AI Mode as a Google Lens visual search enhancement. Google also highlighted some of its hopes for Gemini Live, such as serving as an accessibility tool for people with disabilities.

Another of Google's AI projects is Project Mariner, an AI agent that can interact with the web to complete tasks for the user. While Project Mariner was previously announced late last year, Google had some updates, such as a multi-tasking feature that allows an AI agent to work on up to 10 different tasks simultaneously. Another new feature is Teach and Repeat, which gives the AI agent the ability to learn from previously completed tasks and complete similar ones without needing the same detailed direction in the future. Google announced plans to bring these agentic AI capabilities to Chrome, Google Search via AI Mode, and the Gemini app.
[4]
"Google AI/O?" The Internet reacts to Google I/O 2025
From $250 AI subscriptions to futuristic glasses and search that talks back, here's what people are saying about Tuesday's Google I/O. Google I/O 2025 just wrapped, and there's a clear takeaway: AI isn't just a tool anymore; it's the platform, and it dominated the event. The company flooded the keynote stage with Gemini news, hardware teases, and controversial subscription tiers. Some fans love ambition. Others? Not so much.

Google's Gemini 2.5 Pro is now the brain behind your Android, Workspace apps, and a range of creative tools. It sits alongside Gemini Flash (a faster, cheaper AI model), brings real-time smarts to your camera and voice assistant, and even appears in a redesigned AI-powered search. DeepMind CEO Demis Hassabis said the goal is an AI assistant that's "personal, proactive, and powerful." Tech investor Gene Munster called the event "a big deal," arguing Google has no choice but to prove it's "willing to take bold, outside-the-box steps to stay relevant". But alongside the hype are legitimate worries about the pace at which Gemini is seemingly being shoehorned into Google's products. One analyst recently raised concerns about how Google's AI Overviews are "often so confidently wrong" that they've lost all trust in them. Others have questioned whether users fully understand how much of their data is being fed back into training loops, without Google always making that exchange transparent.

You read that right. $249.99. One of the most notable announcements of the I/O event was the introduction of a new "AI Ultra" subscription plan. Those who sign up will receive the "highest level of access" to Google's fastest Gemini model, ad-free YouTube, and tools like Veo 3, the new Flow video editing app, and a new AI capability called Gemini 2.5 Pro Deep Think, which is yet to launch. Ultra is "for people that want to be on the absolute cutting edge of AI from Google," Josh Woodward, VP of Google Labs and Gemini, said during a press briefing. The reaction has been mixed at best. One redditor points out that for enterprise users, the price is "not bad," adding that "[I] can see companies buying subscriptions for their workers." Meanwhile, another user said, "Still doesn't get you API access, right? So I don't understand the point of these very expensive subscriptions..."

AI Overviews were just the beginning of Google's bid to create the next-gen search engine. We're all pretty used to them by now, and whether you love or loathe them, it's clear that they're here to stay. Google says that these AI-generated search results have become "one of the most successful launches in search in the past decade." That's debatable. The company caught heat last year after these Gemini-powered AI search features offered some incorrect and downright bizarre answers, such as suggesting that people eat "one small rock a day." Talking to reporters before the 2025 I/O event, VP of search Liz Reid acknowledged last year's issues and described them as "edge cases and quite rare." Now, Google is doubling down with the launch of an "AI Mode" chatbot. "It's a total reimagining of search," said Google chief Sundar Pichai in a press briefing ahead of I/O. The company says the new feature will make interacting with its search engine more like having a conversation with an expert capable of answering a wide range of questions. Google has already launched AI Mode in the US, two and a half months after the company began testing it.
One user, however, took to X.com to express his frustrations after attempting to use the new feature for the first time.

In terms of hardware, Google showed off Android XR and teased next-gen smart glasses powered by Gemini and Project Astra, a real-time multimodal agent that can "see, hear, and talk" in context. It felt like a moonshot moment, but some readers may be skeptical with Google Glass still fresh in collective memory. While we're going to have to wait for more information on Google's own Android XR headset, the company did show off its capabilities with a live translation demo using a smart glasses prototype. That's a genuinely useful application that would pair nicely with many of the accessibility-related AI applications we're seeing more of.

While Google I/O 2025 delivered no shortage of cutting-edge announcements, a growing portion of the audience seems weary of the relentless focus on AI. X.com feeds and comment sections filled up with users expressing a sense of déjà vu. Meanwhile, users on the r/Google and r/Android subreddits said, "Take a drink every time they say 'AI'" and "It's gonna be like this every year from now on isn't it? Tuning in for Android only to get a boatload of AI talk." One user summed the general sentiment up perfectly: "Serious question, are they going to talk about something else than AI?" While many are impressed by what Google's AI tools can do, some are simply exhausted by the scale and speed of the shift. The feeling isn't rejection, it's saturation.
[5]
Google IO 2025 summary: 5 big announcements you'll want to know
Google IO 2025 delivered a huge helping of AI during the almost two-hour opening keynote. Google's CEO, Sundar Pichai, and colleagues got through an awful lot on stage, and while some of the talk was aimed primarily at developers, there were plenty of big announcements for us - the people on the street - to explore.

1. Google Beam: AI-enabled 3D video calls
If you're someone who finds themselves spending a lot of time on video calls for work, Google Beam will be of interest. Beam is an "AI-first video communications platform", giving you glasses-free 3D visuals. With a Beam device, AI will take your video feed and convert you into a realistic 3D video model. The result is an effect that you are sitting across the table from the person on the other end of the call, with Google claiming "near perfect head tracking, down to the millimeter, and at 60 frames per second." You'll need dedicated hardware to take advantage of its ability to convert 2D video into 3D models (like HP's six-camera-equipped screen), so Beam isn't something you'll be using at home - at least for now. All the demo videos we've seen only show 1:1 conversations, with no mention of whether it can handle multiple people around a desk. The first Google Beam devices will be available later this year, so prepare yourself for a 3D meeting makeover. Google's also exploring real-time speech translation for Beam, allowing users to have a flowing, natural conversation even if they speak different languages.

2. Google AI Ultra: the VIP AI subscription
Google has re-worked its AI subscription plans, with two paid tiers now available to consumers.

Google AI Pro
Google AI Pro is the new name for the AI Premium plan, and costs $19.99 per month. It includes:
Gemini app - Gemini 2.5 Pro, Deep Research, Veo 2
Flow - access to the AI filmmaking tool with Veo 2
Whisk - higher limits for image-to-video creation
NotebookLM
Gemini in Gmail, Docs, Vids & more
Gemini in Chrome (early access)
2TB of storage for Photos, Drive & Gmail

Google AI Ultra
If you really love Google's AI suite though and want access to the latest and greatest models, you'll need the Google AI Ultra plan. It's far from cheap though, at a staggering $249.99 per month. You get everything included in AI Pro, along with:
Gemini app - highest limits, exclusive access to 2.5 Pro Deep Think and Veo 3
Flow - access to the AI filmmaking tool with Veo 3
Whisk - highest limits for image-to-video creation
NotebookLM - highest limits and best model capabilities
Gemini in Gmail, Docs, Vids & more - highest limits
Project Mariner (early access) - AI agent research prototype
YouTube Premium individual plan
30TB of storage for Photos, Drive & Gmail

Google has kept the free tier as well, for more basic access to Gemini without a monthly bill.

3. Google Search: an AI makeover and a fun shopping feature
Google Search is getting a host of new AI features, with the overarching addition being 'AI Mode', which allows you to ask more complex queries as well as follow-up questions. Google claims AI Mode can handle any question, and it's keen for you to search whatever's on your mind. AI Mode goes much deeper than traditional Search when it looks for a response, and it apparently checks its responses too, so hopefully no more AI hallucinations.
The idea is for Search to offer you a more complete experience, without the need to visit multiple websites. It wants to help you buy tickets for events and make restaurant reservations (via the Project Mariner AI agent), help with school projects (with Project Astra integration), and even show you what you'd look like in that jacket you've been eyeing up. Yes, really. 'Try On' is a feature arriving in the coming months, allowing you to upload a full-length picture of yourself when you tap the 'Try On' button that appears over images of clothing in Google Shopping results. Complex analysis and data visualisation are coming to AI Mode later this summer for sports and finance questions, with the ability to display findings in AI-generated graphs, charts and more - not just text. Plus, Personal Context Mode will arrive in AI Mode this summer - where Search will also be able to scan your inbox to provide more personalized responses.

4. Google Flow: AI-powered filmmaking
Google announced new versions of its AI image generator (Imagen) and AI video generator (Veo) during the IO 2025 keynote, and both of these systems are used in its new AI filmmaking tool, Google Flow. Imagen 4 improves on the company's AI text-to-image generation, with better picture quality and faster processing. The big advancement here, though, is its ability to properly handle characters and text in images. Imagen 4 can now properly format text and place it sensibly in your images. Veo 3 takes Google's AI video generator out of the silent age and into the audio era. It's now able to add background sound, sound effects and dialogue to the videos generated from your prompts. You can use these AI models standalone, but Google Flow brings these tools (along with Gemini) together into a complete package for filmmakers to create cinematic clips. You can even mix your own video clips and imagery with the AI-generated content to fine-tune your creation. There are tools to shorten sections of the video, extend other sections by adding additional prompts, and control the camera direction and angle for the best perspective in each scene. Google Flow is currently available to Google AI Pro and Google AI Ultra plan subscribers in the US.

5. Android XR: smart glasses are closer to reality
Perhaps the most exciting part of the entire IO 2025 keynote was towards the end, where Google focused on its Android XR platform for VR headsets and AR glasses. We've already heard about its collaboration with Samsung on the Project Moohan VR headset, and Google confirmed on stage that its Apple Vision Pro competitor will be available to buy later this year. What was more impressive was the live demo of Google's Gemini-powered AI smart glasses. We saw how Gemini was able to show messages, accurately identify the subject matter in a series of photographs on a wall, remember where the wearer bought their coffee that morning and provide walking directions back to the café, while creating a calendar entry and inviting a friend to meet there later in the day. Google announced Gentle Monster and Warby Parker will be the first partners to launch glasses with Android XR - with Samsung following on after. This was a convincing smart glasses display from Google, and it feels like the technology is now within touching distance. We hope Google will share more later this year, as we're still waiting on a release date for the first set of specs.
[6]
All the Biggest News and Features Announced During Google I/O 2025
It should have been obvious that Google I/O 2025 would be jam-packed, considering the company felt the need to hold a separate event to cover all of its Android news. But color me shocked that Google pulled off a nearly two-hour-long presentation, full of announcements and reveals, mostly about AI. Not all AI announcements are equal, of course. Some of the news was geared towards enterprise users, and some towards developers. But many of the features discussed are on their way to consumers' devices too, some as soon as today. These are the updates I'm going to focus on here -- you can expect to try out these features today, in the coming weeks, or at some point in the near future.

Gemini Live is coming to the iPhone
Earlier this year, Google rolled out Gemini Live for all Android users via the Gemini app. The feature lets you share your camera feed or screen with Gemini, so it can help answer questions about what you're seeing. As of today, Google is bringing the feature to iPhones via the Gemini app as well. As long as you have the app, you can share your camera and screen with the AI, no matter what platform you're on.

AI Mode is the future of Google Search
Google has been testing AI Mode in Search since March. The feature essentially turns Google Search into more of a Gemini experience, allowing you to stack multiple questions into one complex request. According to Google, its AI can handle breaking down your query and searching the web for the most relevant sources. The result, in theory, is a complete report answering all aspects of your search, including links to sources and images. AI Mode is rolling out for all users -- not just testers -- over the coming weeks. But it's not just the AI Mode experience that Google has been testing. The company also announced new AI Mode features at I/O.

Cram multiple searches into one
First, there's Deep Search, which multiplies the number of searches AI Mode typically would make for your query and generates an "expert-level fully-cited report" for you. I would still fact-check it thoroughly, seeing as AI has a habit of hallucinating. AI Mode is also getting Gemini Live access, so you can share your screen or camera in Search.

Use "Agent Mode" as a real-world personal assistant
Project Mariner is also coming to AI Mode. Google says you'll have access to "agentic capabilities," which basically means you can rely on the AI to complete tasks for you. For example, you'll be able to ask AI Mode to find you "affordable tickets for this Saturday's Reds game in the lower level," and not only will the bot do the searching for you, it'll fill out the necessary forms. Google says that functionality will apply to event tickets, restaurant reservations, and local appointments. You can see that in action with Agent Mode, which will theoretically be able to execute complex tasks on your behalf. We don't know a lot about how that will work yet, but we do have a clear example from the Google I/O stage. During the presentation, Alphabet CEO Sundar Pichai tasked Gemini's Agent Mode with finding an apartment with in-unit laundry, keeping to a certain budget. Gemini then got to work, opening the browser, pulling up Zillow, searching for apartments, and booking a tour. AI Mode will pull from your previous search history to deliver more relevant results.
That includes results that apply to your whereabouts -- say, local recommendations for an upcoming trip -- as well as preferences (if you tend to book outdoor dining spots, AI Mode may recommend outdoor dining when you ask to find dinner reservations).

One of the features Google focused on most was Personalized Smart Replies in Gmail. While Gmail has an AI-powered smart reply feature already, this one goes a step further and bases its responses on all of your Google data. The goal is to generate a reply that sounds like you wrote it, and includes all the questions or comments you might reasonably have for the email in question. In practice, I'm not sure why I'd want to let AI do all of my communicating for me, but the feature will be available later this year, and for paid subscribers first. If you use Google Meet with a paid plan, expect to see live speech translation start to roll out today. The feature automatically dubs over speakers on a call in a target language, like an instant universal translator. Let's say you speak English and your meeting partner speaks Spanish: You hear them begin to speak in Spanish, before an AI voice takes over with the English translation.

'Try it on'
Google doesn't want you returning the clothes you order online anymore. The company announced a new feature called "try it on" that uses AI to show you what you'd look like wearing whatever clothing item you're thinking about buying. This isn't a mere concept, either: Google is rolling out "try it on" today to Google Search Labs users. If you want to learn more about the feature and how to use it, check out our full guide.

Android XR
As the rumors suggested, Google talked a bit about Android XR, the company's software experience for glasses and headsets. Most of the news it shared was previously announced, but we did see some interesting features in action. For example, when using one of the future glasses with Android XR built in, you'll be able to access a subtle HUD that can show you everything from photos to messages to Google Maps. (Personally, the main draw here for me would be AR Google Maps while walking around a new city.) On stage, we also saw a live demo of speech translation, with Android XR overlaying an English translation on screen as two presenters spoke in different languages. While there's no true timeline on when you can try Android XR, Google's big news is that it is working with both Warby Parker and Gentle Monster on making glasses with the service built in.

Veo 3, Imagen 4, and Flow
Google unveiled two new AI generation models at I/O this year: Imagen 4 (images) and Veo 3 (video). Imagen 4 now generates higher-quality images with more detail than Imagen 3, Google's previous image generation model. However, the company specifically noted Imagen 4's improvements with text generation. If you ask the model to generate a poster, for example, Google says that the text will be both accurate to the request and stylistically appropriate. Google kicked off the show with videos generated by Veo 3, so it's safe to say the company is quite proud of its video generation model. While the results are crisp, colorful, and occasionally jam-packed with elements, it definitely still suffers from the usual quirks and issues with AI-generated video. But the bigger story here is "Flow," Google's new AI video editor. Flow uses Veo 3 to generate videos, which you can then assemble as in any other non-linear editor. You can use Imagen 4 to generate an element you want in a shot, then ask Flow to add it to the next clip.
In addition to the ability to cut or expand a shot, you can control the camera movement of each shot independently. It's the most "impressive" this tech has seemed to me, but outside of a high-tech storyboard, I can't imagine the use for this. Maybe I'm in the minority, but I certainly don't want to watch AI-generated videos, even if they are created via tools similar to the ones human video creators use. Veo 3 is only available to Google AI Ultra subscribers, though Flow is available in limited capacity with Veo 2 to AI Pro subscribers.

Two new Chrome features
Chrome users can look forward to two new features following Google I/O. First, Google is bringing Gemini directly to the browser -- no need to open the Gemini site. Second, Chrome can now update your old passwords on your behalf. This feature is launching later this year, though you'll need to wait for the websites themselves to offer support.

A new way to pay for AI
Finally, Google is offering new subscriptions to access its AI features. Google AI Premium is now AI Pro, and remains largely the same, plus the new ability to access Flow and Gemini in Chrome. It still costs $20 per month. The new subscription is Google AI Ultra, which costs a whopping $250 a month. For that price, you get everything in Google AI Pro, but with the highest limits for all of the AI models, including Gemini, Flow, Whisk, and NotebookLM. You also get access to Gemini 2.5 Pro Deep Think (the company's newest and most advanced reasoning model), Veo 3, Project Mariner, YouTube Premium, and 30TB of cloud storage. What a deal.
[7]
Seven New Gemini Features Google Announced at I/O 2025
The Google I/O 2025 keynote could have more reasonably been called The Google AI Show. Almost everything the company talked about was AI-powered, some of which is promised to arrive in the future, and some of which is available today. Features were spread across Google's whole range of products, but here are some of the ones you're actually likely to see. It's tough to talk about Gemini because it simultaneously refers to a set of models (like Gemini Flash, Gemini Pro, and Gemini Pro Deep Research), different versions of those models (the latest seems to be 2.5 for most of these), and different apps that these models are available through. There's the dedicated Gemini app, the voice assistant in things like Pixel phones and watches, as well as Gemini tools built into apps like Google Docs, Gmail, or Search. I'll do my best to specify which features are coming to what products, but keep in mind that sometimes Google tends to announce the same thing a few times.

Agent Mode is coming to Gemini, Search, and more
The Gemini app is getting a new Agent Mode that can perform tasks for you while you do something else. Google showed off an example of asking Gemini to find apartments in a city. The app then searches listings online, filters them by the criteria you set, and can offer to set up apartment tours for you. The most interesting aspect of this is that Google pitches this as a task you can have Gemini repeat regularly. So, for example, if you want Gemini to search for new apartments every week, the app can repeat the process, carrying over information from previous iterations of the search. Agent Mode is similarly coming to Google Search for certain requests. Google uses the example of asking for tickets to an upcoming event. Google scours ticket listing sites, cross-references against your preferences, and presents the results.

Gmail will pretend to be you when it replies to your emails
Gmail has had smart replies for a while, but they can sound pretty generic (without intervention, anyway). It's a dead giveaway to your recipient that you're not really paying attention. To help you get away with quietly ghosting your friends, Gmail will soon be able to tailor its responses to you by referring to your past emails and even Drive documents. Google uses the example of a friend asking how you planned your recent vacation, a common thing we all email each other all the time. In this case, Gmail can draft a response based on your email history, with the advice you would be likely to give, and even write it how the AI thinks you would write it.

Thought summaries will summarize how AI summarizes its thought process
Yes, you read that right. AI "reasoning" models typically work by taking your query, generating text that breaks it down into smaller parts, sending those parts to the AI again, then carrying out each step (a rough sketch of this loop follows at the end of this piece). That's a lot of instructions happening behind the scenes on your behalf. Usually, reasoning models (including Gemini) will have a little dropdown to show you the steps they took in the interim. If even that is too much reading for you, Gemini will now summarize the summary of the thought process. In theory, this is to make it easier to understand why Gemini arrived at the answers it gives you.

Native audio output will whisper to you (in your nightmares)
This is technically a new feature of the Gemini API, which means developers can build on these tools in their apps. Native audio output will let developers generate natural-sounding speech.
In its demo, Google showed off voices that could switch between multiple languages, which was pretty cool. What isn't so cool, however, is that the model can also whisper. I do not yet know what the practical use cases are for an AI-generated voice that can whisper, but I do know I won't be able to get it out of my head for a week. At best.

Jules will fix your code's bugs in the background while you work
Last year, Google announced Jules, a coding agent that can help you with your code, similar to GitHub's Copilot. Now, the public beta of Jules is available. Google says Jules can fix bugs while you're working on other tasks, bump dependency versions, and even provide an audio summary of the changes that it's made to your code.

Google Search will let you virtually try on clothes while shopping online
I'm not great at visualizing what a piece of clothing will look like on my particular body, so this new try-on feature might actually be useful. Google is launching a Search Labs experiment that lets you upload a full-length photo of yourself that Google will alter to show what the clothing will look like on you. The company is also integrating shopping tools that can buy items for you and even track for the best price. It will then be able to buy stuff for you via Google Pay, using your saved payment and shipping info. This one isn't available quite yet, and frankly we'd want to learn a little more about how the process works and how to prevent purchases you don't want before we'd recommend using it.

New Veo and Imagen models will generate audio and video
Video is, definitionally, a series of images played at a fast enough speed to convey a sense of motion. With that definition, I can confidently say that the demos of Google's new Veo 3 model do, in fact, show video. Whether that video is any good is in the eye of the beholder, I suppose. Google seems to be betting on users finding the video generated by Veo 3 (and, by association, the images from Imagen 4) to be worthwhile, because the company is also building a video editing suite around it. Flow is a video editing tool that ostensibly lets editors extend and re-generate clips to get the right look. Google also says that Veo 3 can generate sounds to go along with its video. For example, in the owl scene linked above, Veo also generates forest sound effects. We'll have to see how it generates these elements (can you edit individual sounds distinctly, for example?) but for now the demos speak for themselves. Veo 3 is now available in the Gemini app for Ultra subscribers.
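As promised above, here is a rough sketch of the plan-then-execute loop behind "reasoning" models and their thought summaries. It is purely illustrative: call_model is a hypothetical stub standing in for a real LLM call, and the fabricated plan steps are placeholders, not Gemini's actual internals.

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call.
    return f"[model output for: {prompt[:48]}]"

def reason(query: str) -> tuple[str, str]:
    # 1. Planning pass (stubbed): a real system asks the model itself
    #    to break the query into smaller steps.
    plan = [f"sub-task {i} of '{query}'" for i in (1, 2, 3)]
    # 2. Execution passes: each planned step goes back through the model.
    trace = [call_model(step) for step in plan]
    # 3. Final answer, conditioned on the intermediate work.
    answer = call_model(f"answer '{query}' given {trace}")
    # 4. The "thought summary" is one more pass that condenses the raw
    #    trace, instead of showing every intermediate step in a dropdown.
    summary = call_model(f"summarize this reasoning: {trace}")
    return answer, summary

answer, summary = reason("compare two phone plans and pick the cheaper one")
print(answer)
print(summary)

The summary in step 4 is itself just another model call over the trace, which is why Google can bolt it on without changing how the underlying reasoning works.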
[8]
Google Goes After Apple, OpenAI and Meta With New AI Products
At its annual Google I/O developer conference in California, Google made it clear that AI is now central to everything it builds. From search and email to glasses and satellites, the company unveiled a sweeping range of AI-driven updates across its ecosystem, mainly driven by advancements in its Gemini model family. The tech giant officially replaced the Google Assistant with Gemini 2.5, which now acts as the intelligence layer across productivity tools, cameras, and more. A standout feature, Gemini Live, combines the camera, voice, and web access to deliver real-time, contextual answers -- an evolution of last year's Project Astra. Gmail also sees deeper integration, with Personalised Smart Replies allowing users to generate more natural responses. CEO Sundar Pichai said the feature even helps him respond to friends he might otherwise ignore, calling it "a way to be a better friend."

Google rebranded its AI subscriptions. The $20/month AI Premium plan is now AI Pro, while a new top-tier AI Ultra plan launches at $250/month, exceeding OpenAI's $200 ChatGPT Pro offering.

Under the hood, Gemini 2.5 Pro now leads benchmarks like WebDev Arena and LMArena. The model has been enhanced with LearnLM for education-focused use cases and a new experimental Deep Think mode that enables advanced reasoning on complex tasks, as measured on benchmarks such as USAMO and MMMU. For developers, Google added thought summaries for easier debugging, thinking budgets to balance latency and cost (sketched below), and new SDK support for open-source agent frameworks via MCP. Google introduced Gemma 3n, a mobile-first model optimised for phones, tablets, and laptops, developed in collaboration with Qualcomm and Samsung. Available now in early preview, it will soon integrate with Gemini Nano across Android and Chrome. A new coding agent named Jules can tackle codebase tasks and fix bugs asynchronously, expanding Google's AI-for-coding capabilities beyond what OpenAI's Codex or Cognition's Devin currently offer.

In creative tools, Imagen 4 enables photorealistic image generation, while Flow lets users type in scenes and characters to create AI-generated video clips. Meanwhile, Veo 3 adds realism and physics-aware animation to AI videos. Search now features an AI Mode tab -- essentially a chatbot embedded in search -- to assist with complex queries. Google is also working on mixed-reality glasses under the Android XR umbrella, showing off floating text, AR maps, and translations during I/O. It has partnered with Gentle Monster and Warby Parker, while Samsung's Project Moohan headset is slated for release later this year.

Pichai closed the event with two real-world AI initiatives. FireSat, an upcoming satellite network, will help detect wildfires early. Wing, Google's drone delivery service, was used to deliver supplies during Hurricane Helene and continues to expand its capabilities.
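On the "thinking budgets" mentioned above: a budget caps how many tokens the model may spend on internal reasoning before answering, trading answer quality against latency and cost. A minimal sketch, assuming the google-genai Python SDK's thinking options as publicly documented around this release (exact field names may differ; the API key and prompt are placeholders):

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Plan a 3-step rollout for a feature-flag system.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,   # max reasoning tokens; 0 disables thinking
            include_thoughts=True,  # return thought summaries with the answer
        )
    ),
)

for part in response.candidates[0].content.parts:
    prefix = "thought summary:" if part.thought else "answer:"
    print(prefix, part.text)

Raising the budget generally buys better answers on hard tasks at higher latency and cost; setting it low (or to zero) keeps responses fast and cheap for simple queries.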
[9]
Here's what happened at Google's I/O 2025
Google is once again upping the ante when it comes to its AI-infused products and services. It shouldn't come as a surprise when I say that Google went all in on AI - again - at this year's I/O developer conference. Businesses worldwide are launching AI products and services almost on a weekly basis as the tech cements itself as an integral part of our interaction with technology. And companies like Google and OpenAI are at its forefront. At the conference yesterday (20 May), Google announced that it is adding its Gemini AI assistant to Chrome, with the ability to clarify complex information and summarise content on a webpage. The tech giant is also testing out a feature that lets users try outfits on virtually. The feature is a "first-of-its-kind," said Google, allowing users to choose between billions of options to try out. The generation model for fashion "understands the human body and nuances of clothing," it claims. And there's so much more. From AI-infused everything to prototype augmented reality (AR) glasses, here's a rundown of some more of the big announcements Google made.

Competitor to Sora
Veo 3 is an AI video generator that can create and incorporate AI-generated audio. Accepting both text and image prompts, the generator outputs in 4K and lets users add sound effects, ambient noise and dialogue. The AI tool competes with OpenAI's Sora, launched late last year, but one-ups that tool with its audio capabilities. Yesterday, Google also launched Flow, an AI filmmaking tool made using Veo, Imagen and Gemini. Veo offers "state-of-the-art" generative tools for video, while Gemini makes prompting intuitive and Imagen offers text-to-image capabilities, the tech giant explained in its blog post. While still in "early days", Flow allows filmmakers to control shots and camera angles for its AI-generated scenes and build on and edit existing shots.

Upgrades to existing projects
First unveiled at last year's I/O conference, Project Astra was demonstrated using smart glasses to showcase the capabilities of real-time multimodal AI. Now, Astra will provide new experiences in Search and Gemini. With AI Mode, users can ask questions about what they are seeing through the smartphone's camera. Astra streams the live video and audio into an AI model and responds to their questions. Project Mariner, also launched last year, is getting its own set of upgrades. The experimental AI agent, which browses and uses websites, will now be brought to the Gemini API and Vertex AI, all part of Google's growing attempts to make AI tools a vital part of our interaction with tech. Meanwhile, Google hasn't announced any launch details for its Project Astra AR glasses yet, although some reporters who tried out the Gemini-powered glasses say the company "might actually pull AR glasses off".

Sniffing out AI
If it's not clear already, AI has seeped into most of the ways we create content, thanks in large part to many of these tech giants. And as generative AI tools get more advanced, it becomes harder and harder to detect their usage in content. As a result, Google launched SynthID Detector, a verification portal for SynthID, the technology that embeds "imperceptible" watermarks in content made with Google AI. The portal can detect those watermarks across different modalities, including image, video, audio and text. According to the company, more than 10bn pieces of content have already been watermarked using SynthID.
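SynthID's exact scheme is proprietary, so as a stand-in here is the well-known "green list" statistical watermark (in the style of Kirchenbauer et al.), a toy illustration of how an imperceptible text watermark and its detector can work. Everything in it (the vocabulary, the bias rate, the toy sampler) is made up for illustration; it is not Google's algorithm.

import hashlib, math, random

VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    # Seed a PRNG from the previous token so the vocab partition is
    # reproducible at detection time without storing any state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length: int = 200) -> list[str]:
    # Toy "model": sample uniformly, but prefer green-listed tokens 90%
    # of the time. A real model would bias logits instead.
    out, rng = ["<s>"], random.Random(0)
    for _ in range(length):
        greens = green_list(out[-1])
        pool = list(greens) if rng.random() < 0.9 else VOCAB
        out.append(rng.choice(pool))
    return out[1:]

def detect(tokens: list[str]) -> float:
    # z-score of the green-token count vs. the unwatermarked expectation.
    hits = sum(t in green_list(prev) for prev, t in zip(["<s>"] + tokens, tokens))
    n, p = len(tokens), 0.5
    return (hits - n * p) / math.sqrt(n * p * (1 - p))

print("watermarked z:", round(detect(generate_watermarked()), 1))              # large
print("unmarked z:", round(detect([random.choice(VOCAB) for _ in range(200)]), 1))  # near 0

The watermarked text scores a z of roughly 12 here, while unmarked text hovers near zero, which is the statistical gap a detector portal relies on.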
[10]
Google I/O 2025 in a nutshell
Google hosted its annual I/O developer conference at Mountain View's Shoreline Amphitheatre, unveiling a slew of updates across its portfolio, including Android, Chrome, Google Search, YouTube, and its AI-powered chatbot, Gemini, over two days starting Tuesday.

Google announced Google AI Ultra, a premium subscription ($249.99/month, U.S.-only for now) offering unparalleled access to its AI apps and services. Subscribers gain access to Veo 3, a video generator capable of creating sound effects, dialogue, and high-quality footage, surpassing its Veo 2 predecessor. Additionally, the plan includes Flow, a video editing app; Gemini 2.5 Pro Deep Think mode, an enhanced reasoning capability (currently in a trusted-tester phase awaiting safety evaluations); higher limits on NotebookLM and Whisk; agentic tools via Project Mariner; YouTube Premium; and 30TB of storage across Google Drive, Photos, and Gmail.

The Gemini app now has over 400 million monthly active users. Updates include the rollout of Gemini Live's camera and screen-sharing features (powered by Project Astra) to all iOS and Android users this week, enabling near-real-time conversations and content streaming. Future integrations will include Google Maps directions, Google Calendar events, and Google Tasks to-do lists. Deep Research, Gemini's AI agent, will soon support private PDF and image uploads.

Imagen 4, the latest AI image generator, boasts faster performance than Imagen 3, with a forthcoming variant promising up to a 10x speed increase. It excels at rendering fine details (e.g., fabrics, water droplets) in both photorealistic and abstract styles up to 2K resolution. Both Veo 3 and Imagen 4 will power Flow, aiding filmmaking.

Stitch, an AI tool for designing web and mobile app front-ends, generates UI elements and code from text or image prompts, offering customization options despite initial limitations. Access to Jules, an AI agent for debugging code, has been expanded. Project Mariner, Google's experimental AI agent, has been significantly updated to handle nearly a dozen concurrent tasks and is now available to users. It can perform actions like purchasing tickets or groceries without requiring users to visit third-party websites.

Project Astra, a low-latency multimodal AI experience, will enhance Search, Gemini, and third-party products. Collaborations with Samsung and Warby Parker are underway for Project Astra glasses, though no launch date is set. AI Mode, an experimental Google Search feature for complex queries, rolls out to U.S. users this week, supporting sports, finance, and "try it on" apparel features. Search Live, coming this summer, will enable real-time camera-based queries. Gmail is the first app to integrate personalized context.

Beam (formerly Starline), a 3D teleconferencing system, offers "near-perfect" millimeter-level head tracking and 60fps video streaming, along with AI-powered real-time speech translation in Google Meet. Additional updates include Gemini's integration into Chrome, offering an AI browsing assistant; Gemma 3n, a model for seamless performance across devices (available in preview); and numerous AI Workspace features for Gmail (personalized smart replies, inbox cleaning), Google Docs, and Google Vids (content creation/editing tools). Wear OS 6 introduces a unified font for tiles, dynamic theming for Pixel Watches, and a design reference platform with guidelines and Figma files for developers.
Google Play enhancements for Android developers include tools for managing subscriptions (with multi-product checkout and add-ons), topic pages for specific interests, audio samples, and streamlined checkout processes. Developers also gain dedicated testing/release pages and the ability to halt problematic live app releases. Android Studio incorporates AI features like "Journeys" (agentic AI with Gemini 2.5 Pro) and "Agent Mode" for intricate development tasks. An enhanced "crash insights" feature analyzes source code to identify and suggest fixes for app crashes.
[11]
Google I/O 2025: Here Are All the Major AI Announcements
AI Mode in Search is getting several shopping-focused features. The keynote at the Google I/O 2025 developer conference on Tuesday was a packed one. During the session, company CEO Sundar Pichai and other executives announced a plethora of new artificial intelligence (AI) updates and features. Some of these include new capabilities in the Gemini 2.5 series of AI models, updates to AI Mode in Search, expansion of AI Overviews, introduction of the new 3D communication platform Google Beam, and a demonstration of the Android XR platform. In case you did not catch the event live, here's a quick roundup of everything that was announced.

The tech giant's Project Starline is now being introduced as Google Beam, a 3D communications platform. It uses an array of six cameras to capture a video stream of the user from different angles. An AI system then combines them to turn the 2D feeds into a 3D light field display. The company is also using head-tracking sensors to accurately capture the user at 60 frames per second (fps). Google is working with HP to introduce the first Google Beam devices later this year. The initial devices will only be provided to select customers. Additionally, Google Beam products from original equipment manufacturers (OEMs) will be made available via InfoComm 2025, which is set for June.

The Gemini 2.5 series is also getting several new capabilities. A new Deep Think mode, described as an enhanced reasoning mode, is being added to the 2.5 Pro model; the feature is currently under testing. Native Audio Output, an expressive and human-like speech generation capability, is also being added to Gemini 2.5 models via the Live application programming interface (API). Google is also updating the Gemini 2.5 Flash model with improved capabilities across reasoning, multimodality, code and long context, and the model will be more cost-effective to use. Developers using the Gemini API will also get thought summaries and thinking budgets with the latest models.

Another major talking point from the keynote was AI Mode in Search. Google now plans to power the end-to-end AI search experience with a custom Gemini 2.5 model. AI Mode is also getting a new Deep Search mode, a Search Live feature that lets the AI tool access the device's camera, and a new agentic feature that lets users purchase event tickets and book appointments directly from the interface. AI Mode in Search is also getting new shopping-focused features. Users will get to visually search for the product they want, try out a wide selection of apparel virtually just by uploading a picture of themselves, and use AI agents to track product prices and make purchases automatically. These features will be added later this year.

The Mountain View-based tech giant announced the expansion of AI Overviews during the keynote. The AI-powered search result snapshot feature will now be available in more than 200 countries and over 40 languages. With this update, it will support Arabic, Chinese, Malay, and Urdu, which join existing language options such as English, Hindi, Indonesian, Japanese, Portuguese, and Spanish.

Google also showcased a demo of its new Gemini-powered Android XR platform. It will be the operating system for Samsung's upcoming Project Moohan headset, and the company is working with other wearable partners as well. Android XR smart glasses will feature a camera, microphone, speakers, and an in-lens display.
Users will be able to have hands-free conversations with Gemini, ask it to capture images, control their smartphone and other connected devices, and more.

Google also unveiled the next generation of its image generation model, Imagen 4, and its video generation model, Veo 3. Imagen 4 comes with improved text rendering and contextual understanding of text placement, as well as improved image quality and prompt adherence. With Veo 3, the company is adding native audio generation, which means generated videos will now feature ambient sounds, background music and dialogue. Both models will be released to the public later this year. The company is also launching a new AI-powered filmmaking app dubbed Flow. It leverages Imagen, Veo, and Gemini to generate eight-second-long video clips, and multiple clips can be stitched together to create a longer scene. The app accepts both text and images as prompts.

Paid subscribers will now be able to access the Gemini AI assistant within Google Chrome. A new Gemini button will let users summarise a web page or ask questions about its content. It can also navigate websites automatically, based on user prompts, and can work across multiple tabs at the same time.

Google also unveiled Stitch, a new AI-powered tool that can generate app interfaces based on text prompts and templates. The app also supports wireframes, rough sketches, and screenshots of other user interface (UI) designs, and is currently available as an experiment via Google Labs.

The company is also adding a new AI feature to Google Meet. The video conferencing platform will now support real-time speech translation, helping speakers of different native languages converse with minor lag. Currently, the feature can translate between English and Spanish and is available to paid subscribers in beta.

Finally, the tech giant introduced the Google AI Pro and Google AI Ultra plans for its suite of Gemini features. The former replaces the Google One AI Premium plan and will be available for $19.99 (Rs. 1,950 in India) per month, while the Google AI Ultra plan will cost $249.99 (roughly Rs. 21,000) a month. The latter will get all the new features first, offer higher rate limits, and provide 30TB of cloud storage.
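For developers, the thought summaries and thinking budgets mentioned above are worth a closer look. The sketch below, based on the google-genai Python SDK, shows how a request might cap reasoning tokens and ask for summarized thoughts. The exact field names (thinking_budget, include_thoughts) reflect the SDK at the time of writing and may differ across versions, so treat this as illustrative rather than definitive.

```python
# Sketch: capping reasoning effort and requesting thought summaries
# with the google-genai Python SDK. Field names may vary by SDK version.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="How many prime numbers are there below 100?",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,   # cap tokens spent on internal reasoning
            include_thoughts=True,  # return summarized reasoning steps
        )
    ),
)

# Thought-summary parts are flagged distinctly from the final answer.
for part in response.candidates[0].content.parts:
    label = "THOUGHT" if getattr(part, "thought", False) else "ANSWER"
    print(f"[{label}] {part.text}")
```

The budget is a ceiling, not a target: a model may use fewer reasoning tokens on easy prompts, which is what makes the knob useful for cost control.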
[12]
10 announcements from Google I/O 2025 I'm most excited about | Stuff
After the biggest news from Google I/O 2025? We've got you covered. Having tuned into Google's big event and watched the flood of AI announcements cascade in, one thing's clear - Google is positioning itself as an AI brand now. From smarter search to virtual try-ons and glasses that whisper directions in your ear, here are the best 10 announcements in bite-sized format.

Smart glasses have flirted with AI before (I'm looking at you, Meta Ray-Bans), but Google's latest Android XR push - now paired with Gemini - might actually make them useful enough to wear all the time. Running on the new Android XR platform, these glasses (developed with partners like Gentle Monster and Warby Parker) blend AI assistance with surprisingly wearable designs. They see and hear what you do, serve up helpful suggestions to an optional in-lens display, and keep your hands free while navigating your day. Whether you're translating conversations in real time, firing off a text, or snapping photos with a blink, it's like having a helpful assistant on your face. Even better, Xreal is jumping into the mix too, with plans to bring its own glasses into the Android XR ecosystem. Expect a wave of Gemini-powered headsets and wearables later this year, starting with Samsung's Project Moohan.

Video making just got democratised in a big way. And I'm also slightly terrified. Veo 3 is Google's newest generative video model, and it doesn't just render stunning 1080p scenes - it adds sound, too. We're talking street ambience, background music, and even believable character dialogue, all from a prompt. To go with it, Google launched Flow, a new AI filmmaking tool purpose-built for creatives. You can storyboard, manage assets like characters and props, and sequence scenes with cinematic polish, all by describing your ideas. There are even camera controls, continuity features, and reference styles to keep everything visually coherent. It's available now for Google AI Pro and Ultra users in the US.

It's a sad state of affairs when we get excited that an image generator can finally spell properly - but here we are. Imagen 4 isn't just about better textures and photorealism (although it's very good at both) - it also gets typography right. Posters, comics, and slides should all be usable now. No more garbled nonsense text that makes your creations look like a ransom note. It's fast, flexible with aspect ratios, and supports resolutions up to 2K, making it ideal for everything from Instagram flexing to full-blown print layouts. Imagen 4 is now live in the Gemini app and Workspace apps like Docs and Slides.

Google's take on the future of software development isn't a sidekick - Jules is a full-blown autonomous coding agent that plugs into your existing repos, clones your project into a secure VM, and just... gets to work. It writes new features, fixes bugs, updates dependencies, and even narrates the changes with an audio changelog. I absolutely love that last part. You can watch its reasoning, edit its plan on the fly, and stay in control without doing the actual slog. It's powered by Gemini 2.5 Pro and available now in public beta globally, wherever Gemini is available.

We've all had those moments where describing the problem feels harder than fixing it. Gemini Live now lets you point your camera at whatever's giving you grief - be it a form you don't understand or a baffling piece of IKEA furniture - and talk it through.
With camera and screen sharing now available for free on Android and iOS, it's already becoming an easy way to get answers to your questions. Gemini Live will soon integrate with Google Maps, Calendar, Tasks and Keep too, meaning you can show it your dinner-plan chaos and have it suggest a time and a place, and actually create the event.

Google Search is now less "here are some results" and more "here's your answer and I bought the tickets." AI Mode is rolling out in the US with advanced reasoning, a multimodal interface, and the ability to follow up like an attentive conversation partner. It can interpret long, detailed questions, and even handle real-world interactions - like analysing ticket listings or booking appointments. You can also shop smarter, with a visual browsing experience, virtual try-ons using your own photo, and an agentic checkout that'll buy your item when the price dips. Obviously, this is what Google sees as the future of search. While some of these features definitely seem useful, I'm not sure I'm sold on using them all the time. Fortunately, AI Mode exists alongside regular Search. For now, at least. But if the God-awful AI Overviews are anything to go by, Google will transition this to the default in the near future. Though, Google says people are actually using AI Overviews. So maybe it does know best.

Gemini 2.5 Pro is already a monster of an AI model, but now it's getting an experimental mode called Deep Think. Designed for tasks that require actual reasoning - like solving complex maths or competitive coding problems - it uses new techniques to consider multiple solutions before deciding what to say. It's been tested on brutal academic benchmarks and is currently reserved for trusted testers, but the results so far are ridiculously impressive.

Google is finally putting all that data it's quietly been collecting - sorry, respectfully managing - to actual good use. With your permission, Gemini can pull in personal context from Gmail, Drive, Calendar, and more to provide answers that actually reflect your life. New Smart Replies in Gmail promise to match your tone and include info from old itineraries or past messages. Deep Research now lets you add your own documents for richer insights, and Canvas lets you turn those insights into apps, visuals, even podcasts. It's personalisation that actually feels useful, not just creepy. Although the personalised replies I've used in Superhuman haven't been all that helpful, so hopefully Google does a better job.

We've all been on too many soul-sucking video calls that leave us staring at a pixelated freeze-frame of our own disappointment at this point. Beam wants to change that. Born from the now-retired Project Starline, it's a new 3D video platform that uses six cameras and real-time rendering to make it feel like you're actually in the room with someone. Facial expressions, eye contact, and body language all get captured and displayed with millimetre-precise head tracking on a 3D lightfield screen. Think Apple's Personas from the Vision Pro headset. The first Beam devices are coming later this year in partnership with HP. And while it's not quite a hologram yet, it does put us one step closer. Which is undeniably cool.

Online shopping is equal parts convenience and chaos. But Google's new AI Mode makes it feel more like chatting with a knowledgeable shop assistant. Say you're looking for a bag that'll hold up in rainy weather.
AI Mode fans out multiple searches, checks waterproofing, capacity, and brand ratings, then shows you a visual panel of curated suggestions. It can bring in Personal Context, so if you're shopping for dog or kid toys, it'll know their names. But the best part has to be the fact that you can now try clothes on virtually using a photo of yourself. A number of small startups have been working on this problem, but now it's baked right into Google. Fitting rooms are one of the worst parts of going shopping (and there are many), so this makes things more convenient than ever. And when you're ready to buy, an agentic checkout will handle it via Google Pay - the idea is sketched in code below. It's live in Search Labs in the US today and will roll out to more users soon.

If there's one theme from Google I/O 2025, it's that the search giant is doubling down on making AI useful, not just smart. With so many of these tools already live or landing soon, it's clear Google is done teasing and ready to deliver. In fairness, some of Google's newest announcements are undeniably impressive. But AI fatigue is definitely setting in. And I can see a real possibility where Google Search gets ruined (even more) in the near future. So watch this space for whatever comes next.
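As a thought experiment, the "buy it when the price dips" behaviour is easy to picture in code. Everything in the sketch below (the polling loop, get_price, checkout) is a hypothetical stand-in; Google's actual agent runs inside Search and completes purchases through Google Pay.

```python
# Hypothetical sketch of price-dip agentic checkout. get_price and
# checkout are stubs; the real agent lives inside Google Search.
import random
import time

TARGET_PRICE = 45.00   # user-set threshold, in dollars
POLL_INTERVAL_S = 1    # shortened for the demo; a real watcher might poll hourly

def get_price(product_id: str) -> float:
    # Stub: pretend to fetch the live listing price.
    return round(random.uniform(40.0, 60.0), 2)

def checkout(product_id: str) -> None:
    # Stub: a real agent would complete payment on the user's behalf.
    print(f"Purchased {product_id}")

def watch(product_id: str) -> None:
    while True:
        price = get_price(product_id)
        if price <= TARGET_PRICE:
            checkout(product_id)  # the price dipped: buy and stop watching
            return
        time.sleep(POLL_INTERVAL_S)

watch("waterproof-bag-123")
```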
[13]
Key takeaways from Google I/O 2025: Gemini, Search in focus
At its flagship Google I/O 2025 keynote on May 20, Alphabet's Google showcased sweeping upgrades across its artificial intelligence (AI) offerings, underlining a strategic shift to embed generative AI across its product ecosystem. Key announcements:

AI Mode in Search: AI Mode introduces deeper personalisation by incorporating signals from users' Gmail accounts and previous searches. It is now rolling out across the United States, with global expansion expected later. Gemini is also being integrated across Google's apps: users will soon be able to get directions, set reminders, or interact with Gmail and Calendar through natural conversations.

New image and video models: Google unveiled two generative models, Imagen 4 for text-to-image generation and Veo 3 for video. Imagen 4 delivers higher visual fidelity and improved text rendering. Veo 3 adds smoother motion and AI-generated soundtracks for more realistic video output.

Beam brings 3D video calls: Beam, Google's new 3D video conferencing system, uses a six-camera array and a custom light field display to create lifelike remote meetings. Google is partnering with HP and Zoom to bring Beam to enterprise users, with devices expected to launch soon.

Smart glasses on Android XR: The tech major has introduced Android XR, an extended reality operating system to support mixed reality on headsets and glasses. In a demo, smart glasses handled real-time translation, navigation and contextual assistance. Partners, including eyewear makers Warby Parker and Gentle Monster, are developing XR-enabled products.
[14]
Google I/O Recap 2025: How AI (and Gemini) Will Redefine Your Digital Life
Google's 2025 developer conference, Google I/O, unveiled a series of new innovations that promise to reshape the digital landscape. From advancements in artificial intelligence (AI) to breakthroughs in augmented reality (AR) and enhanced app integration, the event underscored Google's commitment to creating a smarter, more interconnected future. These announcements highlight how technology is evolving to meet your needs in increasingly intuitive and practical ways.

One of the most notable announcements was Gemini Live, an innovative AI-powered assistant designed to provide real-time help. By using your device's camera and screen-sharing capabilities, Gemini Live offers practical solutions that integrate seamlessly into your daily life. What sets Gemini Live apart is its deep integration with Google apps such as Calendar, Maps, Tasks, and Keep. Whether you're organizing your day, navigating a new city, or managing a to-do list, this tool provides context-aware support tailored to your specific needs. Its ability to adapt to your activities ensures that it remains a valuable companion in both personal and professional settings.

Google introduced new subscription models aimed at expanding access to its advanced AI tools. The AI Ultra plan, priced at $250 per month, offers premium features including YouTube Premium access and an impressive 30TB of cloud storage. For users seeking a more budget-friendly option, the rebranded Google AI Pro plan is available for $19.99 per month. A standout feature of these plans is Veo 3, Google's model for creating AI-generated videos. With capabilities that include advanced visuals, audio, and dialogue generation, Veo 3 is ideal for content creators and businesses looking to produce professional-quality videos efficiently, and it represents a significant step forward in simplifying video production while maintaining high standards of quality.

Google Search has been significantly enhanced with the integration of the Gemini 2.5 AI model, offering a more intelligent and personalized search experience. The introduction of multi-query searches allows you to explore complex topics in a single session. For example, you can simultaneously research travel destinations, compare flight options, and book accommodations, all within one seamless process. The AI-powered search also adapts to your preferences by analyzing your activity within Google apps, ensuring that results are tailored to your interests and needs. Additionally, agentic AI capabilities enable you to complete tasks such as purchasing event tickets or making reservations directly from the search interface, streamlining your online interactions and saving you time.

Gemini AI has been deeply integrated into Google apps, enhancing their functionality with context-aware intelligence. In Gmail, for instance, the AI can draft emails that reflect your tone and writing style, helping you save time while maintaining a personal touch. This feature is particularly useful for professionals managing high volumes of communication. Chrome users benefit from tools that summarize lengthy articles and answer questions about open tabs, making web browsing more efficient. Additionally, a new password auto-update feature in Chrome enhances security by automatically updating compromised credentials, helping keep your online accounts protected.
Among the most anticipated announcements was the launch of Android XR glasses, which represent a significant leap forward in augmented reality technology. These glasses offer advanced features such as live translation, turn-by-turn navigation, and image recognition, providing real-time insights and assistance in various scenarios. Equipped with cameras, microphones, and speakers, the glasses integrate seamlessly with Gemini AI to deliver a highly interactive experience. Google has partnered with brands like Warby Parker and Gentle Monster to ensure the glasses are not only functional but also stylish, blending innovative technology with modern design. This combination of practicality and aesthetics makes the Android XR glasses a compelling addition to the AR market.

Google is also emphasizing inclusivity by making its new features compatible across multiple platforms, including iOS and Android. This cross-platform approach allows you to benefit from Google's innovations without being confined to its ecosystem, ensuring that users across different devices can enjoy a seamless experience.

Google I/O 2025 showcased a bold vision for the future, emphasizing advancements in AI, AR, and app integration. From the real-time capabilities of Gemini Live to the innovative Android XR glasses, these developments are set to transform how you interact with the digital world. By focusing on personalization, accessibility, and cross-platform compatibility, Google is pushing the boundaries of what technology can achieve, paving the way for a more intelligent and connected future.
[15]
New Google AI Features And Tools Unveiled from Google I/O 2025
What if your favorite Google tools suddenly became smarter, more intuitive, and capable of transforming the way you work, create, and connect? With its latest wave of AI-driven innovations at Google I/O 2025, Google is doing just that, ushering in a new era where artificial intelligence seamlessly integrates into everyday life. From generative AI models that redefine creativity to real-time translation tools breaking down language barriers, these advancements aren't just incremental updates; they're bold steps toward a future where technology feels less like a tool and more like an extension of your mind. Whether you're a casual user exploring AI for the first time or a developer seeking innovative solutions, Google's latest offerings promise to reshape how we interact with the digital world.

Here is a closer look at the tools and features Google unveiled at I/O 2025, and how AI is transforming everything from content creation to task automation. You'll discover how the Gemini 2.5 Pro model sets a new benchmark for problem-solving, why the Veo 3 video generation tool is a compelling option for multimedia creators, and how AI-powered search is redefining how we find information. With flexible subscription plans and experimental tools like SynthID Detector and NotebookLM, Google is making advanced AI accessible to a broader audience while pushing the boundaries of what's possible.

To meet the diverse needs of its users, Google has introduced two distinct subscription tiers, each tailored to a different level of AI engagement. These options let users scale their AI usage according to their specific needs, whether for personal exploration, professional tasks, or advanced development projects.

At the heart of Google's AI advancements lies the Gemini 2.5 Pro model, a powerful tool engineered for complex reasoning and analysis. The model excels at processing extensive datasets, solving intricate problems, and supporting activities like coding and advanced mathematics. By surpassing industry benchmarks, Gemini 2.5 Pro establishes itself as a new standard for AI-driven problem-solving, offering developers and researchers a robust resource for tackling sophisticated challenges.

Google's Veo 3 video generation model is transforming multimedia creation by combining video and audio generation in a single, cohesive tool. The model delivers enhanced realism, improved physics, and advanced sound design, empowering creators to produce high-quality, customized content with ease and opening up new possibilities for storytelling, marketing, and entertainment.

The updated Imagen model takes text-to-image generation to new levels of realism and precision. Designed to adhere closely to user prompts, Imagen enables the creation of visually stunning and highly customized images.
Comparable to leading image-generation models, Imagen is a valuable tool for designers, marketers, and content creators, helping them bring their ideas to life with exceptional clarity and detail. This advancement underscores Google's commitment to providing tools that enhance creativity and productivity across industries.

Google has reimagined its search capabilities with AI-powered enhancements, creating a more intuitive and interactive experience. The new search interface offers conversational and modular results, allowing users to engage with information in a more dynamic way. Future updates may include features like custom app generation directly within search results, further expanding the platform's utility. Additionally, Google is transforming online shopping with tools such as virtual try-ons using 3D body modeling, providing a more engaging and personalized retail experience.

Google Meet now features real-time translation, enabling seamless communication between languages such as Spanish and English. This is particularly valuable for international collaboration, ensuring that language differences no longer pose a barrier to effective communication.

Project Mariner, an AI-powered computer agent, introduces sophisticated task-automation features. Access to Project Mariner is exclusive to Ultra plan subscribers, underscoring its premium positioning within Google's AI offerings. The tool represents a significant step forward in automating complex workflows, making it an invaluable resource for professionals and organizations.

Google has also introduced several tools designed to streamline development processes and enhance productivity, aiming to let developers focus on creativity and problem-solving while AI handles routine or complex tasks.

Google is embedding AI assistants directly into the Chrome browser, offering users real-time assistance and insights while browsing. This integration is expected to transform how users interact with web content, making the browsing experience more intuitive, efficient, and productive. By incorporating AI into everyday tools like Chrome, Google is making advanced technology accessible to a broader audience.

Google Labs continues to push the boundaries of AI innovation with a range of experimental tools, and beyond its headline features, Google introduced several other noteworthy updates spanning research, development, e-commerce and beyond.
[16]
Google I/O 2025: New Gemini features, Imagen 4, Veo 3, and AI subscription plans
Google announced new updates to its AI products at the Google I/O 2025 event. These include improvements to the Gemini app, new models for video and image creation, and fresh tools for creators and developers.

Beginning May 20, 2025, Gemini Live allows users to use their phone camera to interact with objects and talk about them in real time, with free camera and screen sharing available on both Android and iOS. Soon, Gemini Live will integrate with daily apps: it can add events to Google Calendar or fetch local info, like pizza options, from Google Maps. Users can control app connections and data via settings. Free access is extended to university students in the U.S., Japan, Brazil, Indonesia, and the United Kingdom. The new Google AI Ultra plan is now offered in the U.S. with a 50% introductory discount for the first three months.

Google has applied SynthID watermarks to over 10 billion AI-generated images, videos, audio clips, and texts since 2023 to help track AI content and reduce misinformation. Now, SynthID Detector is available to verify whether a file contains this watermark, helping users confirm AI origin.
[17]
Google I/O 2025: Here is every interesting announcement made last night
At Google I/O, the company offered a glimpse into its Android XR-powered glasses. Google hosted its much-awaited I/O 2025 keynote last night, and just as many expected, AI took centre stage. The company revealed a wide range of AI-powered tools and upgrades that are set to change the way we search, create content and even shop online. From smarter replies in Gmail to better AI image and video generators, Google is weaving AI into almost every product in its ecosystem. If you missed the event, don't worry; I've rounded up every major announcement that stood out during Google I/O 2025.

Google has announced that starting this week, all users in the US will get access to AI Mode, a new tab that lets you search the web using Google's Gemini AI chatbot. This summer, the company plans to test new features for AI Mode, including deep search capabilities and the option to create charts for finance and sports queries. Shopping through AI Mode will also be available in the coming months.

The 3D video chat tool Project Starline now has a new name: Google Beam. It will be available inside an HP-branded device with six cameras and a light field display to show lifelike 3D images. Google also unveiled Imagen 4, its latest AI text-to-image generator, which improves text generation and supports exporting images in more formats, like square and landscape. Meanwhile, the tech giant's next-generation AI video generator, Veo 3, will enable users to create videos with synchronised sound. During its I/O event, Google also introduced Flow, a new AI filmmaking app which combines Veo, Imagen and Gemini to generate 8-second video clips.

Gemini is coming to Chrome for AI Pro and Ultra users. It can summarise pages and help you navigate websites. Gmail will soon offer smarter replies based on your inbox and tone. Search Live lets you point your phone camera at something and talk to Google AI about it in real time. Meanwhile, Project Astra can now speak based on what it sees, like pointing out mistakes in your homework. Google Meet is getting a new feature that can translate your speech into your conversation partner's preferred language almost instantly. For now, the feature supports only English and Spanish and is available in beta for Google AI Pro and Ultra subscribers.

Google is experimenting with a new feature that lets you upload a full-length photo of yourself to preview how shirts, pants, dresses, or skirts would look on you. The feature uses an AI model designed to understand the human body and the subtle details of clothing. Google also plans to let AI handle shopping and checkout tasks for you.

The company also offered a glimpse into the Android XR-powered glasses. The live demo showcased how these smart glasses could seamlessly fit into everyday life, letting users message friends, book appointments, get turn-by-turn directions, snap photos and more. One standout moment was a live language translation between two people.
Google's annual developer conference showcases a range of AI-powered innovations, including updates to Gemini, new AR/VR hardware, and AI-enhanced search and shopping experiences.
Google's annual developer conference, I/O 2025, placed artificial intelligence at the forefront of its product strategy, unveiling a slew of AI-powered innovations and updates across its ecosystem [1][2][3]. The nearly two-hour keynote, led by CEO Sundar Pichai, emphasized Google's commitment to integrating AI into every aspect of its products and services.
Google's AI model, Gemini, received significant attention with the introduction of Gemini 2.5 Pro, touted as the company's most powerful model yet [3]. The updated AI assistant promises enhanced capabilities in coding, math, and creative tasks. Google also announced Gemini 2.5 Flash, a more affordable version, and Gemini 2.5 Pro Deep Think for complex problem-solving [3].
In a move that sparked debate, Google introduced two new AI subscription plans [2][4]: Google AI Pro, at $19.99 per month, replaces the Google One AI Premium plan, while Google AI Ultra, at $249.99 per month, offers the highest usage limits, earliest access to new features, and 30TB of storage.
Google unveiled "AI Mode" for its search engine, designed to handle more complex queries and provide more comprehensive results 13. The new feature incorporates "query fan-out technique" to break down and process multi-part searches 3. Additionally, Google showcased AI-powered shopping features, including virtual try-ons and price tracking 23.
Google finally revealed its post-Google Glass plans for augmented and virtual reality [3]. The company introduced Android XR, an operating system for both immersive headsets and smart glasses. Key announcements included Samsung's Project Moohan headset, Xreal's entry into the Android XR ecosystem, and eyewear partnerships with Warby Parker and Gentle Monster.
The company introduced Google Beam, an AI-powered 3D video conferencing platform that creates realistic 3D models of participants without the need for special glasses [4][5]. This technology aims to make remote meetings feel more like in-person interactions.
Google Flow, a new AI-powered filmmaking tool, was also unveiled, combining the capabilities of Imagen 4 for image generation and Veo 3 for video creation [2][5]. These tools promise to streamline the creative process for content creators and filmmakers.
While the innovations were met with excitement, some analysts and users expressed concerns about data privacy and the rapid integration of AI into Google's core products [4]. Questions were raised about the transparency of data usage in AI training and the potential for misinformation in AI-generated search results.
As Google continues to push the boundaries of AI integration, the company faces the challenge of balancing innovation with user trust and ethical considerations. The announcements at I/O 2025 clearly position Google as a leader in AI technology, but the true impact of these advancements will be determined by user adoption and real-world performance in the coming months.