Curated by THEOUTPOST
On Wed, 14 Aug, 8:01 AM UTC
12 Sources
[1]
Did Google just steal Apple's AI thunder, or has Tim Cook got an ace in his back pocket?
I'm sitting here basking in the afterglow of all the new products Google just showed at its Made by Google event, where it launched a raft of new Google Pixel 9 phones, the Google Pixel Watch 3 and the Google Pixel Buds Pro 2. While the new phones would normally be the stars of the show, this time they played second fiddle to the software that ran on them, specifically Gemini Live, Google's all-encompassing AI digital assistant that's baked right into the heart of Android 15. Google demonstrated the best AI feature I've seen so far - the ability to talk to your phone and have it answer back, as if it were a human being. This is a next-level digital assistant, and Gemini Live stole the show, despite a cringe-worthy moment when the live demo failed spectacularly, twice. Live demos at Google product launches have a long history of going badly, so I'm surprised that Google continues to do them. Gemini Live lets you do more than just talk, too. Take a picture of a poster for an event and then ask Gemini if you're free on that date to go and see the show, or take a picture of your fridge and ask what you could cook with those ingredients, and it will come up with some helpful suggestions. Of course, nothing in the AI world ever seems to be quite finished. Gemini's most useful integration features are its ability to look in your Calendar and Gmail apps to answer really helpful digital assistant-style questions, like, "Can you remind me what Tim said about the bowling game next week?". But these features have a 'coming soon' label applied to them. I'm sure they will arrive before long, but AI is slowly gaining a reputation for rocky rollouts. For instance, Bloomberg has already cast doubt on Apple's ability to have Apple Intelligence fully functioning in time for the launch of the iPhone 16 this September. AI is still struggling to find a valid reason to exist on our devices, but thanks to Google Gemini, the main benefit of AI is getting clearer and clearer.
It's all about fundamentally changing the way you interact with your phone. In comparison, the AI image manipulation features that both Apple and Google have shown off so far are impressive, but they feel a bit like gimmicks that you'll try once or twice and then forget about; good for product demos, but not much else. When it comes down to who has the better AI, Apple or Google, we'll only know once the iPhone 16 launches and we get to quiz Siri beyond its beta versions. We know that Google has now set the bar very high for natural conversation with an AI, thanks to Gemini Live, but the killer feature that Apple could beat it on is price. Apple Intelligence will be free to everyone lucky enough to own an iPhone 15 Pro or better, or a Mac with an M-series processor. Gemini Live is for Gemini Advanced subscribers only, and that costs $20 a month. So for once, Apple seems about to beat Google at its own game, by releasing a free service.
[2]
Gemini AI Gets a Boost on Pixel 9 and Pixel Buds Pro 2
Lisa joined CNET after more than 20 years as a reporter and editor. Career highlights include a 2020 story about problematic brand mascots, which preceded historic name changes, and going viral in 2021 after daring to ask, "Why are cans of cranberry sauce labeled upside-down?" She has interviewed celebrities like Serena Williams, Brian Cox and Tracee Ellis Ross. Anna Kendrick said her name sounds like a character from Beverly Hills, 90210. Rick Astley asked if she knew what Rickrolling was. She lives outside Atlanta with her son, two golden retrievers and two cats. Google's Gemini AI model was squarely in the spotlight Tuesday at the company's Made by Google event, where the tech giant also introduced a lineup of new Pixel 9 phones, along with a smartwatch and earbuds. The executives who took the stage mentioned Gemini 115 times over the course of the 80-minute presentation. That included mentions of the chatbot itself, as well as Gemini products such as Gemini Live, Gemini Nano and Gemini Advanced. There were also repeated references to "the Gemini era." Case in point: "We're fully in the Gemini era, with AI infused into almost everything we're doing at Google, across our full tech stack," Rick Osterloh, senior vice president of platforms and devices at Google, said at the Mountain View, California, event. "It's all to bring you the most helpful AI." Google execs also talked up the theme of helpful AI as they highlighted how they think AI will change the way we use our devices. This comes as competitors like ChatGPT maker OpenAI also try to convince us to talk to chatbots and let AI do more of the heavy lifting in search and other daily activities, like checking dates on a calendar or messaging a friend. In Google's case, more-powerful devices mean we can do more with generative AI beyond our laptops and tablets. But, as Google's Dear Sydney ad mishap during the Paris Olympics demonstrated, there's still a gap between what we're willing to do with AI and what tech companies think we want from AI. 
While we got a preview of most of Tuesday's Gemini news at Google's I/O developer event in May, there were two new hardware-specific updates worth highlighting: Generative AI can yield impressive results in creating images and crafting emails, essays and other writing, but it requires a lot of power. A recent study found that generating a single image with an AI model uses as much energy as fully charging your phone. Typically, this is the kind of power you find in data centers. But when Pixel 8 devices came out in October, Google introduced its first AI-specific processor. This powerful silicon chip helps make on-device generative AI possible -- "on device" meaning the processing happens on your phone, not in a far-off and costly data center. The Tensor G3 processor was first. Now we have the Tensor G4, which was developed with Google's AI research laboratory DeepMind to help Gemini run on Pixel 9 devices and power everyday activities like shooting and streaming videos with less of a hit to the battery. Google calls Tensor G4 "our fastest, most powerful chip yet." According to Shenaz Zack, senior director of Pixel product management, that means 20% faster web browsing and 17% faster app launching than with the Tensor G3. She noted that the TPUs in the Tensor G4 can generate a mobile output of 45 tokens per second. Here's what that means: TPUs are tensor processing units. They help speed up generative AI. Tokens are pieces of words. AI models are like readers who need help, so they break down text into smaller pieces -- tokens -- so they can better understand each part and then the overall meaning. One token is the equivalent of about four characters in English. That means the Tensor G4 can generate roughly three sentences per second. The Tensor G4 is the first processor to run the Gemini Nano with Multimodality model, the on-device AI model that helps your Pixel 9 phone better understand the text, image and audio inputs you make. 
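The throughput claim above can be sanity-checked with some back-of-the-envelope arithmetic. In this sketch, the 45 tokens per second figure comes from Google; the four-characters-per-token and roughly-60-characters-per-sentence averages are our own rough assumptions, not Google's numbers.

```python
# Back-of-the-envelope check of the Tensor G4 throughput claim.
TOKENS_PER_SECOND = 45    # figure quoted by Google for the Tensor G4's TPUs
CHARS_PER_TOKEN = 4       # assumed average for English text
CHARS_PER_SENTENCE = 60   # assumed average sentence length

# 45 tokens/s * 4 chars/token = 180 characters per second
chars_per_second = TOKENS_PER_SECOND * CHARS_PER_TOKEN

# 180 chars/s / 60 chars/sentence = about 3 sentences per second
sentences_per_second = chars_per_second / CHARS_PER_SENTENCE

print(f"{chars_per_second} chars/s, roughly {sentences_per_second:.0f} sentences/s")
```

With those assumptions, the math works out to the "roughly three sentences per second" the article cites.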
Google has also upgraded the memory in its Pixel 9 devices -- to 12 or 16 gigabytes, depending on the model -- so generative AI works quickly and the phone will be able to keep up with future advances. At least until the next big thing comes along. Like the Pixel 9 family, the new Pixel Buds Pro 2 earbuds come with a processor called the Tensor A1 chip, which also powers AI functionality. You can think of the earbuds as another audio interface for Gemini -- but it's one without a screen. You can ask for information from your email, as well as for directions, reminders and song recommendations, but you won't be able to take photos and ask questions. To have a conversation with Gemini Live while wearing the Pixel Buds Pro 2, you first say, "Hey, Google, let's talk live." There's one caveat: You'll need a Google One AI Premium subscription first. This $20-a-month plan provides access to Google's latest AI models, as well as to Gemini in Google properties like Gmail and Docs, along with 2TB of storage. Google is offering a free 12-month subscription to the Google One AI Premium plan to anyone who buys the Pixel 9 Pro, 9 Pro XL or 9 Pro Fold now. "I found myself asking Gemini different types of questions than I do with my phone in front of me. My questions are a lot more open-ended," said Sandeep Waraich, product management lead for Google Wearables, of using Gemini Live on the Pixel Buds Pro 2. "There are more walks and talks, longer sessions that are far more contemplative than not." That may be true, but as my CNET colleague David Carnoy pointed out, it looks like you're wearing Mentos candies in your ears while you ask those different questions.
[3]
Google Touts Pixel 9's Gemini AI Capabilities That Don't Require 'Hand-off to AI Providers You May Not Know or Trust'
When introducing the new Pixel 9 models today, Google didn't miss an opportunity to denigrate Apple Intelligence. All of the new Pixel 9 devices are integrated with Google's AI technology, Gemini, and Google introduced a host of new AI features. Gemini, Google says, is deeply integrated with Google apps and Android and can handle complex queries without a hand-off to third-party AI providers "you may not know or trust." Apple, by contrast, will use ChatGPT for things like generating text from scratch, something that Apple Intelligence doesn't do. Apple did add Writing Tools, but it is limited to rewriting content that you've already written to change the tone or style. ChatGPT will also be able to generate images that Image Playground cannot, such as photorealistic content, and it will handle requests like generating meal recipes. When a user asks Siri something that Siri cannot handle, the personal assistant will ask for permission to hand it over to ChatGPT. ChatGPT integration is entirely optional, and OpenAI is partnering with Apple to bring it to iPhone users for free. Rumors suggest that Apple also plans to add Gemini integration to the iPhone, allowing iPhone users to choose their preferred AI service. It's not quite clear whether Apple Intelligence is incapable of handling full text generation and photorealistic image creation, or whether these are capabilities that Apple feels are a little too questionable to touch given the sentiment about AI-created content. Google's new Pixel devices have several interesting AI features, including Pixel Studio, a new image generator that uses an on-device diffusion model. It will generate stickers and images using text prompts, similar to Image Playground. AI will be able to edit photos to add objects that weren't originally in the image, plus Google plans to use it for custom weather reports. 
The camera app has an option to merge two photos together so that the person taking a group photo can be in the picture, and it can catalog and remember information from screenshots. There's also a feature called Call Notes, which mimics Apple's call recording feature by recording and summarizing phone calls. Google has also added Gemini Live to the Pixel 9 devices and accompanying earbuds, with Gemini Live allowing users to have full free-flowing conversations with Gemini.
[4]
Google's Pixel 9 Event Proves the Smartphone Race Has Entered a New Era
During Google's Aug. 13 event, the company flaunted how its Gemini AI assistant works on phones made by Samsung and Motorola, two companies that Google partners with but also competes against. That may not seem like a big deal. After all, Google is the purveyor of Android, so why shouldn't it feature prominent Android phones in its presentation? But Made by Google, as the name implies, is an event for exactly that: Devices made by Google. The search giant introduced the Made by Google branding eight years ago with the launch of its first Pixel phones as part of an effort to establish itself as not just the proprietor of Android, but also a hardware company to be taken seriously. Back then, it sought to prove that the premium phone market didn't have to be just a two-horse race between Apple and Samsung (although even today, it still largely is). Google's decision to feature phones from Samsung and Motorola -- and emphasize its Gemini assistant -- says a lot about the company's direction, and perhaps the state of the smartphone industry. It's a sign we're in a new era of the smartphone wars that's less about outselling rival phone makers and more about establishing Gemini as the best mobile assistant -- and Android as the best platform for using that mobile assistant. The Android versus iPhone rivalry is a tale as old as the smartphone itself. What's different now, however, is the execution of that rivalry. Instead of racing to create the most sophisticated cameras, sleekest designs and fastest processors, the next phase of the smartphone wars is shaping up to be all about who has the superior mobile assistant. Read more: Why Gemini Isn't Just the Google Assistant 2.0 It all started with ChatGPT, back in late 2022. 
OpenAI's generative-AI based chatbot exploded in popularity seemingly overnight, gaining an estimated 100 million monthly active users in just two months, according to a study from UBS reported by Reuters in February 2023. The report called it the fastest-growing consumer app in history. Virtual assistants and chatbots are nothing new. Even modern digital helpers like the Google Assistant, Apple's Siri and Amazon's Alexa have existed for the last decade. But ChatGPT's ability to answer complex questions in a conversational and convincing (but not always accurate) manner struck a nerve. It started conversations about whether chatbots and generative AI models -- i.e. AI that can create content in response to prompts -- could fundamentally change the way we use the internet and access information. Ever since then, just about all the major tech companies -- from Google to Samsung, Microsoft and more recently Apple -- have been incorporating generative AI into their most important products. Google I/O, the company's annual developer conference, was a huge showcase for how the firm is infusing more AI smarts into its most popular applications, such as Search and Gmail. Gemini and the bigger role it's playing in Android was, of course, the star of the show. Smartphones are the places where we experience these apps and services the most. So on one hand, it's not surprising that generative AI and virtual helpers are a major part of Google's Pixel efforts. Generative AI is only expected to play a bigger role in smartphones moving forward, with the International Data Corporation predicting that shipments of gen AI phones will grow 364% year over year in 2024. But for an event that's usually about hardware, Google's Gemini-heavy keynote still felt like a departure from its usual approach. 
Google spent the beginning portion of the event showcasing how Gemini can walk you through math problems and the way Gemini Live (the more conversational paid version) can even understand you when you change your line of thought. The first half of the presentation almost felt like Google I/O part two rather than a traditional product launch event. Google's fresh crop of phones, the Pixel 9, 9 Pro, 9 Pro XL and 9 Pro Fold, are also loaded with new AI tools for changing the content of photos based on prompts, generating images and searching for bits of information in screenshots by typing a query the way you would in a search engine. In other words, it's the AI software that's truly the star of the show with these new phones. Perhaps that's the result of Google's new direction. The company recently combined its hardware and software consumer product divisions including Pixel, Android, Chrome and more under one umbrella called Platforms and Devices, as The Verge reported in April. Google and Samsung aren't wasting any time collaborating on what the next stage of the smartphone should look like. While they each have their own separate competing Pixel and Galaxy phones with their own AI features, they've been seemingly working more closely than ever on new features and products. Rick Osterloh, who leads Google's Platforms and Devices team, even made an appearance at Samsung's most recent Unpacked event in Paris to talk about how the two companies are working together. Some of Samsung's Galaxy AI features -- i.e. the suite of AI-powered software tools for its Galaxy phones -- are powered by Google's models. And Samsung's Galaxy Z Fold 6 can run Google's Gemini helper in split view. Google and Samsung have been partners for years, but their relationship has seemingly deepened recently. The two companies worked together on Google's current Wear OS software for smartwatches, which it reintroduced in 2021. Now, it sounds like they're doing the same for mixed reality. 
That's a departure from the early 2010s, when Samsung was pushing its own Tizen software. The partnership is a smart one. You can't have Android without Google, but Samsung is the top smartphone maker when it comes to global shipments, according to the IDC. So it makes sense that they would team up on whatever the next evolution of the smartphone looks like -- especially to take on competitors like Apple and OpenAI, which partners with both Microsoft and Apple. These changes also come at a time when new smartphones are lacking the wow-factor they had a decade ago. You could argue that today's phones come with relatively minor year-over-year upgrades, and Verizon's CEO even told CNBC that subscribers are now holding onto their phones for three years. With foldable phones remaining a niche and expensive option, companies like Google are looking to AI to get people excited about new phones again. And of course, to sell more devices. For Google, Gemini is clearly going to be a big part of that mission.
[5]
Google is making Gemini AI part of everything you do with your smartphone - here's how
Google showed off a lot of impressive hardware at the Made by Google event this year, with new Pixel smartphones, earbuds, and smartwatches. But the company's Gemini AI model was arguably the real star, playing a central or supporting role in nearly every unveiled feature. We've put together the most notable, interesting, and quirky ways Gemini will be a part of Google's mobile future. The most up-front appearance of Gemini came in the form of Gemini Live, which, as the name implies, breathes life into the AI assistant and lets it act much more human. Not only can you talk casually to Gemini without needing formal commands, but you can even interrupt its response and divert the conversation without having to start over. Plus, with ten new voice options and a better speech engine, Gemini Live is closer to a phone call with a friend or personal assistant than its more robotic forebears. Screenshots is a banal name, but the app is a significant element of the new Pixel 9 smartphone series. The native mobile app uses the Gemini Nano AI model built into the phone to automatically turn your screenshots into a searchable database. The AI can essentially process an image the way a human would. For example, say you take a picture of a sign with the details of an event. When you open that picture, Gemini will include options to put the event into your calendar, map a route to its location, or even open a webpage listed on the sign. And the AI will enhance the more common searches, like looking for pictures of a spotted dog or brick building. Google is using Gemini and its new smartphones to try to get an edge in the fast-growing AI image generation market with the Pixel Studio app. This text-to-image engine pairs the on-device Gemini Nano model with cloud models like Imagen 3 to create images faster than the standard web portals. The app itself includes a menu for changing the style as well. The biggest caveat is that it won't make human faces. 
Google didn't say whether that's down to the controversy over its AI image generation earlier this year, but it may just be erring on the side of caution. Another image-based AI feature Google announced is almost the inverse of the face-shy Pixel Studio. Add Me uses AI to create a (mostly) seamless group photo that includes the person taking the photo. All it takes is for the photographer to switch out with someone else. Then, the AI will guide the new photographer on how to set up a second shot and composite the two images into a single image with everyone there. The Pixel Weather app is arguably both the least necessary use of Gemini's advanced AI and the one likely to be used most often. The Gemini Nano AI model will produce customized weather reports fitting what the user wants to see in the app. It simplifies the customization in subtle but very real ways. There were plenty of other smaller AI highlights throughout the presentation as well. For instance, Android users can overlay Gemini on their screens and ask questions about what's visible. At the same time, the new Research with Gemini tool will tailor research reports to specific questions, probably mostly in academic settings. Other examples aren't out just yet, but Android phones will soon be able to share what they find using the Circle to Search feature.
[6]
What's actually new in Google Gemini AI?
Consumers and investors are sick of AI hype, and Google knows it. "There have been so many promises, so many 'coming soon's, and not enough real-world helpfulness when it comes to AI," Google senior VP Rick Osterloh said at the "Made by Google" event that unveiled new Pixel phones in Mountain View Tuesday. "Which is why today, we're getting real ... we're going to answer the biggest question people have about AI, what can AI do for me?" OK, so did Google live up to that promise? When you strip the keynote of all the bells and whistles -- the celebrity appearances, the jargon about "Tensor Processing Units," the Pixel phone tech specs, the visions of what Gemini might be able to do in the long run -- what was new about the Android experience here? And does any of it qualify as a must-have killer app? Here's a complete list of everything the 90-minute event offered in actual functioning demonstrations. Real-world helpfulness, in other words, as opposed to ads or promises. Poor Dave Citron. In the keynote's most awkward moment, this Google product lead had to invoke "the demo spirits" and switch phones before Gemini would actually display an answer to "check my calendar and see if I'm free when she's coming to San Francisco this year" ("she" being the artist Sabrina Carpenter; Citron had just sent Gemini a photo of her concert poster). "Sabrina Carpenter is coming to San Francisco on November 9, 2024," Gemini eventually responded. "I don't see any events on your calendar during that time." AI reading the text in an image and understanding the context isn't new. The calendar add is, and that's to Google's advantage. In theory, Apple Intelligence will do the same thing when it debuts. Citron's next demos showed how Gemini could draft a letter to a landlord about a broken AC unit, or a professor about a class -- well-trod ground for all AI assistants. Next up, Google VP Jenny Blackburn showed off the Gemini Live voice assistant. 
They had a chat about science experiments her niece and nephew might like, and after some back-and-forth, settled on making invisible ink. The discussion had a conversational flow. All well and good, except that OpenAI demonstrated its GPT-4o voice assistant, with similarly interruptible conversations, back in May. That feature is currently live for a small group of ChatGPT Plus users, but not all. So Google got there first, we guess? Here's a feature that may be less creepy than it sounds: Call Notes, which "follows up on your phone calls with a completely private summary of the conversation." But don't worry, because it's using Gemini Nano, an AI service that runs entirely on the Pixel 9 phone without requiring cloud access. (The on-device part is not new; Samsung does the same with Galaxy AI.) Score one more success for Gemini Nano on what we're calling the most useful AI feature of 2024. But after that, we got a lot of visual stuff we've seen AI assistants do a dozen times before. To wit: creating a party invite in Pixel Studio, auto-framing in Magic Editor, adding generative AI imagery to your photos, inserting yourself into a family photo or a picture with a celebrity (the new and embarrassingly named "Add Me" feature). Plus stuff that was cute but not AI at all (the "Made You Look" feature that will point your child's attention at the Pixel's rear-facing screen). So, will this feature set be enough to reverse the skepticism that has set in around the AI bubble? Don't count on Gemini to answer that one any time soon.
[7]
Google's Next Gemini Move: An AI Agent That Works Your Apps for You
Google's vision for the future of AI assistants will become a reality via its conversational chatbot interface Gemini Live in the next few months. That was a revelation at the tail end of its Made by Google event in Mountain View, California, on Tuesday, during which the company also showed off its new Pixel 9 phones (including the Pixel 9 Pro Fold), the Pixel Watch 3 and the Pixel Buds Pro 2. Rick Osterloh, senior vice president of platforms and devices at Google, said its next AI assistant, an AI agent known as Project Astra, will bring contextual understanding about where we are and what we're doing to Gemini Live via our phone cameras. While Project Astra sounds like a top-secret mission from NASA, it's actually a prototype from Google's AI research laboratory DeepMind. It extends the concept of an AI assistant from merely a question answerer to what's known as an agent, which can take action on our behalf, like checking dates on a calendar or messaging a friend. All with our permission, of course. The idea is that once we have AI agents, we won't have to open other apps -- we can simply talk to Project Astra (or a similar agent) while it pulls in necessary information from elsewhere on our devices. It's a big opportunity for Google and its competitors as AI and search converge and the way we access information changes. 
And while Google may win the prize for most futuristic sci-fi moniker, consumer loyalty to an AI agent is still very much up for grabs. There's one small catch with the upcoming integration: Gemini Live, and therefore Project Astra, are available only to Gemini Advanced subscribers, who pay $20 per month for access to Google's latest AI model, Gemini 1.5 Pro. If you fall into that camp, you'll soon be able to share your camera during a conversation with Gemini to ask questions about what's in front of you, whether it's a calculus problem you don't know how to solve or furniture you're struggling to assemble. Gemini Live will also be able to pull in information from apps like Google Calendar and Gmail to help answer your questions and share information without leaving the Gemini Live interface, Osterloh said. We've seen similar functionality from AI startup OpenAI. At its Spring Update in May, OpenAI introduced conversational interactions with its ChatGPT chatbot, as well as the ability to share photos, videos and documents to help inform those conversations. The voice functionality, known as Advanced Voice Mode, went live earlier this month for a small group of testers. Both Project Astra and Gemini Live were introduced at the Google I/O developer event, which was also in May. "We're evolving Gemini to be even more agentive, to tackle complex problems with advanced reasoning, planning and memory, so you'll be able to think multiple steps ahead, and Gemini will get things done on your behalf, under your supervision," Osterloh said as Made By Google wrapped up. "That's the promise of a true AI assistant."
[8]
Google brings more Gemini AI features to Android
Today, Google said it will add more Gemini AI features to Android smartphones - though what's said to be the best of that functionality will be exclusive to its new Pixel 9 line of handhelds. The Chocolate Factory unveiled this latest stuff at its Made By Google event, which unsurprisingly focused largely on the Android ecosystem and machine learning. The web titan said its generative Gemini assistant for Android has been "completely rebuilt," and now incorporates the Gemini 1.5 Flash model, all in the hope of making the thing better. You know the drill with this kind of software: You interact with it in natural language, ask it things, it answers. Only a few of these new features intended for all Android phones - ones that run Android 10 or newer - are available today. The Gemini overlay feature can now take a screenshot of an app and then answer questions about it; users can also ask it to generate images and then drop them immediately into Gmail and Google Messages. These are the prime examples of the "deep integrations" Google boasts of with Gemini and Android. While this humble vulture was able to get both of these features to work just fine, personally, I have to wonder why I need Gemini to describe what I'm looking at. There's always the accessibility side, for sure, which Google has also boosted with additional functionality, described here. Perhaps there are just a lot of people out there baffled by the apps they use or the pages they read who need a hand; maybe they'll find this feature helps them extract info from their screens (there's also text search support in the screenshot-taking app). This feature also covers YouTube, where again, if you're not sure what you're looking at, the assistant might be able to help you pull out certain information. "If you're using YouTube, ask questions about what you're watching," Google veep Sissie Hsiao suggested. 
"Let's say you're preparing for a trip abroad and have just watched a travel vlog -- tap 'Ask about this video' and ask for a list of all the restaurants mentioned in the video -- and for Gemini to add them to Google Maps." A far more interesting feature is Gemini Live, which is supposed to be a voice-based assistant you can have a real conversation with, even to the point where interrupting it is okay. The AI pal is supposed to come out today for Gemini Advanced subscribers (a $20 monthly plan), though at least on my Samsung Galaxy S23, it's not yet available. Hopefully it'll be real this time. Further additions to Gemini will apparently allow users to ask the AI to dig through your emails and find what you're looking for, perhaps some ingredients for a recipe someone sent you, the ad biz proposes. Users will also be able to ask Gemini to generate playlists (with real songs, to be clear) and handle calendar stuff. Notes from phone calls can be automatically generated by on-device artificial intelligence. "If you need information like an appointment time, an important address or a phone number to call back, turn on Call Notes and all the details and transcript will be available in the call log," says Google. "To protect privacy, Call Notes runs fully on-device and everyone on the call will be notified if you have activated the feature." An "add me" function allows you to take two photos and merge them so that everyone in the pictures can be seen together, including the person taking the shot if someone swaps over, the goal being that no one gets missed out. That's among other AI-based functionality seemingly available for Pixel 9 devices. If you want to find out more, see the pages we've linked to or scrub through the full event below. 
Youtube Video

The Chocolate Factory promises it'll treat users' private data with respect, saying that "only Gemini can do all of this with a secure, all-in-one approach that doesn't require hand-off to a third-party AI provider you may not know or trust." However, Google hasn't exactly got off to a good start with Gemini and privacy concerns.

There's not that much on offer for the entire Android ecosystem so far; those who want much more will have to get one of the four Pixel 9 models that launch from August 22. Google promises Pixel 9 owners some fairly robust features: a text-to-image generator, a search engine for saved images, a weather app (but with AI!), comprehensive and easy photo editing, and higher-quality calls. All of this runs locally on the Pixel 9's new Tensor G4 chip through the multimodal Gemini Nano model.

That these are Pixel-exclusive features might be partially down to the fact that not all phones have the horsepower to run AI workloads on-device; the G4 is claimed to be more powerful than earlier iterations. But part of it is undoubtedly Google wanting its own killer AI features, and it also gets to say the Pixel 9 is more private than other Android phones thanks to its on-device AI capabilities.

The Pixel 9 lineup and its unique AI features are seemingly Google's answer to Apple Intelligence, which launched in beta with many similar features just a couple of weeks ago. It's up in the air which company's AI is better, but Cupertino does have a monopoly on AI-generated emojis.

The Pixel 9 doesn't come cheap, with the base model (the regular Pixel 9) starting at $799 with 128GB and 256GB storage options; a 6.3in OLED screen; a 4,700mAh battery; seven years of software updates; and a Tensor G4 processor with 12GB of RAM.
Google also offers three Pro models: the $999 Pixel 9 Pro (as the regular model but with 128GB, 256GB, 512GB, or 1TB of storage; 16GB of RAM; an LTPO OLED screen), the $1,099 Pixel 9 Pro XL (as the Pro but with a 5,060mAh battery and a 6.8in LTPO OLED screen), and the $1,799 Pixel 9 Pro Fold (a second-generation foldable with an 8in screen, coming in September). As an added bonus, Pixel 9 owners get Gemini Advanced for a year, which would normally cost $240.

Interestingly, Google is adding what's dubbed satellite SOS support to Pixel 9 devices this year in the US on Android 15, when that arrives, and possibly to other countries later. For America, users will get two years of service for free. The idea, like what Apple added to its phones, is that if you're in an emergency and can't get any regular cellular or Wi-Fi signal, your handheld can attempt to use satellite connectivity to raise the alarm with responders and select contacts for you. Handy if you're lost in the middle of nowhere.

Google additionally announced the Pixel Watch 3, which starts at $349.99. It too, of course, has AI features, such as automatically setting a bedtime mode based on sleeping habits, providing recommendations on running better, and detecting when a user's heart has suddenly stopped, which can trigger an automatic call to 911. Hopefully that last feature works correctly, because it would be really awkward to have to tell paramedics that you're not dead. ®
[9]
Google unveils new AI-powered phones, new chatbot capabilities
STORY: Alphabet's Google showed off its new Pixel phones with deeper AI features, as well as live demos of its chatbot Gemini, on Tuesday.

One Pixel-only feature lets users search for information stored in screenshots. Android users can also now pull up Gemini as an overlay on top of another app to answer questions or generate content.

Rick Osterloh, who oversees Google's Android, Chrome, and hardware businesses: "With your permission, it can offer unparalleled, personalized help, accessing relevant information across your Gmail inbox, your Google Calendar, and more."

The new Pixel 9 series phones have been redesigned with newer chips and cameras, and the base model starts at $799, up $100 from the last generation.

Meanwhile, Brian Rakowski, a top executive in the Pixel division, introduced the newest folding version of Google's phones: "It's the largest display in a phone, and it's 80% brighter than the first generation. This big screen gives you so much more surface area for entertainment, productivity, multitasking, and content creation."

Google also announced new versions of its smartwatch and wireless earbuds. The event breaks with Google's tradition of announcing new versions of its gadgets in the fall. That timing is its latest bid to keep up with rivals in injecting AI features into its consumer-facing products.

:: File

It also comes ahead of Apple's planned launch of a new iPhone in September. In June, Apple said its latest devices would get upgrades that include "Apple Intelligence," a slew of generative AI-powered features within native apps, and an integration with ChatGPT.
[10]
Google launches enhanced Pixel phones in bid to leverage AI tech
MOUNTAIN VIEW, California - Alphabet's Google on Tuesday unveiled a lineup of new Pixel smartphones with deeper integrations of its artificial intelligence technology as it races to incorporate AI into its hardware.

The upgrades include a Pixel-only feature that lets users search for information stored in screenshots. Android users can also now pull up Gemini, Google's chatbot, as an overlay on top of another app to answer questions or generate content.

"There have been so many promises, so many coming-soons, and not enough real-world helpfulness when it comes to AI, which is why today we're getting real," said Rick Osterloh, Google's senior vice president of devices and services. "We're fully in the Gemini era," he told engineers, executives, analysts and media attending the bigger-than-usual event at Alphabet's Bay View campus in Mountain View, California.

The event bucked another tradition: the latest versions of its Pixel smartphones were announced in the summer rather than in autumn, as Google had done with every iteration of the device since its launch in 2016.

"I've been to a lot of Google events and not only was this one of the most elaborate, but it was one of the most complete," said Avi Greengart, lead analyst at Techsponential. He said Google demonstrated that it was at the forefront of AI.

The earlier timing of the event is Google's latest bid to keep up with rivals in injecting AI features into its consumer-facing products and comes ahead of Apple's planned launch of a new iPhone in September. In June, Apple announced that devices including its latest version of iPhones would get upgrades that include "Apple Intelligence," a slew of generative AI-powered features within native applications, and an integration with ChatGPT, the chatbot developed by Microsoft-backed OpenAI.
Google employees showcased several live demos of new Gemini functions, such as a voice conversation feature, though an attempt to use Gemini to cross-reference a picture of a concert poster with the calendar app took three tries and two devices to run successfully.

Pixel 9, the base 6.3-inch display model, will retail at a starting price of $799, which is $100 more than the previous model. This and the 6.8-inch Pixel 9 Pro XL will begin shipping later in August, a company spokesperson said. The Pixel 9 Pro, which comes with added features like a better camera, and the foldable Pixel 9 Pro Fold will ship in September. The new gadgets are available to preorder on Tuesday.

'MANAGE MY LIFE BETTER'

"The two things that (consumers are) looking at AI to do right now is organization -- and that's across communications, across calendaring, basically manage my life better than I can -- and then the other thing is content creation," said IDC analyst Linn Huang. "I think Google nailed both."

Google holds less than 1% market share in global smartphone shipments as of the second quarter of 2024, according to IDC. It trails far behind Samsung's market share of 18.9% and Apple's market share of 15.8%, in part because Google has entered fewer markets and is focused on higher-end price segments. In the United States, Google's 4.5% share makes it the fourth-biggest smartphone maker.

The Pixel line has also enabled Google to show off advances and spur the developer ecosystem around its Android operating system, which is used by device manufacturers like Samsung. Android, globally, is installed on more than 80% of smartphones.

Android represents one of several frontlines where Google is battling competitors to embed AI in ways that consumers will use. In May, it debuted a swath of upgrades to core products like its search engine. The company's engineers redesigned the Pixel's exterior and included camera upgrades as well as Google's new Tensor G4 chip.
Google announced new versions of its smartwatch, the Pixel Watch 3, and Pixel Buds Pro 2 wireless earbuds on Tuesday as well. Google also added a "Loss of Pulse" feature to the new Pixel Watch. The feature uses algorithms to determine whether a user's heart has stopped and can contact emergency services. The feature will be available in the United Kingdom and the European Union. Also on Tuesday, Google and Peloton, the fitness company known for its stationary bike, announced a content partnership in which subscribers to Google's Fitbit Premium service would gain access to a library of Peloton's training classes. (Reporting by Kenrick Cai and Max A. Cherney in Mountain View, California; Editing by Sayantani Ghosh and Matthew Lewis)
[11]
Google's new phones aren't really about the phones
Google has been in the smartphone business for a long while now -- with seemingly little to show for it. But even the company that powers 91% of the world's internet searches needs a little extra help sometimes.

At an event Tuesday afternoon, Google unveiled the ninth generation of its Pixel smartphone line. These are the phones that Google designs fully in-house, which typically draw high praise from reviewers but generate little in the way of actual sales. Google's Pixel phones accounted for just under 1% of global smartphone shipments last year and the first half of this year, according to data from Counterpoint Research.

But the new phones are giving Google and its parent, Alphabet, a chance to lean even harder into generative artificial intelligence. New AI features, such as an image-generating app called Pixel Studio and another app that scans screenshots for content, took up the majority of time at Tuesday's event -- far more than any hardware design elements. And some new features won't even be exclusive to the Pixel, as Google wants its AI tools widely dispersed across other phones running on its Android mobile operating system. Gemini Live -- a conversational AI chatbot powered by voice -- is rolling out to all compatible Android phones in English on Tuesday, and Google said a version for Apple's iOS devices is also coming soon.

The catch? Gemini Live and other AI features are only available to subscribers of Google's Gemini Advanced plan. That plan costs $20 a month, though Google is throwing in a year of free access to buyers of its higher-end Pixel Pro, XL and Fold devices. That might seem steep -- especially since Android users are already accustomed to having Google's well-known digital assistant free of charge. But Google has actually proven more adept at upselling once-free services than it has at selling premium smartphones.
Non-advertising revenue from YouTube -- much of which comes from viewers wishing to avoid ads -- totaled $11.9 billion last year, which is 20% higher than Google's total hardware revenue for the year, according to consensus estimates from Visible Alpha. YouTube's non-ad revenue has also averaged 52% annual growth over the past four years, compared with 16% for all Google hardware, according to those same estimates.

Upselling AI on a mobile device is still no sure thing, though, particularly given some high-profile stumbles by Google over the past 18 months as it has raced to stay competitive with Microsoft and its anointed AI partner, OpenAI. And Google now faces the added challenge of competing with Apple's generative AI debut. Apple Intelligence -- the iPhone maker's moniker for a set of new AI tools designed for its devices -- is set to launch this fall.

But while Google may be a bit player in hardware, its products like search, Gmail and Android are effectively the world's largest distribution network for new technologies such as generative AI. And its Pixel event Tuesday proved a bit of a flex in that regard, coming a month ahead of when Apple typically introduces its new iPhones. Rick Osterloh, who runs Google's device business, emphasized repeatedly Tuesday that the AI services shown at the event are ready for launch -- another subtle dig at the more gradual rollout expected for Apple Intelligence.

Google's little phone business is clearly still looking to punch above its weight.
[12]
AI overshadowed Pixel at the Pixel event
"A few months ago at Google I/O, we shared a broad range of breakthroughs to make AI more helpful for everyone. We're obsessed with the idea that AI can make life easier and more productive for people. It can help us learn. It can help us express ourselves. And it can help us be more creative. The most important place to get this right is in the devices we carry with us every day. So we're going to share Google's progress in bringing cutting-edge AI to mobile in a way that benefits the entire Android ecosystem."

For the first 25 minutes of the show, Osterloh and his colleagues didn't make any announcements about the Pixel 9 lineup, the Pixel Watch 3, or the Pixel Buds Pro 2. Instead, they highlighted things like Google's investments in its tech stack and Tensor chips, how all six of its products with more than 2 billion monthly users (Search, Gmail, Android, Chrome, YouTube, and Google Play) harness the company's Gemini AI models in some way, and how Gemini and Google's AI tools are integrated with other Android phones that you can already buy. Even before showing demos on its own phones, Google was showing its AI tools onstage on phones from Samsung and Motorola.
Google unveils Gemini AI integration across its ecosystem, challenging Apple's AI efforts. The Pixel 9 and Pixel Buds Pro 2 showcase advanced AI capabilities, signaling a new era in smartphone technology.
Google has made a bold move in the artificial intelligence arena with the introduction of Gemini AI, integrated across its ecosystem of devices and services. This strategic deployment of AI technology appears to be a direct challenge to Apple's rumored AI advancements, potentially stealing the thunder from the Cupertino-based company's upcoming announcements [1].
At the core of Google's latest offerings is Gemini AI, a sophisticated artificial intelligence system that promises to revolutionize user interactions with technology. The Pixel 9 smartphone and Pixel Buds Pro 2 are set to showcase Gemini's capabilities, offering enhanced features and functionality powered by on-device AI processing [2].
In a move that has raised eyebrows in the tech community, Google's presentation included what appeared to be a veiled criticism of Apple's Siri. By highlighting Gemini's advanced conversational abilities, Google seems to be positioning its AI as superior to existing voice assistants [3].
The integration of Gemini AI into Google's hardware lineup signals a shift in the smartphone industry. The focus is no longer solely on hardware specifications; the emphasis is now on AI-driven experiences that can adapt and improve over time. This approach is set to redefine what users can expect from their mobile devices [4].
Google's strategy involves incorporating Gemini AI into various aspects of smartphone usage.
As Google pushes forward with its AI-first approach, the tech industry is watching closely. The success of Gemini AI could potentially reshape the competitive landscape, putting pressure on other major players like Apple and Samsung to accelerate their own AI developments. The coming months will be crucial in determining whether Google's gambit pays off and if consumers embrace this new era of AI-integrated smartphones.