6 Sources
[1]
Google Thinks AI Can Make You a Better Photographer: I Dive Into the Pixel 10 Cameras
If a company releases new phone models but doesn't change the cameras, would anyone pay attention? Fortunately that's not the case with Google's new Pixel 10, Pixel 10 Pro and Pixel 10 Pro Fold phones, which make a few advancements in the hardware -- hello, telephoto camera on the base-level Pixel for the first time -- and also in the software that runs it all, with generative AI playing an even bigger role than it has before. "This is the first year where not only are we able to achieve some image quality superlatives," Isaac Reynolds, group product manager for the Pixel cameras, told CNET, "but we're actually able to make you a better photographer, because generative AI and large models can do things and understand levels of context that no technology before could achieve."

Modern smartphone cameras must be more than glass and sensors, because they have to compensate for the physical limitations of that same glass and those sensors. You can't expect a tiny phone camera to perform as well as a large glass lens on a traditional camera, and yet the photos coming out of the Pixel 10 models surpass their optical abilities. In a call that covered a lot of photographic ground, Reynolds shared with me details about new features as well as issues of how we can trust images when AI -- in Google's own tools, even -- is so prevalent.

The new Pro Res Zoom feature is likely to get the most attention because it strives for something exceptionally difficult in smartphones: long-range zoom that isn't a fuzzy mess of pixels. You see this all the time: Someone on their phone spreads two fingers against the screen to make a distant object larger in the frame. Photographers die a little each time that happens because, by not sticking to the main zoom levels -- 1x, 2x, 5x and so on -- the person is relying on digital zoom; the camera app is making pixels larger and then using software to try to clean up the result. Digital zoom is certainly better than it once was, but each time it's used, the person sacrifices image quality for more zoom in the moment.

Google's Super Res Zoom feature, introduced with the Pixel 3, interpolates and sharpens the image up to a 30x zoom level on the Pixel 10 Pros (and up to 20x zoom on the Pixel 10 and Pixel 10 Pro Fold). The new Pro Res Zoom on the Pixel 10 Pro pushes way beyond that to 100x zoom -- with a significant lift from AI. Past 30x, Pro Res Zoom uses generative AI to refine and rebuild areas of the image based on the underlying pixels captured by the camera sensor. It's similar to the technology that Magic Editor uses when you move an object to another area in the image, or type a prompt to add things that weren't there in the first place. Only in this case, the Pixel Camera app creates a generative AI version of what you captured to give the image crisp lines and features. All the processing is performed on-device.

Reynolds explained that one of the factors driving the creation of Pro Res Zoom was the environments where people are taking photos. "They're taking pictures in the same levels of low light -- dinners did not get darker since we launched Night Sight," he said. "But what is changing is how much people zoom, [and] because the tech is getting so much better, we took this opportunity to reset and refocus the program on incredible zoom quality."

Pro Res Zoom works best on static scenes such as buildings, skylines, foliage and the like -- things that don't move. It won't try to reconstruct faces or people, since generative AI can often make them stand out more as being artificially manipulated. The generated image is saved alongside the image captured by the camera sensor so you can choose which one looks best.
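To see why photographers wince at pinch-zooming, it helps to look at what plain digital zoom actually does: crop, then enlarge. Here's a minimal sketch using the Pillow library (the file names are placeholders); nothing in it recovers detail, which is exactly the gap Super Res and Pro Res Zoom try to fill.

```python
from PIL import Image  # pip install pillow

def digital_zoom(path: str, factor: float) -> Image.Image:
    """Naive digital zoom: crop the center 1/factor of the frame and
    upscale it back to the original size. Interpolation only smooths
    the enlarged pixels; no new detail is created."""
    img = Image.open(path)
    w, h = img.size
    cw, ch = int(w / factor), int(h / factor)   # crop window size
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch))
    return crop.resize((w, h), Image.LANCZOS)   # upsample: detail is guessed, not captured

# A 10x "zoom" keeps only 1% of the captured pixels:
digital_zoom("scene.jpg", 10).save("zoom_10x.jpg")
```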
What about consistency and accuracy of the AI processing? Generative AI images are built out of pixel noise that is quickly refined based on the input driving them. Visual artifacts have often gone hand-in-six-fingered-hand with generated imagery. But that's a different kind of generative AI, says Reynolds. "When I think of Gen AI in this application, I think of something where the team has spent a couple of years getting it really tuned for exactly our use case, which is image enhancement, image to image." Initially, people inside Google were worried about artifacts, but the result is that "every image you see should be truly authentic to the real photo," he said.

This new feature seems like a natural evolution -- and by "natural," I mean "processor speeds have improved enough to make it happen." The Best Take feature was introduced with the Pixel 8, letting you capture several shots of a person or group of people, and have the phone merge them into one photo where everyone's expressions look good. CNET's Patrick Holland wrote in his review of the Pixel 8, "It's the start of a path where our photography can be even more curated and polished, even if the photos we take don't start out that way."

That path has led to Auto Best Take, which does it automatically -- and not just by grabbing a handful of images to work with. Says Reynolds, "[It] can analyze... I think we're up to 150 individual frames within just a few seconds, and pick the right five or six that are most likely to yield you the perfect photo. And then it runs Best Take." From the photographer's point of view, the phone is doing all the work, though as with Pro Res Zoom, you can also view the handful of shots that went into the final merged image if you're not happy with the result. The shots are full-resolution and fully processed as if you'd snapped them individually. "What's interesting about this is you might actually find in your testing that Auto Best Take doesn't trigger very often, and there's a very particular reason for that," said Reynolds. "Once the camera gets to look at 150 items, it's probably going to find one where everybody was looking at the camera, because if there's even one, it'll pick it up."

Another improvement enabled by the Pixel 10 Pro's Tensor G5 processor is a new high-resolution Portrait mode. To take advantage of the wide camera's 50-megapixel resolution, Reynolds said the Pixel team rebuilt the Portrait mode model so it creates a higher-quality soft-background depth effect, particularly around a subject's hair.

Real Tone, the technology for more accurately representing skin tones, is also incrementally better. As Reynolds explained, Real Tone has progressed from establishing color balances for people versus the other areas of a frame to individual color balances for each person in the image. "That's not just going to mean better consistency shot to shot, it means better consistency scene to scene," he said, "because your color, your [skin] tone, won't depend so strongly on the other things that happened in the image." He also mentioned that a core component of Real Tone has been the ability to scale up image quality testing methods and data collection in the process of bringing the feature's algorithms to market. "What standards are we setting for diversity and equity, inclusion across the entire feature set?" he said. "Real Tone is primarily a mission and a process."
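The jump from one global color balance to a per-person balance is easy to illustrate in miniature. Below is a deliberately crude sketch -- a gray-world white-balance heuristic applied inside each face box instead of across the whole frame. The face boxes and the heuristic are stand-ins I've assumed for illustration; Real Tone's actual models, data and tuning are proprietary and far more sophisticated.

```python
import numpy as np

def gray_world_gains(region: np.ndarray) -> np.ndarray:
    """Per-channel gains that pull a region's average color toward neutral."""
    means = region.reshape(-1, 3).mean(axis=0)
    return means.mean() / means

def balance_per_person(img: np.ndarray, face_boxes) -> np.ndarray:
    """Correct each face region with its own gains, instead of one global
    white balance driven by everything else in the scene."""
    out = img.astype(np.float32)
    for x0, y0, x1, y1 in face_boxes:
        face = out[y0:y1, x0:x1]
        out[y0:y1, x0:x1] = np.clip(face * gray_world_gains(face), 0, 255)
    return out.astype(np.uint8)

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
fixed = balance_per_person(frame, [(100, 80, 220, 240), (400, 90, 520, 260)])
```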
"What standards are we setting for diversity and equity, inclusion across the entire feature set?" he said. "Real Tone is primarily a mission and a process." One other significant photo hardware improvement has nothing to do with the cameras. On the Pixel 10 Pro Fold, the Pixel Camera app takes advantage of the large internal screen by showing the previous photo you captured on the left side of the display. Instead of straining to see details in a tiny thumbnail in the corner of the app, Instant View gives a full-size shot, which is especially helpful when you're taking multiple photos of a person or subject. So far, these new Pixel 10 camera features are incorporated into the moment you capture a photo, but Reynolds also wants to use the phones' cameras to encourage people to become better photographers. Camera Coach is an assistant that you can invoke when you're stuck or looking for new ideas while photographing a scene. It can look at the picture you're trying to take and help you improve it using suggestions such as getting closer to a subject for better framing or moving the camera lower for a more dramatic angle. When you tap a Get Inspired button, the Pixel Camera app looks at the scene and makes suggestions. "Whether you're a beginner and you just need step-by-step instructions to learn how to do it," said Reynolds, "or you're someone like me who needs a little more push on the creativity when sometimes I'm busy or stressed, it helps me think creatively." All of this AI being worked into the photographic process, from Pro Res Zoom to Auto Best Take, invariably brings up the unresolved question of whether the images we're creating are genuine. And in a world that is now awash in AI-generated images that look real enough, people are naturally guarded about the provenance of digital images. For Google, one answer is to label everything. Each image captured by the Pixel 10 cameras or touches Google Photos is tagged with C2PA Content Credentials (Coalition for Content Provenance and Authenticity), even if it's untouched by AI. It's the first smartphone with C2PA built in. "We really wanted to make a big difference in transparency and credibility and teaching people what to expect from AI," said Reynolds. "The reason we are so committed to saving this metadata in every Pixel camera picture is so people can start to be suspicious of pictures without any information." Marking images that have no AI editing is meant to instill trust in them. "The image with an AI label is less malicious than an image without one," said Reynolds. "When you send a picture of someone, they can look at the C2PA in that picture. So we're trying to build this whole network that customers can start to expect to have this information about where a photo came from." Scanning the specs of the Pixel 10 cameras, listed below, you'd rightly notice that they match those found on last year's Pixel 9 models, but a couple of details stand out. For one, having a dedicated telephoto camera is no longer one of the features that separates the entry-level Pixel from the pro models. The Pixel 10 now has its own 10.8 megapixel, f/3.1 telephoto camera with optical image stabilization that offers a 5x optical zoom and up to 20x Super Res Zoom. It's not as good as the 48-megapixel f/2.8 telephoto camera used in the Pixel 10 Pro and Pixel 10 Pro XL (the same one used in the Pixel 9 Pros), but that's not the point. You don't need to give up extra zoom just to buy a more affordable phone. 
Another difference you'll encounter, particularly when recording video, is improved image stabilization. Optical image stabilization is upgraded in all three phones, but the stabilization in the Pixel 10 Pros is significantly improved. Although the sensor and lens share the same specs as the Pixel 9 Pro's, the wide-angle camera in the Pixel 10 Pro models required a redesigned module to accommodate new OIS components. Google says it doubled the range of motion, so the lens physically moves through a wider arc to compensate for movement. Alongside that, the stabilization software has been tuned to make footage smoother.
[2]
I tried every new AI feature on the Google Pixel 10 series - my thoughts as an AI expert
The generative AI explosion means that nearly every phone launch in the past year has been accompanied by its own suite of AI features. Google's launch of its Pixel 10 series is no different, with some new features Google hopes might draw in new users from other devices. At its Made by Google event, Google unveiled the new Pixel 10 Pro and 10 Pro XL, Pixel 10 Pro Fold, and Pixel 10, as well as the Pixel Watch 4 and Pixel Buds 2a. Beyond fun new phone colors, better camera systems, and other hardware upgrades, AI is at the core, powered by the new Google Tensor G5 processor. The chipset, co-designed with Google DeepMind, can run Gemini Nano on the device -- the engine powering all of these new AI experiences. Here's a full round-up of the new AI features and my personal hands-on experiences with each.

While AI tools like chatbots can be helpful, they require you to context-switch to another tab, ask your question, go back to whichever window you were originally working in, and paste the answer in. To upgrade that experience and level up the assistance, AI needs to predict your needs -- which is exactly what Magic Cue aims to do. The Magic Cue feature suggests relevant information and actions based on what you're doing. For example, in one demo, the user was calling an airline, so Magic Cue automatically surfaced the flight details to prevent scrambling to find them while on the call. In another demo, the user got a text asking where the reservation was. Instead of the demoer having to find the details, Magic Cue surfaced the reservation information based on what was in the user's inbox. Then, all the user had to do was tap and send it. Since the feature leverages the Google Tensor G5 chip, information is processed locally on the device.

Sticking with the trend of making information as accessible as possible, the new Daily Hub feature found in your Discover feed puts all of the information you need in one place. It also includes an integration with Magic Cue, which gathers insights from your apps. It can remind you of actions from your Google Keep and Gmail, including dinner plans, reservations, and flight details. In practice, it is very similar to Samsung's At a Glance feature, just native to the Google experience powering Pixel.

One use case where generative AI really excels is language translation. Because LLMs have a deep grasp of language and how people speak -- including conversational and non-linear speech -- speech translation has gotten a lot more accurate. As a result, many smartphone manufacturers, like Apple, are incorporating AI translation into their own products, and Google has now unveiled its own. With Voice Translate, you can hear a translation in real time while on a phone call. The most noteworthy part is that it copies the sound of the speaker's voice, making the new audio sound as free-flowing and natural as possible. There is also a really helpful transcript you can follow to keep track of the dialogue even further. When I demoed the feature, I was pleasantly surprised at how quick it was and, most importantly, at how well it copied my voice to make me sound like I was actually speaking German.
At rollout, Voice Translate works when translating to or from English with Spanish, German, Japanese, French, Hindi, Italian, Portuguese, Swedish, Russian, and Indonesian.

A good chunk of the new AI features are focused on elevating your photo-taking and editing experience. The new Camera Coach feature aims to take the guesswork out of taking a photo, using Gemini's multimodal capabilities to help you snap the perfect shot by suggesting angles, lighting, and camera modes. Beyond step-by-step instructions, Gemini will even generate a sample image of what you should aim for in the final product. For example, in a demo, the user was trying to take a photo of a plant leaf but had the angle off-center. Gemini generated an image of the leaf positioned correctly, so all the demoer had to do was match the visual by adjusting his position. One of the features I was most impressed by was Camera Coach's ability to recognize what was being shown in the display, down to the specific location. For example, Camera Coach asked the demoer what they wanted to be shown in the photo, identifying that it was a waterfront view and that it was Chelsea Piers in New York, alongside the Hudson River.

Producing a good photo is equal parts capturing the right shot and post-production editing to optimize it. However, photo editing can often be tricky (especially if you're inexperienced), as there are so many little tools often used in combination. A new editing feature allows you to describe the change you want using natural language and have the AI make the changes instantaneously. For example, one demo image was very nicely framed, but it had a distracting glare. Instead of having to find the right tools, you can just say that you'd like to remove the glare, and in a couple of seconds, it's done. Other, more general applications include asking it to "make it better," which resulted in actions like straightening the image and improving the lighting.

While the Add Me and Auto Best Take features are not new, Google upgraded them to make them even more helpful. The Add Me feature, which lets people add the photographer to the photo, now works with even bigger groups of people. Meanwhile, Auto Best Take automatically finds the best picture in which everyone looks great, where previously you would have had to manually select which shots you wanted to combine.

Google's AI Pro plan costs $20 per month and packages all of Google's best AI offerings. These include Gemini's best models and standalone tools, with expanded access to Gemini 2.5 Pro, NotebookLM, Deep Research, Veo 3, and Jules.
[3]
My favorite Google AI features from the Pixel 10 launch
The Made By Google event was not only a showcase of Google's latest Pixel hardware, but a launchpad for many new AI features. I'm typically skeptical of the current generation of AI, but as I checked out the new software across various demo sessions, I found myself more and more intrigued. It seems like Google, along with Apple and Samsung, has been working on making these AI-powered updates more helpful in a way that might actually make our lives easier or simply more fun. There wasn't enough time to write up every single one of them, so I've put a few of my favorites in this story to give you a better sense of what to expect when the Pixel 10 series hits retail shelves later this month. Spoiler alert: Many of these have to do with voice and calls -- an area Google has historically excelled at.

I have long been enamored with Google's Recorder app. It started with the on-device transcription that made getting quotes from my interviews easy and relatively secure. But when Apple introduced a multi-track recording function to its Voice Memos app, I quickly jumped ship. While the iOS recorder has inferior transcription in terms of accuracy and readability, the fact that I could basically record a duet with myself seriously appealed to the musical theater geek in me. I played both Elphaba and Glinda, crooning their parts from "For Good" into my iPhone. But when Google's senior director of product management for Pixel software, Shenaz Zack, told me the Pixel 10's Recorder app would add AI-generated music to your singing, I went silent in slight disbelief. I spent much of my youth ripping karaoke tracks from YouTube videos, looking up "minus one" or "backing tracks" or "instrumentals only" on various download platforms. My friends and I were aspiring performers, looking to mix our own covers of popular songs, and a tool that would generate backing music to our voice tracks would have been a dream come true. Honestly, it kind of still is.

Zack walked me through the process twice -- on my first try I sang a verse and part of the chorus of "Golden" from the KPop Demon Hunters soundtrack. I giggled self-consciously at the end, before Zack hit stop. As it recorded, the app actually showed a tag that indicated it knew I was singing, and when we selected the recording after, a chip appeared saying "Create and add music." Tapping that brought up a panel titled "Choose a vibe to create music" with two sections: Featured vibes and Your vibes. Under the first one, the options were "Chill beats," "Cozy," "Dance party," "Rainy day blues," "Romantic" and "Surprise me." On my second attempt, when I rushed through a rendition of the all-time banger "Mary Had a Little Lamb," the app displayed a warning at the bottom that said "The beat might not match well if the recording is short." I chose "Dance party," hit next, and waited a minute or so while Recorder went to work. The animation at the top said the system was analyzing the audio, identifying the rhythm, locking onto the beat and harmonizing the track before delivering the result. I don't quite know what I was expecting, but I can say that those who were at all concerned about digital rights management have nothing to worry about. The music that Google generated for "Golden" sounded nothing like the original, and while it did make my voice sound less lonely and made for a more complete track, I felt like I needed a few more adjustments to feel satisfied with it.
As for "Mary Had a Little Lamb," the result was as generic as expected for an AI-generated soundtrack to a very basic nursery rhyme. To Google's credit, what came out seemed to be in the right key and rhythm, and I certainly will need much more time playing around with this to see if tweaking the settings will help. I also wanted to point out that the generated music also stopped as my singing stopped, so the giggling I mentioned earlier was not scored. Although this feature did not live up to my (admittedly unrealistic) fantasy, I do think it's a fun use of AI and seems harmless. It's not going to be a mainstay of most people's daily routines, although Zack did say that a large percent of people actually used Recorder for singing. This update could certainly make for a nice little dose of musical creativity. I had more concerns around the Voice Translate feature that was supposed to make you or your caller sound like you were speaking in a different language. According to Google, the goal is to "break down language barriers during phone calls." When I asked Zack why the company felt the need to make the voice resemble the caller's, she said it was about personal connection. Zack explained that her parents live in India, and though they speak English, they're not very fluent. That makes for some difficulty when they call Zack's kids. Simply adding a robotic voice that's translating between the grandparents and the children wouldn't feel right, either. I was initially skeptical that fully replacing the caller's original voice with a translated version would help, but after a few demos, I am certainly swayed. To be clear, the person placing the call has to do so from a Pixel phone for Voice Translate to work. Once you choose Voice Translate from the Call Assist submenu, you'll have to choose a language. When the call is connected, the system will say to both parties that the "Call is translated by Google AI in each speaker's voice. Audio is not saved." I tried this out a few times with a Google representative who spoke German, whom we will refer to as "Uncle Tim" to make it easier for me to describe this demo. Each time he spoke, I could hear a couple seconds of his voice in German, before a chime played and the version in the original language became softer. What sounded like a dubbed actor playing Uncle Tim came on and conversed in English, complete with realistic replications of pitch, rhythm and expression. I also could hear feedback when I talked on the call, so I heard myself speaking German on the other end. It was truly strange, because it sort of did sound like me. One of my closest friends lives in Germany, and has had to put up with my attempts to learn German for more than 10 years. I immediately wanted to try Voice Translate on her to see if she would believe I had suddenly become fluent (but of course, I'd have to figure out how to get her to ignore the warnings that Google AI was at work). I'll be honest, the experience wasn't perfect. Not only were the translations sometimes off (some of what Uncle Tim said in English didn't make sense), the generated voices seemed less like a complete replication of the caller and more like a novice dubbing artist. That's not a bad thing, since I was very concerned about impersonation being a problem. To that end, Zack said Google was deliberate about the implementation. She reminded me of the "ducking" that was in place, which is when the original speech is still audible in the first few seconds and then softer throughout. 
And I remembered that while the AI voice might sound sort of like me, it isn't designed to simply make up things I'm saying -- it's just translating the content. I'm the one who decides whether to go off and curse out a relative and have that conveyed in their native tongue, for example. Of course, there may still be bugs and quirks to work out. I was amused by the various accents that came through in the English-speaking version of Uncle Tim. At first he sounded American, but in subsequent conversations he took on an Australian accent. All this is powered by the Pixel 10's Tensor G5 chip and processed on-device using "a new codec and semantic understanding," according to Zack, to understand the speaker's vocal expressions. For now, I see what Google is going for and cannot wait to call my friend in Frankfurt. At launch, Voice Translate will support translating to or from English with Spanish, German, Japanese, French, Hindi, Italian, Portuguese, Swedish, Russian and Indonesian.

The Recorder app, translation and expressive-sounding AI are areas where Google has long proven its expertise. And lest we forget, the company has also been a pioneer in suggesting actions from your emails and adding events to your calendar by scanning your inbox. With the Pixel 10's Magic Cue feature, Google is basically bringing this functionality to your texts and calls. While Magic Cue can helpfully show shortcuts within the Messages app to help you answer questions about reservations or send photos from recent trips, I'm most into one specific aspect. When you call an airline to make changes to a flight, for instance, the Pixel 10 can pull up your reservation information and display it within the call, so you won't have to open your email and search for the booking confirmation to have your reference number ready. Sure, it might only save you seconds, but it's so much easier, and Google already does a version of this in your inbox. I would love to see this particular feature expand to cover other types of appointments so you can quickly get codes or other identifying information during calls to, say, your plumber, doctor, insurance provider and more.

Google continues to improve upon areas it's led the way in, and photography remains a strength of Pixel phones. The company was one of the first major players to use its algorithmic prowess to dramatically improve the quality of low-light photos, and with the Pixel 10 Pro it again uses computational processing to deliver superior images. Pro Res Zoom on the new phone did manage to produce some surprisingly clean pictures of faraway buildings, at least in my demo at Google's Manhattan office. I was impressed by how clear the lines on the underside of a skyscraper looked when we zoomed to the 100x level. Google was also careful to clarify that Pro Res Zoom won't work on people, and that distant text may look odd. "We've tuned Pro Res Zoom to minimize hallucinations, however they may still occur -- especially with faraway text. Additionally, when Pro Res Zoom detects a person in the scene, we use a different enhancement algorithm that prevents inaccurate representations," according to Google. In those situations, the algorithm will drop to Super Res Zoom quality. Depending on which Pixel phone you're using, Super Res Zoom delivers up to either 20x or 30x zoom.
In the results I saw, people standing on a deck at the top of a tower just seemed a bit pixelated compared to the building's facade, and the effect wasn't jarring or even really noticeable until I zoomed in. But that might be because they were a tiny part of the picture -- I imagine things would look different if a person was the main subject in a scene.

As someone who enjoys composing pictures, I didn't think the Camera Coach feature would do anything for me. But I was pleasantly surprised that I actually liked some of the AI's proposed framing options. I still don't think I'll use this much in the real world, but it might help other people who want tips on photography. I was initially nonplussed about the new Photos feature that lets you tell the AI how to edit your pictures, but after a brief demo I came around. Simply telling Gemini to "turn that red dress blue" or "get rid of the people in the background" was not only easier, but surprisingly effective.

I also want to point out that Google made tweaks to the Guided Frame feature in its camera app, which helps those who are blind or visually impaired know what is in the scene. It now uses Gemini models, which should help with object recognition. Finally, it's worth calling out the support for the C2PA content authenticity initiative. Google is building this into the Photos app, where metadata will show whether or not AI was used in a picture. The Pixel 10 phones will be the first to implement the new industry-standard Content Credentials (CR) within their native camera app, and companies like Adobe, Amazon, Google, Meta, Microsoft and OpenAI are all part of the initiative.

Those were just a slice of the new AI-related features that impressed me at my recent demos ahead of Google's event this week. But there are quite a few more I found promising, like visual overlays in Gemini Live and the new Pixel Journal app. I didn't spend as much time with either, but they worked in my brief demos. So did the "take a message" feature that will send transcriptions of voicemails to you, which seems like a much better way to be alerted to a missed call than a hidden section of the Phone app. I'm not yet sold on the Daily Hub, which is basically an updated version of the existing pages that sit to the left of the home page showing relevant actions and articles you might want to explore. I'm fairly intentional when it comes to looking for things to consume, and have specific apps I prefer for doomscrolling (Reddit over everything), so I'm not sure Daily Hub will suit me.

Still, the fact that I liked the bulk of the new AI features coming to the Pixel 10 series is pretty significant. Of course, I will still reserve judgement until I can spend more time with them in the real world, and I hope to write reviews of some of them. But it's clear from my time with demos of the Pixel 10 that Google has been pretty thoughtful about how it imbues its hardware with AI, and I hope its competitors take notes.
[4]
Want to take better photos? Google thinks AI is the answer.
I'm no great photographer, but I still get lots of questions about how to frame up a photo just right. Google's new Pixel 10 phones can help with that too. For years, Google's Pixels have been also-rans in the U.S. smartphone market, where Apple and Samsung continue to dominate. But those who do buy them have been loud in their praise of the cameras, which this year got a potent boost from artificial intelligence. Though maybe not how you'd expect. Earlier Pixel models were more than happy to tweak your images for you. Now, Google is trying to use AI to make you better at capturing and editing those images yourself. The new photography features built into the Pixel 10 ($799+), 10 Pro ($999+) and 10 Pro XL ($1,199+), launching on Aug. 28, are part of Google's big push to put more smarts in smartphones by tapping its AI technology. Your photos might well look better after using them. We'll put them to a proper test soon enough -- for now, here's how Google's more collaborative take on AI-supported photography works.

Maximum zoom

Google's new phones look as sleek as ever but pack the AI equivalent of an enormous telephoto lens. With a lot of help from AI, you can zoom in on a scene by 30x on the company's entry-level Pixel 10, and up to 100x on its slablike Pro models. Like any other phone, when you zoom in beyond a certain point, these Pixel phones mostly see a blurry mess. But Google then cleans it up using an on-device AI model to "recover" texture and detail based on that model's training and what's still visible in the original image. In the end, though, these phones are just painting in details calculated to be a good fit. Put another way, you're getting AI's interpretation of the world -- not necessarily what's really there.

For photos of buildings and landmarks, this might not be so bad: AI has gotten pretty good at replicating regular lines and geometry, and if you're not trying to preserve perfect architectural detail, you may well be happy with the result. But consider this humble can of seltzer we photographed from the far end of a spacious demo room. On the left is what the camera's sensor "sees." Not much of anything. On the right is what Google's AI guessed should be there. The results show the limits of the fake-it-to-photograph-it approach. The branding is mostly right, but the company's algorithms appear to have little idea of what to do with the finer surrounding details.

Features like this aren't completely new. Samsung's Ultra-branded smartphones have had a 100x "Space Zoom" feature for years and have previously drawn criticism for producing images that were at least a little out of step with reality. I've also long wondered about the potential for these enhanced-zoom smartphone cameras to become helpful tools for creeps. Snapping photos of people covertly is a lot easier to do when you're using a phone instead of a camera with a massive lens. Google has an (attempted) fix for this: The camera on Pixel 10 phones largely refuses to enhance people appearing in those hyper-zoomed photos. In photos where, say, a landmark or city street is the clear focal point, Google's AI model will sharpen and enhance everything but the humans milling around. And if you zoom in 100x onto a person's face, the phones simply refuse to help. When I urged a Google employee to take a close-up of my mug from across the room, the phone instead spat out a detailed image of, uh, just my hand.
I can get behind the idea of fewer AI-distorted faces on the internet, though I'm sure lots of people wouldn't want other extremities sneakily photographed at a distance either. This feature deserves some rigorous testing. I just wish the Pixel phone had explained why it happened as I was using the feature.

A better 'Best Take'

My colleague Geoffrey Fowler once called Google's Best Take "a nifty superpower for the family photographer" for the way it stitches together faces from different photos into a single cheery composite with nary a closed eyelid or squinched nose. He's got a point: The results can be pretty and pleasant compared to the originals, where subjects could be blinking or burping or looking away, even though these composite images depict a moment that technically may have never occurred. Google is tweaking that approach with its latest Pixels, making them more faithful to reality. The new phones will still cobble together a smiley amalgam if needed. But if they can see you appear to be setting up for a group shot, they will process extra images captured slightly before and after you even hit the button to snap the photo. If one of those extra shots happens to capture the magical moment with everyone looking just right, you can choose to save that one in lieu of an AI composite.

AI photo coaching

Google's new phones also debut an AI photography feature that kicks in long before you press the shutter button. With a tap of the screen, you can launch a "Camera Coach" tool that scans what the camera can see to give advice on how to frame an image. Once that momentary scan is complete, the Pixel will offer suggestions like "full-length portrait" or "urban oasis detail," previewing what the final image could look like. Tap your favorite suggestion, and the real hand-holding starts: On-screen prompts will steer you to select specific photo modes, walk around your subject and move the phone to cut unwanted objects from the frame. I haven't been thrilled by the coaching I've seen so far, but I could see the feature genuinely helping some amateurs. My advice for folks seriously interested in taking better photos, though, is to spend a little time getting to know their camera of choice -- even if it is a phone -- and to pore over images they find personally arresting to understand why. That said, I'm almost certainly going to make my parents use Camera Coach all the time.

Edit with words, not sliders

Editing photos is as much an art form as taking good ones in the first place, but getting started can feel daunting. Open up those menus and you'll soon be staring at sliders and toggles with names you've never heard of. If you'd rather not swim through all those settings, the new Pixel phones come with a new version of the Google Photos app that lets you just type or speak the edits you want to make. You don't need to know any technical lingo, either. The tool can field multipart requests like "remove the glare and brighten the photo" as well as dead-simple ones like "make the photo look better." In early demos, I've used the feature to make very specific changes, like cropping images vertically for social media and swapping out existing backdrops for AI-generated views of starry space. Thankfully, it's a little harder now to pass off images edited with AI as fully genuine. Google's new phones are the first with so-called "content credentials" built in, so edited images will contain hidden data explaining exactly how they've been tweaked.
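To picture how a type-or-speak edit box can route plain-language requests without anyone touching a slider, here's a toy keyword router. It is purely illustrative: the real feature uses a multimodal model rather than rules, and every operation name below is invented.

```python
import re

# Hypothetical edit operations; Google Photos' internal tool names are not public.
EDIT_RULES = [
    (r"\b(glare|reflection)s?\b", "remove_glare"),
    (r"\b(bright|lighting|exposure)\w*\b", "adjust_exposure"),
    (r"\b(straighten|crooked|tilted?)\b", "auto_straighten"),
    (r"\b(crop|vertical|square)\b", "smart_crop"),
]

def plan_edits(request: str) -> list[str]:
    """Map a free-text request to a list of edit operations."""
    ops = [op for pattern, op in EDIT_RULES if re.search(pattern, request.lower())]
    return ops or ["auto_enhance"]  # vague asks fall back to a general enhancer

print(plan_edits("remove the glare and brighten the photo"))
# ['remove_glare', 'adjust_exposure']
print(plan_edits("make the photo look better"))
# ['auto_enhance']
```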
Some of these AI photography features may leave you feeling kind of icky, especially since Google is still happy to take some liberties with objective reality. But Google, a company that once unilaterally decided our photos should look like Caravaggio paintings, could have gone a lot further. It's been a pleasant surprise that the company appears to realize that maybe the best way to use AI in photography is to keep it from making too many decisions for us.
[5]
The Pixel 10's Camera Might Actually Be Worse Than the Pixel 9's (for Some Users)
New smart phones are better than old smart phones, right? It should go without saying that upgrading is how you get faster processors, bigger screens, and better cameras. Except, it seems, in the case of the Pixel 10. While Google announced today that it's finally adding a telephoto lens to the Pixel 10's base model, it also quietly snuck in that it's making the existing ultra wide and wide lenses worse. Depending on how you use your phone's camera, that could actually make the Pixel 10 a downgrade for you. The regular Pixel 10's rear camera, on the surface, can finally play with the big boys. While three-lens rear cameras were previously for Pro models only, Google has given every model of the Pixel 10 a triple lens setup. However, that's come with some cuts to megapixel count for the base model. While that caveat is to be expected, it's unfortunately come at the cost of making the Pixel 10 not seem so shiny next to its predecessor -- at least in one key area. There are technically two camera downgrades here. First, the main lens, or the wide camera, is now 48 MP instead of 50 MP. However, it's also got a larger image sensor and larger aperture, and it's worth noting that the resolution loss here is small enough that those other changes could make up for it. I'll let you know if my opinion is any different once I get some hands-on time with the Pixel 10's camera, but for now, my bigger concern is with the ultra wide lens. The Pixel 10's ultra wide lens, which lets you zoom out to take shots with a wider field of view, is now only 13 MP, as opposed to 48 MP on the Pixel 9. That's a much more egregious difference, and beyond the impact to quality, the new lens will also lower the max field-of-view you can get from 123 degrees to 120 degrees. Is that sacrifice worth it for the new telephoto lens? At 10.8 MP, it's not as strong as the 48 MP telephoto lens on a Pixel 10 Pro, although it does give your phone 5x optical zoom as opposed to the 2x optical zoom on the Pixel 9, which means you can zoom in further before your phone has to start pulling in software tricks to compensate. So it all depends on how you take your photos. Have you ever needed to take a photo of something nearby, but you just couldn't fit the whole subject in frame, and there also wasn't enough room to physically move your camera further back? That's when an ultra wide lens might come in handy. Most phones are equipped with them these days, and you use them by clicking the "0.5x" zoom button. That'll pull out the field of view to fit more of your subject in frame, but not all ultra wide lenses are made equal. I haven't gotten hands-on time with the Pixel 10's camera yet, but for the sake of comparison, here are a few ultra wide shots taken with the Pixel 9's 48 MP lens: And a few shots taken with my iPhone 15 Pro's 12 MP Ultra Wide lens, which is the closest I have on hand to what you might expect from the Pixel 10 (although note that Apple's powerful post-processing might make up for some blemishes): It's not apples to apples as to the kind of quality drop you might see on the Pixel 10 (I'll be sure to do a more direct comparison in my review when I've used the phone's camera myself), but if you're a certain type of shutterbug, I think you can already see the difference. As someone who needs to do a good deal of photography while stuck in crowds, it'd be a shame to lose that kind of extra fidelity when I'm in a pinch and can't move my camera itself. 
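For a rough sense of that gap, detail along each image axis scales with the square root of the pixel count, so the quick arithmetic looks like this:

```python
# Back-of-envelope: the Pixel 9's 48 MP ultra wide vs. the Pixel 10's 13 MP.
total_ratio = 48 / 13                 # ~3.7x fewer pixels overall
linear_ratio = total_ratio ** 0.5     # ~1.9x fewer pixels along each axis
print(f"{total_ratio:.1f}x total, {linear_ratio:.1f}x per axis")
```

In other words, before any computational help, a crop from the new ultra wide holds roughly half the linear detail of the old one.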
The Pixel 9's ultra wide lens is almost on par with its main camera, which is hard to beat for people who need it. It could also be a loss for anyone who likes to take photos of landscapes but doesn't want to bother with a panorama shot. Vacationers will be among those who'll want to double-check what they'll be missing out on before "upgrading."

It's not all doom and gloom, though, and Google isn't making these changes for no reason. With a telephoto lens on the standard Pixel 10, users will now be able to get a full optical zoom up to 5x, and an AI-assisted software zoom (called Super Res Zoom) up to 20x. That's a big improvement on the 2x optical zoom on the regular Pixel 9, and even that phone's 8x software zoom. At least, it is on paper. 10.8 MP is a relatively small resolution for a phone camera, so I'll need to go a few generations back to give you an idea of what it might look like. Here are some 2x optical zoom shots on my Pixel 9: And similar shots at 3x optical zoom, again on my iPhone 15 Pro (which has a 12 MP telephoto lens). For good measure, here are some 8x "Super Res Zoom" shots on the Pixel 9.

As opposed to a traditional digital zoom, which essentially just crops images and can make them blurrier, Super Res Zoom uses machine learning to combine details from frames taken at multiple zoom levels to try to create a better shot. There's no AI image generation here, as in the newly announced Pro Res Zoom feature that the Pixel 10 Pro is getting -- although it's worth acknowledging that detail here still isn't going to be quite as realistic as with an optical zoom, which relies purely on hardware. Personally, I really can see the difference in the iPhone's 3x zoom, so if you take photos from far away a lot, the Pixel 10 could be a worthwhile upgrade for you.

While high-resolution hardware is the most sure way to get a quality, realistic photo, computational photography can help. For instance, a smartphone might base most of a 0.5x image on data from the ultra wide sensor, but could pull in some data from the wide lens to help balance out detail. Google told me it's improved its sensor processing for the Pixel 10 and is confident most users won't notice a difference between the new and old ultra wide lenses, although I'll have to get some hands-on time with the new lens to form my own opinion. While the Pixel 9 uses a 48 MP ultra wide lens, its photos are saved in a 12.5 MP format by default, meaning some computational photography is already at play for most users. Ultra wide shots on the Pixel 9 still use data from the 48 MP lens, but will merge similar pixels to reduce file size. Unfortunately, this is the only way to get Pixel post-processing on the base Pixel 9, whereas the Pro version of that phone comes with a 50 MP photo mode that doesn't compress photos. Still, an image compressed down from 48 MP is going to have more detail to work with than one shot at 13 MP, and if you do want your Pixel 9 photos without compression, you can still download the RAW photo files and adjust them yourself for a complete 48 MP ultra wide look. I'm not an editing expert, so I've avoided doing that in this article, but I still think the Pixel 9's compressed ultra wide shots look better than the ones taken with my iPhone's 12 MP sensor, and it's good to know that more talented photographers than me can get high-res ultra wide shots on the Pixel 9.

Whether the Pixel 10's camera will be a downgrade for you wholly depends on the kinds of photos you tend to take.
Do you zoom in a lot, to try to capture details from far away? Then the Pixel 10 is an upgrade, with more optical zoom distance. But do you instead prefer a wider field of view, where you might be trying to fit more of a subject into a shot, as with a group photo or an interesting landscape? The Pixel 10 could actually be worse than the Pixel 9, especially if you're comfortable editing RAW photos. I'd love to give you a solid "yes" or "no," but in this case, the answer depends on what kind of user you are. If you're a certain kind of Android-using photographer, don't assume that newer means better here.
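One footnote on the merge-similar-pixels trade-off described above: saving a 48 MP readout as 12.5 MP is essentially 2x2 pixel binning, which trades resolution for lower noise and smaller files. A simplified sketch follows (real sensors bin within a color filter array and do much more; this just shows the averaging):

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of sensor values into one output pixel,
    quartering the pixel count while averaging down the noise."""
    h, w = sensor.shape
    h, w = h - h % 2, w - w % 2                  # drop odd edge rows/cols
    s = sensor[:h, :w].astype(np.float32)
    return (s[0::2, 0::2] + s[0::2, 1::2] +
            s[1::2, 0::2] + s[1::2, 1::2]) / 4.0

readout = np.random.randint(0, 1024, size=(8000, 6000)).astype(np.uint16)  # ~48 MP
binned = bin_2x2(readout)                        # ~12 MP
print(readout.size / 1e6, binned.size / 1e6)     # 48.0 12.0
```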
[6]
Google Is Quietly Building AI Into the Pixel Camera App, and It Worries Me
Google's Pixel 10 phones made their official debut this week, and with them, a bunch of generative AI features baked directly into the camera app. It's normal for phones to use "computational photography" these days, a fancy term for all those lighting and post-processing effects they add to your pics as you snap them. But AI makes computational photography into another beast entirely, and it's one I'm not sure we're ready for.

We tech nerds love to ask ourselves "what is a photo?", kind of joking that the more post-processing gets added to a picture, the less it resembles anything that actually happened in real life. Night skies being too bright, faces having fewer blemishes than a mirror would show, that sort of thing. Generative AI in the camera app is like the final boss of that moral conundrum. That's not to say these features aren't all useful, but at the end of the day, this is as much a philosophical debate as a technical one. Are photos supposed to look like what the photographer was actually seeing with their eyes, or are they supposed to look as attractive as possible, realism be damned? It's been easy enough to keep these questions to the most nitpicky circles for now -- who really cares if the sky is a little too neon if it helps your pic pop more? -- but if AI is going to start adding whole new objects or backgrounds to your photos, before you even open the Gemini app, it's time for everyone to start asking themselves what they want out of their phones' cameras. And the way Google is using AI in its newest phones, it's possible you could end up with an AI photo and not really know it.

Maybe the most egregious of Google's new AI camera additions is what it's calling Pro Res Zoom. Google is advertising this as "100x zoom," and it works kind of like the wholly fictional "zoom in and enhance" tech you might see in old-school police procedurals. Essentially, on a Pixel 10 Pro or Pro XL, you'll now be able to push the zoom in by 100 times, and on the surface, the experience will be no different than a regular software zoom (which relies on cropping, not AI). But inside your phone's processor, it'll still run into the same problems that make "zoom in and enhance" seem so ludicrous in shows like CSI.

In short, the problem is that you can't invent resolution the camera didn't capture. If you've zoomed in so far that your camera lens only saw vague pixels, then it will never be able to know for sure what was actually there in real life. That's why this feature, despite seeming like a normal, non-AI zoom on the surface, is more of an AI edit than an actual 100x zoom. When you use Pro Res Zoom, your phone will zoom in as much as it can, then use whatever blurry pixels it sees as a prompt for an on-device diffusion model. The model will then guess what the pixels are supposed to look like, and edit the result into your shot. It won't be capturing reality, but if you're lucky, it might be close enough. For certain details, like rock formations or other mundane inanimate objects, that might be fine. For faces or landmarks, though, you could leave with the impression that you just got a great close-up of, say, the lead singer at a concert, without knowing that your "zoom" was basically just a fancy Gemini request.
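That blurry-pixels-as-prompt pipeline is the same image-to-image idea exposed by open-source diffusion tooling. Google's on-device model, weights and tuning are proprietary, so the sketch below is only a desktop-scale analogue using the Hugging Face diffusers library, with the checkpoint and strength value chosen arbitrarily: low strength stays anchored to the captured pixels, high strength invents more.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Any img2img-capable Stable Diffusion checkpoint works here.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

blurry_crop = Image.open("zoomed_crop.jpg").convert("RGB").resize((512, 512))

# strength controls how far the model may wander from the input pixels;
# the higher it goes, the more detail is invented (i.e., hallucinated).
result = pipe(
    prompt="sharp, detailed photo of a distant building facade",
    image=blurry_crop,
    strength=0.35,
).images[0]
result.save("refined_zoom.jpg")
```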
Google says it's trying to tamp down on hallucinations, but if a photo spat out by Gemini is something you're uncomfortable posting or including in a creative project, this will have the same issues -- except that, because of the branding, you might not realize AI was involved. Luckily, Pro Res Zoom doesn't replace non-AI zoom entirely -- zooming in past the usual 5x hardware zoom limit will now give you two results to pick from, one with Pro Res Zoom applied and one without. I wrote about this in more detail if you're interested, but even with non-AI options available, the AI one isn't clearly indicated while you're making your selection. That's a much more casual approach to AI than Google's taken in the past. People might be used to AI altering their photos when they ask for it, but having it automatically applied through your camera lens is a new step. The casual AI integration doesn't stop once you've taken your photo, though. With Pixel 10, you can now use natural language to ask AI to alter your photos for you, right from the Google Photos app. Simply open up the photo you want to change, tap the edit icon, and you'll see a chat box that will let you use natural language to suggest tweaks to your photo. You can even speak your instructions rather than type them, if you want. On the surface, I don't mind this. Google Photos has dozens of different edit icons, and it can be difficult for the average person to know how to use them. If you want a simple crop or filter applied, this gives you an option to get that done without going through what could be an otherwise intimidating interface. The problem is, in addition to using old-school Google Photos tools, Ask to Edit will also allow you to suggest more outlandish changes, and it won't clearly delineate when it's using AI to accomplish those changes. You could ask the AI to swap out your photo's background for an entirely new one, or if you want a less drastic change, you could ask it to remove reflections from a shot taken through a window. The issue? Plenty of these edits will require generative AI, even the seemingly less destructive ones like glare elimination, but you'll have to use your intuition to know when it's been applied. For example, while you'll usually see an "AI Enhance" button among Google Photos' suggested edits, it's not the only way to get AI in your shot. Ask to Edit will do its best to honor whatever request you make, with whatever tools it has access to, and given some hands-on experience I had with it at a demo with Google, this includes AI generation. It might be obvious that it'll use AI to, say, "add a Mercedes behind me in this selfie," but I could see a less tech savvy user assuming that they could ask the AI to "zoom out" without knowing that changing an aspect ratio without cropping also requires using generative AI. Specifically, it requires asking an AI to imagine what might have surrounded whatever was in your shot in real life. Since it has no way of knowing this, it comes with an inherently high risk of hallucination, no matter how humble "zoom out" sounds. Since we're talking about a tool designed to help less tech-literate users, I worry there's a good chance they could accidentally wind up generating fiction, and think it's a totally innocent, realistic shot. Then there's Camera Coach. This feature also bakes AI into your Camera app, but doesn't actually put AI in your photos. 
Instead, it uses AI to suggest alternate framing and angles for whatever your camera is seeing, and coaches you on how to achieve those shots. In other words, it's very what-you-see-is-what-you-get. Camera Coach's suggestions are just ideas, and even though following through on them takes more work on your end, you can be sure that whatever photo you snap is going to look exactly like what you saw in your viewfinder, with no AI added. That pretty much immediately erases most of my concerns about unreal photos being presented as absolute truth. There is the possibility that Camera Coach might suggest a photo that's not actually possible to take, say if it wants you to walk into a restricted area, but the worst you're going to get there is frustration, not a photo that passes off AI generation as if it's the same as, say, zooming in. I'm not going to solve the "what is a photo?" question in one afternoon. The truth is that some photos are meant to represent the real world, and some are just supposed to look aesthetically pleasing. I get it. If AI can help a photo look more visually appealing, even if it's not fully true-to-life, I can see the appeal. That doesn't erase any potential ethical concerns about where training data comes from, so I'd still ask you to be diligent with these tools. But I know that pointing at a photo and saying "that never actually happened" isn't a rhetorical magic bullet. What worries me is how casually Google's new AI features are being implemented, as if they're identical to traditional computational photography, which still always uses your actual image as a base, rather than making stuff up. As someone who's still wary of AI, seeing AI image generation disguised as "100x zoom" immediately raises my alarm bells. Not everyone pays attention to these tools the way I do, and it's reasonable for them to expect that these features do what they say on the tin, rather than introducing the risk of hallucination. In other words, people should know when AI is being used in their photos, so that they can be confident when their shots are realistic, and when they're not. Referring to zoom using a telephoto lens as "5x zoom" and zoom that layers AI over a bunch of pixels as "100x zoom" doesn't do that, and neither does building a natural language editor into your Photos app that doesn't clearly tell you when it's using generative AI and when it isn't. Google's aware of this problem. All photos taken on the Pixel 10 now come with C2PA content credentials built-in, which will say whether AI was used in the photo's metadata. But when's the last time you actually checked a photo's metadata? Tools like Ask to Edit are clearly being made to be foolproof, and expecting users to manually scrub through each of their photos to see which ones were edited with AI and which weren't isn't realistic, especially if we're making tools that are specifically supposed to let users take fewer steps before getting their final photo. It's normal for someone to expect AI will be used when they open the Gemini app, but including it in previously non-AI tools like the Camera app needs more fanfare than quiet C2PA credentials and one vague sentence in a press release. Notifying a user when they're about to use AI should happen before they take their photo, or before they make their edit. It shouldn't be quietly marked down for them to find later, if they choose to go looking for it. 
Other AI photo tools, like those from Adobe, already do this through a simple watermark applied to any project using AI generation. While I won't tell you what to think about AI-generated images overall, I will say that you shouldn't be put in a position where you're making one by accident. Of Google's AI camera innovations, I'd say Camera Coach is the only one that fully avoids that trap. For a big new launch from the creator of Android, an ecosystem Google proudly touted as "open" during this year's Made by Google, a one-out-of-three hit rate on transparency isn't what I'd expect.
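If you do go looking, the presence of Content Credentials is at least easy to detect programmatically. In a JPEG, C2PA manifests travel in APP11 (JUMBF) segments, so a few lines of Python can flag whether a manifest exists at all. This is a presence check only, based on my reading of the C2PA spec; it does no signature verification, and real validation should go through an official C2PA tool such as the c2patool CLI.

```python
def has_c2pa_manifest(path: str) -> bool:
    """Scan a JPEG's metadata segments for an embedded C2PA (JUMBF) box.
    Simplified parser: walks marker segments until the image data starts."""
    with open(path, "rb") as f:
        data = f.read()
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                # start of scan: no more metadata
            break
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in segment:   # APP11 carries JUMBF
            return True
        i += 2 + seg_len
    return False

print(has_c2pa_manifest("pixel_shot.jpg"))
```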
Google's new Pixel 10 series introduces AI-powered camera features, including enhanced zoom capabilities and AI-assisted photography, but also raises questions about image authenticity and potential downgrades in some camera specs.
Google's latest Pixel 10 series, including the Pixel 10, Pixel 10 Pro, and Pixel 10 Pro Fold, showcases significant advancements in smartphone camera technology, heavily leveraging artificial intelligence (AI) [1]. The new devices, powered by the Google Tensor G5 processor co-designed with Google DeepMind, introduce a range of AI-driven features aimed at enhancing the photography experience [2].
One of the most notable features is Pro Res Zoom, which pushes the zoom capabilities of the Pixel 10 Pro to an impressive 100x [1]. This feature uses generative AI to refine and rebuild areas of the image based on the underlying pixels captured by the camera sensor. While this technology promises crisp lines and features at extreme zoom levels, it also raises questions about image authenticity [1].
Google's Isaac Reynolds, group product manager for Pixel cameras, assures that "every image you see should be truly authentic to the real photo," despite initial concerns about visual artifacts [1]. The feature works best on static scenes and avoids reconstructing faces or people to maintain authenticity [1].
The Pixel 10 series introduces several AI-powered features to assist users in capturing and editing photos:
Camera Coach: This feature uses Gemini's multimodal capabilities to suggest angles, lighting, and camera modes, even generating sample images for users to aim for [2].
Best Take and Auto Best Take: These features analyze multiple frames to create a perfect group photo where everyone looks their best [1][2].
AI-powered editing: Users can describe desired changes using natural language, and the AI will make the adjustments instantaneously [2].
Google has also implemented AI in audio-related features:
Voice Translation: This feature allows real-time translation during phone calls, mimicking the speaker's voice for a more natural conversation [2][3].
AI-generated music for recordings: The Recorder app can now add AI-generated backing tracks to voice recordings, though the results may vary in quality [3].
Despite the innovative features, the Pixel 10 series has sparked some controversies:
Camera Downgrades: Some reports suggest that the base Pixel 10 model may have lower megapixel counts in its ultra-wide and wide lenses compared to its predecessor, potentially affecting image quality in certain scenarios [5].
Image Authenticity: The use of AI in extreme zoom and image enhancement raises questions about the authenticity of the resulting photos, especially in cases where details are being generated rather than captured [4].
Privacy Concerns: The enhanced zoom capabilities have prompted discussions about potential misuse for covert photography, though Google has implemented measures to limit AI enhancement of human subjects in such scenarios [4].
While Google's Pixel phones have historically been praised for their camera capabilities, they remain underdogs in the smartphone market dominated by Apple and Samsung [4]. The new AI features represent Google's attempt to differentiate its products and appeal to both amateur and enthusiast photographers [1][2][4].
As these devices hit the market, it remains to be seen how users will respond to the balance between AI-enhanced capabilities and concerns about image authenticity. The true test will come as reviewers and consumers get hands-on experience with the new Pixel 10 series and its AI-powered camera features [3][4][5].