2 Sources
[1]
"Unexpectedly, a deer briefly entered the family room": Living with Gemini Home
You just can't ignore the effects of the generative AI boom. Even if you don't go looking for AI bots, they're being integrated into virtually every product and service. And for what? There's a lot of hand-wavey chatter about agentic this and AGI that, but what can "gen AI" do for you right now? Gemini for Home is Google's latest attempt to make this technology useful, integrating Gemini with the smart home devices people already have. Anyone paying for extended video history in the Home app is about to get a heaping helping of AI, including daily summaries, AI-labeled notifications, and more.

Given the supposed power of AI models like Gemini, recognizing events in a couple of videos and answering questions about them doesn't seem like a bridge too far. And yet Gemini for Home has demonstrated a tenuous grasp of the truth, which can lead to some disquieting interactions, like periodic warnings of home invasion, both human and animal. It can do some neat things, but is it worth the price -- and the headaches?

Does your smart home need a premium AI subscription?

Simply using the Google Home app to control your devices does not turn your smart home over to Gemini. This is part of Google's higher-tier paid service, which comes with extended camera history and Gemini features for $20 per month. That subscription pipes your video into a Gemini AI model that generates summaries for notifications, as well as a "Daily Brief" that offers a rundown of everything that happened on a given day. The cheaper $10 plan provides less video history and no AI-assisted summaries or notifications. Both plans enable Gemini Live on smart speakers.

According to Google, it doesn't send all of your video to Gemini. That would be a huge waste of compute cycles, so Gemini only sees (and summarizes) event clips.
Those summaries are then distilled at the end of the day to create the Daily Brief, which usually results in a rather boring list of people entering and leaving rooms, dropping off packages, and so on. Importantly, the Gemini model powering this experience is not multimodal -- it only processes visual elements of videos and does not integrate audio from your recordings. So unusual noises or conversations captured by your cameras will not be searchable or reflected in AI summaries. This may be intentional to ensure your conversations are not regurgitated by an AI.

Paying for Google's AI-infused subscription also adds Ask Home, a conversational chatbot that can answer questions about what has happened in your home based on the status of smart home devices and your video footage. You can ask questions about events, retrieve video clips, and create automations. There are definitely some issues with Gemini's understanding of video, but Ask Home is quite good at creating automations. It was possible to set up automations in the old Home app, but the updated AI is able to piece together automations based on your natural language request. Perhaps thanks to the limited set of possible automation elements, the AI gets this right most of the time.

Ask Home is also usually able to dig up past event clips, as long as you are specific about what you want. The Advanced plan for Gemini Home keeps your videos for 60 days, so you can only query the robot on clips from that time period. Google also says it does not retain any of that video for training. The only instance in which Google will use security camera footage for training is if you choose to "lend" it to Google via an obscure option in the Home app. Google says it will keep these videos for up to 18 months or until you revoke access. However, your interactions with Gemini (like your typed prompts and ratings of outputs) are used to refine the model.
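The clip-then-brief flow described above can be sketched as a two-stage pipeline. This is a rough illustration of the architecture as the article describes it, not Google's actual implementation; the class and function names are assumptions made up for the sketch:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EventClip:
    """One motion-triggered event clip and its per-clip caption."""
    camera: str
    start: datetime
    summary: str  # caption produced by the vision model (hypothetical stage 1)

def summarize_clip(camera: str, start: datetime, caption: str) -> EventClip:
    # Stage 1 (hypothetical): only event clips are sent to the model;
    # continuous footage and audio are never processed.
    return EventClip(camera=camera, start=start, summary=caption)

def daily_brief(clips: list[EventClip]) -> str:
    # Stage 2 (hypothetical): the day's per-clip summaries are distilled
    # into a single chronological rundown at the end of the day.
    lines = [f"{c.start:%H:%M} ({c.camera}): {c.summary}"
             for c in sorted(clips, key=lambda c: c.start)]
    return "Daily Brief:\n" + "\n".join(lines)
```

Note that because audio never enters stage 1, nothing audible can surface in stage 2 -- which matches the behavior the article observes.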
The unexpected deer

Every generative AI bot makes the occasional mistake, but you'll probably not notice every one. When the AI hallucinates about your daily life, however, it's more noticeable. There's no reason Google should be confused by my smart home setup, which features a couple of outdoor cameras and one indoor camera -- all Nest-branded with all the default AI features enabled -- to keep an eye on my dogs. So the AI is seeing a lot of dogs lounging around and staring out the window. One would hope that it could reliably summarize something so straightforward. One may be disappointed, though.

In my first Daily Brief, I was fascinated to see that Google spotted some indoor wildlife. "Unexpectedly, a deer briefly entered the family room," Gemini said. Gemini does deserve some credit for recognizing that the appearance of a deer in the family room would be unexpected. But the "deer" was, naturally, a dog. This was not a one-time occurrence, either. Gemini sometimes identifies my dogs correctly, but many event clips and summaries still tell me about the notable but brief appearance of deer around the house and yard.

This deer situation serves as a keen reminder that this new type of AI doesn't "think," although the industry's use of that term to describe simulated reasoning could lead you to believe otherwise. A person looking at this video wouldn't even entertain the possibility that they were seeing a deer after they've already seen the dogs loping around in other videos. Gemini doesn't have that base of common sense, though. If the tokens say deer, it's a deer. I will say, though, Gemini is great at recognizing car models and brand logos. Make of that what you will.

The animal mix-up is not ideal, but it's not a major hurdle to usability. I didn't seriously entertain the possibility that a deer had wandered into the house, and it's a little funny the way the daily report continues to express amazement that wildlife is invading.
It's a pretty harmless screw-up. "Overall identification accuracy depends on several factors, including the visual details available in the camera clip for Gemini to process," explains a Google spokesperson. "As a large language model, Gemini can sometimes make inferential mistakes, which leads to these misidentifications, such as confusing your dog with a cat or deer."

Google also says that you can tune the AI by correcting it when it screws up. This works sometimes, but the system still doesn't truly understand anything -- that's beyond the capabilities of a generative AI model. After telling Gemini that it's seeing dogs rather than deer, it sees wildlife less often. However, it doesn't seem to trust me all the time, causing it to report the appearance of a deer that is "probably" just a dog.

A perfect fit for spooky season

Gemini's smart home hallucinations also have a less comedic side. When Gemini mislabels an event clip, you can end up with some pretty distressing alerts. Imagine that you're out and about when your Gemini assistant hits you with a notification telling you, "A person was seen in the family room." A person roaming around the house you believed to be empty? That's alarming. Is it an intruder, a hallucination, a ghost? So naturally, you check the camera feed to find... nothing. An Ars Technica investigation confirms AI cannot detect ghosts. So a ghost in the machine? On several occasions, I've seen Gemini mistake dogs and totally empty rooms (or maybe a shadow?) for a person. It may be alarming at first, but after a few false positives, you grow to distrust the robot. Now, even if Gemini correctly identified a random person in the house, I'd probably ignore it.

Unfortunately, this is the only notification experience for Gemini Home Advanced. "You cannot turn off the AI description while keeping the base notification," a Google spokesperson told me. They noted, however, that you can disable person alerts in the app.
Those are enabled when you turn on Google's familiar faces detection.

Gemini often twists reality just a bit instead of creating it from whole cloth. A person holding anything in the backyard is doing yardwork. One person anywhere, doing anything, becomes several people. A dog toy becomes a cat lying in the sun. A couple of birds become a raccoon. Gemini likes to ignore things, too, like denying there was a package delivery even when there's a video tagged as "person delivers package."

At the end of the day, Gemini is labeling most clips correctly and therefore produces mostly accurate, if sometimes unhelpful, notifications. The problem is the flip side of "mostly," which is still a lot of mistakes. Some of these mistakes compel you to check your cameras -- at least, before you grow weary of Gemini's confabulations. Instead of saving time and keeping you apprised of what's happening at home, it wastes your time. For this thing to be useful, inferential errors cannot be a daily occurrence.

Learning as it goes

Google says its goal is to make Gemini for Home better for everyone. The team is "investing heavily in improving accurate identification" to cut down on erroneous notifications. The company also believes that having people add custom instructions is a critical piece of the puzzle. Maybe in the future, Gemini for Home will be more honest, but it currently takes a lot of hand-holding to move it in the right direction.

With careful tuning, you can indeed address some of Gemini for Home's flights of fancy. I see fewer deer identifications after tinkering, and a couple of custom instructions have made the Home Brief waste less space telling me when people walk into and out of rooms that don't exist. But I still don't know how to prompt my way out of Gemini seeing people in an empty room.
Despite its intention to improve Gemini for Home, Google is releasing a product that just doesn't work very well out of the box, and it misbehaves in ways that are genuinely off-putting. Security cameras shouldn't lie about seeing intruders, nor should they tell me I'm lying when they fail to recognize an event. The Ask Home bot has the standard disclaimer recommending that you verify what the AI says. You have to take that warning seriously with Gemini for Home.

At launch, it's hard to justify paying for the $20 Advanced Gemini subscription. If you're already paying because you want the 60-day event history, you're stuck with the AI notifications. You can ignore the existence of Daily Brief, though. Stepping down to the $10 per month subscription gets you just 30 days of event history with the old non-generative notifications and event labeling. Maybe that's the smarter smart home bet right now.

For now, Gemini for Home is available only to those who opted into early access in the Home app. So you can avoid Gemini for the time being, but it's only a matter of time before Google flips the switch for everyone. Hopefully it works better by then.
[2]
AI Is Making a Lot of Big Promises, but It Can't Even Properly Identify My Cat
Experiencing a new level of intensely close monitoring -- and the AI-hallucination chaos that ended up snowballing -- left me slightly creeped out. And I'm more than a little annoyed that Google seems to expect me, and other camera owners, to do the hard work of training Gemini, while paying for my own labor.

The promise of Gemini, according to Google, will be felt across the smart home, as it brings more natural and conversational voice interactions with smart speakers, fluid integration with other smart devices, and new features such as annotated descriptions and notifications for video recordings and the ability to use text or voice searches to locate specific details in video recordings.

I installed the Google Nest Cam Outdoor (Wired, 2nd Gen) outside facing my driveway, and I put the Google Nest Cam Indoor (Wired, 3rd Gen) in my living room. I placed an older indoor Google Nest camera (which also received a Gemini boost) in my kitchen. I enabled Familiar Face alerts (an optional facial-recognition feature) and then also opted in to an early release of Gemini for Home, as well as a $20-per-month Advanced subscription (you do that part online), which adds AI-generated descriptions to smartphone notifications of video events, annotations of the action in the videos, the daily Home Brief, and the option to perform an Ask Home video search.

Some good news: Automations work pretty well. Using the microphone icon on the Ask Home bar in the app, I told Gemini: "Every time someone walks in front of the living room camera after 6:00 p.m., turn on the deck lights." It created the task but still required a few manual clicks, and you can't delete or disable automations via Gemini yet. Overall, though, the experience is an improvement on a platform that always seemed wonky, and it did work.

Less happy news: When it comes to Gemini's AI powers of observation, there's still a lot of hard work to be done.
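The deck-lights request above decomposes into the starter/condition/action pattern that home automation systems generally use. The sketch below is a generic illustration of that decomposition -- the field names and `should_fire` helper are assumptions for the example, not the Google Home automation format:

```python
from datetime import time

# Hypothetical structured form of: "Every time someone walks in front of
# the living room camera after 6:00 p.m., turn on the deck lights."
automation = {
    "starter": {"device": "living_room_camera", "event": "person_detected"},
    "condition": {"after": time(18, 0)},  # 6:00 p.m.
    "action": {"device": "deck_lights", "command": "on"},
}

def should_fire(rule: dict, event: dict, now: time) -> bool:
    # Fire only when the triggering event matches the starter
    # and the time-of-day condition holds.
    starter = rule["starter"]
    return (event.get("device") == starter["device"]
            and event.get("event") == starter["event"]
            and now >= rule["condition"]["after"])
```

The small, fixed vocabulary of starters, conditions, and actions is likely why natural-language requests map onto automations reliably, as both articles observe: the model only has to fill a few constrained slots rather than generate free-form behavior.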
Things quickly went from being invasive but slightly boring to overwhelming and completely bananas. On the first day, the results were fairly typical but also a bit TMI. With pre-Gemini Nest cameras, motion would trigger a basic smartphone alert like "Person detected." With the new camera, I immediately got a more detailed description with Gemini's interpreted context ("Person walks into the room.") Since I had enabled Familiar Face detection and labeled my family's faces, notifications instantly became more specific, like "Rachel walks downstairs."

Gemini was just getting warmed up. Within a day descriptions continued to add detail: "Person drinks water in the kitchen," "cat jumps on couch," or "cat plays with toy." With three cameras inside and outside my home, I was soon on the receiving end of a firehose of alerts: when items were tossed in the trash cans, when delivery people arrived, and when my cat groomed himself on top of my couch. For giggles I threw on a disguise to see what would happen, and Gemini impressively and accurately described the event as "A person wearing a cat mask and a black shirt walks into the room."

Over just a couple of days of use, Gemini sent me dozens of banal descriptions about my household. But soon the hallucinations kicked in. Gemini notified me that our cat was scratching the couch and also that my husband was walking into the room, but a look at the footage showed that Kitty was innocent of the crime, and my husband was nowhere to be seen. Then there were descriptions that my husband and I were relaxing "along with others," though unless Gemini is able to see poltergeists, we were the only two in the house. Afterward, Gemini perceived my definitely orange cat as being several other colors, which made the AI assume I had a multitude of cats. And then Gemini insisted I had a dog, as well as a chipmunk, a hamster, and mice -- oh, lots of mice. To Gemini's mind, we were infested.
This came as a great surprise to us (not least of all our cat). The Google Home app allows you to provide feedback on each video clip to improve accuracy, but only through the not-very-scientific thumbs-up/thumbs-down model. Over the course of three weeks, my cameras captured an average of 300 events on any given day. I can't imagine anyone having the time or patience to keep up with that.

Although this whole experience was a source of entertainment for my co-workers and me, it quickly turned to concern. Security cameras aren't supposed to be funny or entertaining. They're safety devices tasked with protecting people and their possessions. People rely on cameras to be unbiased observers, but the addition of AI that interprets what it sees introduces bias -- and if it's inaccurate, it's no longer useful. Gemini for Home, in its current form, is plagued by AI hallucinations, constantly prone to mislabeling people, colors, activities, objects, and animals, and that makes it fundamentally untrustworthy. And the less you trust your security device, the less you rely on it, even when you absolutely should.

We weren't the only ones having experiences like this. (As I write this, my Nest camera just labeled my 6-foot-3 husband as a child and claimed that his armful of laundry was a baby.) At best, Google's Gemini for Home is a rough beta program that's unfocused and unreliable -- and that makes it potentially dangerous when implemented on a security device.

A Google spokesperson told us: "Gemini for Home (including AI descriptions, Home Brief, and Ask Home) is in early access, so users can try these new features and continue giving us feedback as we work to perfect the experience. As part of this, we are investing heavily in improving accurate identification. This includes incorporating user-provided corrections to generate more accurate AI descriptions.
Since all Gemini for Home features rely on our underlying Familiar Faces identification, improving this accuracy also means improving the quality of Familiar Faces. This is an active area of investment and we expect these features to keep improving over time."

We'll continue to test Gemini for Home, but for now we can't recommend it to anyone who relies on these cameras for security or peace of mind.
Google's new Gemini for Home AI service, part of a $20/month subscription, promises smart home automation and video analysis but consistently misidentifies pets as deer, creates phantom intruders, and generates unreliable security alerts through widespread AI hallucinations.
Google's latest foray into AI-powered smart home technology, Gemini for Home, is generating more confusion than convenience for users willing to pay $20 per month for the premium service. The AI-enhanced home monitoring system, designed to provide intelligent summaries and automated responses to security camera footage, has demonstrated a troubling pattern of misidentifying common household objects and pets [1].
Source: Ars Technica
The service integrates Gemini AI with existing Google Nest cameras and smart home devices, promising daily summaries, AI-labeled notifications, and conversational search capabilities. However, real-world testing reveals significant gaps between the technology's promises and its performance [2].

Users report consistent problems with object recognition that go beyond occasional errors. In one documented case, Gemini repeatedly identified dogs as deer, leading to daily summaries that included phrases like "Unexpectedly, a deer briefly entered the family room" [1]. The AI system acknowledged the unusual nature of indoor deer sightings but failed to correct its fundamental misidentification.

Similar issues plague cat recognition, with the system perceiving a single orange cat as multiple animals of different colors, leading to false reports of dogs, chipmunks, hamsters, and mice infestations [2]. These hallucinations extend beyond animals to include phantom human presences, with the AI reporting people in rooms when security footage shows no one present.
Source: The New York Times
Google's implementation processes only visual elements from security cameras, deliberately excluding audio to protect user privacy. The system analyzes event clips rather than continuous footage, generating summaries that feed into daily briefs [1]. While this approach reduces computational costs and privacy risks, it hasn't prevented the widespread recognition failures.

The service retains video footage for 60 days and claims not to use customer videos for training purposes, except when users explicitly opt in through an obscure setting. However, user interactions with Gemini, including prompts and feedback ratings, are used to refine the model [1].

Users report receiving hundreds of notifications daily, with detailed descriptions of mundane activities like "Person drinks water in the kitchen" or "cat jumps on couch" [2]. This flood of information, combined with frequent inaccuracies, creates a challenging user experience that defeats the purpose of automated monitoring.

The system's feedback mechanism relies on simple thumbs-up or thumbs-down ratings for each video clip, but with 300 daily events on average, users find it impractical to provide meaningful training data [2].
Despite recognition problems, Gemini for Home shows promise in automation creation. The system successfully interprets natural language requests to create smart home routines, such as "Every time someone walks in front of the living room camera after 6:00 p.m., turn on the deck lights" [2]. This functionality represents an improvement over previous Google Home automation tools, though users still need manual confirmation for setup.

The reliability issues raise serious concerns about the system's effectiveness as a security tool. When AI consistently misidentifies objects and creates false alerts, users may lose trust in legitimate warnings. Security cameras serve as unbiased observers, but AI interpretation introduces bias and potential blind spots that could compromise home safety [2].

Summarized by Navi