2 Sources
[1]
Meta Explained Its Smart Glasses AI Privacy Policies to Me. I'm Still Worried
I've spent nearly 20 years writing about tech, and over a decade reviewing wearable tech, VR, and AR products and apps. I wear Meta's Ray-Bans off and on when I travel to snap photos, take phone calls and listen to music. The technology is fascinating, fun and convenient. I also knew that Meta's privacy policies might be a concern, but now I'm more worried about it than ever before.

My concerns ramped up after a number of friends and colleagues shared a report about Meta's third-party contractors in Kenya being able to view sensitive information like photos of banking records, nudity and sexual encounters that had been recorded on Meta glasses (which has resulted in a class action lawsuit). What boundaries had Meta set up to protect people's privacy? I pored over Meta's terms of service online and in the Meta AI app, but that was no help. I wanted some answers, so I contacted Meta's comms team to get clarity. But even after getting the official answer from Meta about where the lines are drawn, I'm still frustrated and uncertain.

While many people are rightly worried about someone secretly recording them with smart glasses, there's also another wrinkle: When are these glasses potentially sharing what you've been recording with others?

Here's the short answer: Do Meta's glasses have third-party contractors potentially looking over your data? Yes, sometimes -- if you're using AI services. If you're not using those AI services, then according to Meta, you should be OK. But even then, I don't know where that "AI services" wall gets clearly drawn. And that's one of my biggest concerns.

Meta has a long history of problems with both privacy and trust, stretching back into the last decade to the Cambridge Analytica scandal. Those issues haven't come up with Meta's VR headsets, which don't have many data-collecting AI services, but the company's smart glasses do. And those services will keep growing and becoming more capable over the next few years.
Meta's popular Ray-Ban glasses -- more than 7 million pairs were sold last year -- are the frontrunners in a whole wave of camera-enabled AI glasses and wearables coming from a number of companies, with Google entering the mix later this year. If you're interested in Meta's glasses, which, as a technical achievement, are the best-quality camera- and audio-enabled smart glasses at the moment, you need to keep these concerns in mind. And as smart glasses pivot to always-on AI-enabled devices, we're only going to run into more questions about how comfortable you might feel leaning on their services -- and what all the cloud-based AI tech companies need to do to make these policies clearer.

Below, I'm going to share Meta's responses at length so you can understand my reasoning -- and also make your own assessment of the risks.

If you're using AI -- for instance, to analyze something you see or to get a translation -- then third-party contractors might be looking at what you're recording. This is what the company told me: "Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they've captured with Meta or others, that media stays on the user's device."

But then there's this: "When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed."

The assumption you can make from this is that any time you're using Meta's AI services, Meta may very well be using third-party contractors to review the information. While Meta promises that the information is properly filtered to remove sensitive data or details, that worrisome news report said contractors in Kenya were annotating footage taken from glasses that had sensitive images that were clearly visible.
That has me especially concerned about what happens when people use Meta AI for assistive purposes: namely, as a way to "see" when you can't with your own eyes. Would looking at personal documents and reading them back be a risky thing to do? Since Meta hasn't introduced any sort of encrypted, private AI features on its glasses, it could be.

Meta does say this about privacy protections: "We have strict policies and guardrails in place that intentionally limit what information contractors see." But again, I don't actually know what those strict policies or guardrails are. "We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed," Meta added. That doesn't clarify any of the specifics. I'm going on trust here, which isn't ideal at all. I have to assume that anything done via cloud AI services, like the ones Meta uses, could be seen to some degree by third-party contractors. And you should too.

Meta's glasses don't use AI all the time, and neither do I. In fact, I mostly use Meta's glasses to record photos and video, listen to music and make phone calls. I don't use the AI much, in part because Meta's AI has very little interaction with or control over my other personal data or even my iPhone. For non-AI photo and video recording, things should be safe... I think.

I asked members of the comms team whether photo or video recordings that I made with the glasses, and that weren't involved in AI-based invocations, could be subject to third-party contractor viewing. They said this: "To be clear, the photos and videos that users take with their AI glasses that are simply stored on their phone's camera roll are not used by Meta to develop and improve AI. If you just record a video or take a photo using the glasses' camera button, that media stays on your phone. Unless you choose to share media you've captured with Meta or others, that media stays on your device."

That sounded promising.
But with Meta's glasses settings, storage becomes a little cloudy... literally. In the Meta AI app's Glasses Privacy settings, a Cloud Media toggle claims to "allow your photos and videos to be sent to Meta's cloud for processing and temporary storage." Would cloud media mean my personal photos and videos were open to possible third-party contractor annotation?

According to Meta, no. Any commands using AI to send photos, or Autocapture modes that get enabled by toggling on Cloud Media, will be safe too. In the company's words: "Certain features, like sharing from your glasses using your voice ('Hey Meta, send a photo'), seamless auto-importing of media, or Autocapture, where the camera automatically takes photos or videos when you start the feature (useful for moments where you may want to capture content without manually triggering the camera via the button or voice), may require sending your photos and videos to Meta's cloud for processing and temporary storage. If you enroll in cloud media services, the photos and videos sent from the frames or auto-imported to your phone are not subject to human annotation. Enabling cloud media services is opt-in and not on by default."

Meta doesn't clearly define what exactly "Cloud Media" is, other than a temporary storage spot for your photos and videos so they can be processed with voice commands. And what worries me is how a wall gets drawn around "private" versus "AI-connected" media. It makes me want to toggle Cloud Media off, which would mean my photos and videos are stored only in my phone's photo library.

I still like the camera and audio features of smart glasses and am intrigued by the AI features coming. But I'm also very concerned by the uncertainty about where the line is drawn between what potentially gets annotated by a third party and what stays private. Meta's using those third parties to help train AI, or possibly to moderate content.
It's a reminder of how cloud-based and out of our control so many AI services are. I get even more worried thinking about reports of Meta wanting to add facial recognition and more to its smart glasses. Meanwhile, more AI glasses are coming, and wearable camera-equipped AI devices, too. Google is up next. All of these companies need to make it much clearer how they're using the data from these devices, how they're protecting our privacy, and how we users can manage it -- if at all.

It's not easy to understand how Meta's glasses handle AI data, or where it's being sent. I'm hoping this story helps you better understand where the lines might be. Even so, I have to admit I feel a lot less likely to use Meta's glasses for anything personal or data-sensitive. Vacation glasses? A tool for quick social footage for work that I'm broadcasting anyway? Experiments with AI? I think so. But if Meta's aiming to be a deeply assistive tool for us via AI wearables, and doesn't want everyone calling them "pervert glasses," which people already are, it needs to do better, fast.
[2]
Meta Has Smart Glasses Spiraling Towards Glasshole 2.0
Maybe using people's naked videos to train AI wasn't such a good idea after all.

If there was one surprise hit last year in the consumer tech world, it was smart glasses, and Meta was one of the biggest winners. Meta, with the help of EssilorLuxottica, managed to sell 7 million units of its Ray-Ban-branded AI glasses, about 6 million more than it sold the year prior -- a smashing success by all metrics. A smashing success that Mark Zuckerberg and company appear determined to follow up on by utterly fumbling the bag.

If you've been paying attention to the news recently, you may have noticed a little story about how Meta's Ray-Ban smart glasses have been sending recorded footage to a third party, where those videos were then reviewed by human eyes. As it turns out, that footage contained some stuff that most people would probably have rather kept private, including videos of people watching porn, using the bathroom, and credit card and bank information.

Meta's right to do this is, of course, buried in its terms of service that most people (myself included) often blindly agree to. But there's a big problem with that part too: some of the videos sent to human reviewers (a contractor called Sama) seem to have been recorded accidentally, meaning even if you did actually read Meta's ToS, you might not be able to avoid having some of your most private moments grace the eyeballs of a stranger.

By most people's metrics, that's, um... bad. And the worst part is, it's not just bad for the people who own the smart glasses or the people who encounter them unknowingly; it's bad for Meta. Smart glasses, as many of us in the millennial-plus age demographic know, have a history encapsulated by one very iconic pejorative: "glasshole." When Google released its now-infamous pair of smart glasses, Google Glass, all the way back in 2013, things did not go as planned.
The rise and fall was rapid, and the entire form factor was almost categorically rejected by consumers who felt wearing a discreet camera on your face was an incursion on everyone's privacy. Bars and restaurants banned the device, critics dubbed anyone who wore a pair a "glasshole," and while the whole experiment wasn't officially put to rest until 2023, Google Glass was pulled from the market in 2015, just two years after its release. The short version is: Google Glass was a disaster, and it made the category of smart glasses almost radioactive for fear of backlash over privacy.

Fast forward to today, and things have changed a bit. Smart glasses, which were once immediately dismissed as a privacy nightmare, have actually proven marketable to some. Part of that is that Meta managed to make a pair that doesn't look out of place on your head, and the other part is that our expectation of digital privacy has eroded over the past decade due to, I don't know, a lot of sh*t.

Either way, Meta had a chance to reset expectations of smart glasses and do things differently. It was never going to solve the privacy issues that are inherent in wearing a discreet camera on your face (issues that I've already unpacked at length on Gizmodo many times), but it could have at least attempted not to amplify them by using your nude videos to train AI. Instead, however, it's careening toward the same fate as Google Glass, and the pushback is palpable.

Just this week, the Electronic Frontier Foundation (EFF) released a statement regarding smart glasses, essentially warning anyone with even the tiniest respect for digital privacy not to buy a pair. And it's not just advocacy groups; there's also an ongoing class action lawsuit against Meta claiming the company misleads its customers with deceptive advertising, giving them the expectation of some degree of privacy.
That's not even counting the outright bans that have been brewing in the background, including one by a popular cruise liner and one by the College Board, which categorizes smart glasses (rightfully, by the way) as a cheating tool. If backlash against the category hasn't reached a boiling point, it's certainly trending in that direction, and Meta, for its part, hasn't even acknowledged the concerns, let alone made any attempt to address them in a meaningful way.

On one hand, it's not surprising. Meta is a company that made its mark by usurping user data, oftentimes to the detriment of the people who made its services valuable in the first place. On the other hand, though, it feels somehow even more disrespectful than usual. I guess Meta is betting that its smart glasses' reputation as a hazard to digital privacy will blow over, and people will go about their business using its products as usual -- it worked largely with Facebook and Instagram; why would smart glasses be any different?

But Ray-Bans aren't social media, and the fact is that (as someone who's used quite a few pairs of smart glasses) they are still something that very few people even own and even fewer people feel like they need. In a consumer sense, smart glasses are vulnerable and easy to rule out. If people decided tomorrow that they didn't want to buy a pair made by Meta or any other brand, the choice would be simple. And the richest part is this: if Meta's gadget does get torpedoed, it'll be by a missile designed and built by the company itself and autographed personally by Mark Zuckerberg.
Meta's Ray-Ban smart glasses are under fire after revelations that third-party contractors review user footage containing sensitive content. Despite selling 7 million units last year, the company faces a class-action lawsuit and warnings from privacy advocates. The controversy echoes the fate of Google Glass and raises urgent questions about AI privacy policies in wearables.
Meta sold over 7 million pairs of its Ray-Ban smart glasses last year, marking a stunning turnaround from the previous year's 1 million units and establishing the AI-enabled wearables as a surprise consumer hit [2]. The collaboration with EssilorLuxottica delivered what many consider the best-quality camera- and audio-enabled smart glasses currently available [1]. Yet this commercial triumph now faces a growing backlash as revelations about how Meta handles user data for AI training threaten to undermine consumer trust in the entire product category.
Source: Gizmodo
The core controversy centers on Meta's use of third-party contractors to review footage captured by smart glasses users. According to Meta's own explanation, when people share content with Meta AI services, the company "sometimes use contractors to review this data for the purpose of improving people's experience" [1]. This practice has resulted in contractors viewing highly sensitive material. Reports indicate that third-party human reviewers at Sama, a contracting firm in Kenya, accessed footage containing people watching pornography, using bathrooms, and displaying credit card and bank information [2].

What makes this particularly troubling is that some videos appear to have been recorded accidentally, meaning users may have inadvertently shared private moments even if they understood Meta's terms of service [2]. Meta states it takes "steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed," but the company has not clarified what specific guardrails exist [1].

The ambiguity around when cloud AI services trigger human annotation has left even tech journalists confused. Meta's position is that unless users "choose to share media they've captured with Meta or others, that media stays on the user's device" [1]. However, the boundary between local processing and cloud-based AI services remains poorly defined. Any interaction with Meta AI -- whether asking questions about surroundings, requesting translations, or using assistive features to read documents -- could potentially expose footage to third-party contractors [1].
Source: CNET
This lack of clarity poses particular risks for people who might use the glasses for accessibility purposes, relying on AI to "see" personal documents or navigate private spaces. Meta has not introduced encrypted, private AI features that would protect such sensitive use cases [1].

The privacy concerns have spawned legal and institutional responses. A class-action lawsuit alleges Meta engages in deceptive advertising by misleading customers about their reasonable expectations of data protection [2]. Meanwhile, the Electronic Frontier Foundation issued a stark warning this week, essentially advising anyone concerned about digital privacy to avoid purchasing smart glasses entirely [2].

Bans are also proliferating. A popular cruise liner and the College Board have prohibited the devices, with the latter categorizing them as cheating tools [2]. These restrictions echo the fate of Google Glass, which was banned from bars and restaurants before being pulled from the market in 2015, just two years after launch [2].
The current controversy resurrects memories of Google Glass's spectacular failure. When Google released its smart glasses in 2013, the product was almost immediately rejected by consumers who viewed the discreet facial camera as a privacy incursion. Critics dubbed wearers "glassholes," and the backlash made the entire category nearly radioactive for years [2].

Meta, led by Mark Zuckerberg, had an opportunity to reset expectations by addressing privacy concerns more transparently. Instead, the company appears to be careening toward a similar fate [2]. The difference is that Meta's long history with privacy scandals, including the Cambridge Analytica incident, means the company starts with less consumer trust than Google did a decade ago [1].

As camera-enabled AI glasses from multiple manufacturers -- including Google's upcoming entry later this year -- prepare to flood the market, Meta's handling of this crisis will likely shape the entire industry's trajectory [1]. The key question is whether Meta will acknowledge these concerns and implement meaningful changes, or whether it will bet that the controversy blows over as it did with Facebook and Instagram [2].

The stakes are different this time. Unlike social media platforms that became entrenched in daily life, smart glasses remain a niche product that few people feel they need. If privacy concerns continue escalating, the window for mainstream adoption could close before it fully opens [2]. For now, anyone considering Meta's Ray-Ban smart glasses should assume that anything processed through cloud AI services could potentially be reviewed by third-party contractors, and decide whether that tradeoff is acceptable.