4 Sources
[1]
The Rise of the Ray-Ban Meta Creep
Joy Hui Lin, a book researcher living in Paris, was walking through the trendy Le Marais district last summer when two male university students chased her down to ask about her outfit. Lin wasn't surprised. It's common for Instagram accounts to do street photography in the area, and she prides herself on her fashion -- that day, she was in "a nice sundress and a very big stylish hat," she tells WIRED. "It was all very cute until the end of the conversation, when one of them was like, 'So, these glasses have been recording this whole time.'" She clocked the device, a black-framed pair of Ray-Ban Meta smart glasses (commonly referred to as Meta Ray-Bans), which can record video from the user's point of view. Lin was taken aback that the young man hadn't asked permission to film her -- especially as he was now inquiring whether he could share the video online. It felt like a "violation," Lin says. The man in the glasses, she adds, "didn't seem to understand that it could be very off-putting to record someone first without asking." This type of encounter is becoming more common, to judge by a proliferation of social media accounts in which content creators use smart glasses to record their public interactions for huge audiences. These conversations aren't always as innocent as an interview about personal style. Instagram Reels and TikTok are infested with footage of users pulling juvenile pranks on retail workers, for example. And many of the top influencers in the Meta Ray-Ban scene, including Sayed Kaghazi (@itspolokid) and Cameron John (@rizzzcam), who have more than 3 million Instagram followers combined, are men prowling sun-soaked beaches and corridors of city nightlife so they can showcase their attempts to pick up women. Their unsolicited, occasionally pestering flirtations with these women in public spaces have helped inspire a contemptuous nickname for the Meta specs: "pervert glasses." (Neither Kaghazi nor John returned a request for comment.)
Like their forerunner, the doomed Google Glass, the Meta Ray-Ban (and Oakley) glasses, which range in price from $299 to $499, are up against a privacy-minded backlash. But the picture today is complicated by a few factors. For a start, Meta's glasses automatically send footage to the company, which has overseas contract workers review it, as an investigation by Swedish newspapers found. The videos described in that February report included sensitive content people may not have realized they were recording and uploading, such as nudity, sex, and bathroom activities. This has already prompted an ongoing consumer protection lawsuit. On top of that, Meta's glasses are equipped with potentially invasive AI services -- already, the Meta app that runs on the device can collect your videos for further AI training -- that it plans to continue expanding. They are far more popular than any other smart glasses to date, with Meta selling 8 million pairs in 2025 alone, and are rather inconspicuous compared to their nakedly futuristic predecessors. Lin told the man who filmed her that she didn't want the footage to appear on his Instagram account, and later confirmed he hadn't uploaded it. But the "unsettling" experience, she says, led her to reflect on how most people don't necessarily recognize that a stranger talking to them face-to-face could be quietly capturing their likeness. It has made her a little warier of anyone in glasses who approaches her on the street. Lin is hopeful, however, that more nations will begin to follow Denmark's lead as it pioneers individual copyright protections over one's own likeness, a move that guards against unwanted AI deepfakes and possibly invasive recording, including with smart glasses.
[2]
Meta Explained Its Smart Glasses AI Privacy Policies to Me. I'm Still Worried
After nearly 20 years writing about tech, and over a decade reviewing wearable tech, VR, and AR products and apps, I wear Meta's Ray-Bans off and on when I travel to snap photos, take phone calls, and listen to music. The technology is fascinating, fun, and convenient. I also knew that Meta's privacy policies might be a concern, but now I'm more worried about it than ever before. My concerns ramped up after a number of friends and colleagues shared a report about Meta's third-party contractors in Kenya being able to view sensitive information, like photos of banking records, nudity, and sexual encounters, that had been recorded on Meta glasses (which has resulted in a class action lawsuit). What boundaries had Meta set up to protect people's privacy? I pored over Meta's terms of service online and in the Meta AI app, but that was no help. I wanted some answers. So I contacted Meta's comms team to get clarity. But even after getting the official answer from Meta about where the lines are drawn, I'm still frustrated and uncertain. While many people are rightly worried about someone secretly recording them with smart glasses, there's also another wrinkle: When are these glasses potentially sharing what you've been recording with others? Here's a short answer: Do Meta's glasses have third-party contractors potentially looking over your data? Yes, sometimes -- if you're using AI services. If you're not using those AI services, then according to Meta, you should be OK. But even then, I don't know where that "AI services" wall gets clearly drawn. And that's one of my biggest concerns. Meta has had a long history of problems with both privacy and trust, stretching back to the Cambridge Analytica scandal in the last decade. Those issues haven't come up with Meta's VR headsets, which don't have many data-collecting AI services, but the company's smart glasses do. And those services will keep growing and becoming more capable over the next few years.
Meta's popular Ray-Ban glasses -- more than 7 million pairs were sold last year -- are the frontrunners in a whole wave of camera-enabled AI glasses and wearables coming from a number of companies, with Google entering the mix later this year. If you're interested in Meta's glasses, which, as a technical achievement, are the best-quality camera and audio-enabled smart glasses at the moment, you need to keep these concerns in mind. And as smart glasses pivot to always-on AI-enabled devices, we're only going to run into more questions about how comfortable you might feel leaning on their services -- and what all the cloud-based AI tech companies need to do to make these policies clearer. Below, I'm going to share Meta's responses at length so you can understand my reasoning -- and also make your own assessment about the risks. If you're using AI -- for instance, to analyze something you see or to get a translation -- then third-party contractors might be looking at what you're recording. This is what the company told me: "Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they've captured with Meta or others, that media stays on the user's device." But then there's this: "When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed." The assumption you can make from this is that any time you're using Meta's AI services, Meta may very well be using third-party contractors to review the information. While Meta promises that the information is properly filtered to remove sensitive data or details, that worrisome news report said contractors in Kenya were annotating footage taken from glasses that had sensitive images that were clearly visible. 
That has me especially concerned about what happens when people use Meta AI for assistive purposes: namely, as a way to "see" when you can't with your own eyes. Would looking at personal documents and reading them back be a risky thing to do? Since Meta hasn't properly introduced any sort of encrypted, private AI features on its glasses, it could be. Meta does say this about privacy protections: "We have strict policies and guardrails in place that intentionally limit what information contractors see." But again, I don't actually know what those strict policies or guardrails are. "We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed," Meta added. This doesn't help clarify any of the specifics. I'm going on trust here, which isn't ideal at all. I have to assume that anything done via cloud AI services, like Meta's using, could be seen to some degree by third-party contractors. And you should too. Meta's glasses don't use AI all the time, and neither do I. In fact, I'm mostly using Meta's glasses to record photos and video, listen to music, and make phone calls. I don't use the AI much, in part because Meta's AI has very little interaction with or control over my other personal data or even my iPhone. For non-AI photo and video recording, things should be safe... I think. I asked members of the comms team whether photo or video recordings that I made with the glasses, and that weren't involved in AI-based invocations, could be subject to third-party contractor viewing. They said this: "To be clear, the photos and videos that users take with their AI glasses that are simply stored on their phone's camera roll are not used by Meta to develop and improve AI. If you just record a video or take a photo using the glasses' camera button, that media stays on your phone. Unless you choose to share media you've captured with Meta or others, that media stays on your device." That sounded promising. 
But with Meta's glasses settings, storage becomes a little cloudy... literally. In the Meta AI app Glasses Privacy settings, a Cloud Media toggle claims to "allow your photos and videos to be sent to Meta's cloud for processing and temporary storage." Would cloud media mean my personal photos and videos were open to possible third-party contractor annotation? According to Meta, no. According to Meta, any commands using AI to send photos or using Autocapture modes that get enabled by toggling on Cloud Media will be safe too. In the company's words: "Certain features, like sharing from your glasses using your voice ('Hey Meta, send a photo'), seamless auto-importing of media, or Autocapture, where the camera automatically takes photos or videos when you start the feature (useful for moments where you may want to capture content without manually triggering the camera via the button or voice), may require sending your photos and videos to Meta's cloud for processing and temporary storage. If you enroll in cloud media services, the photos and videos sent from the frames or auto-imported to your phone are not subject to human annotation. Enabling cloud media services is opt-in and not on by default." Meta doesn't clearly define what exactly "Cloud Media" is, other than a temporary storage spot for your photos and videos so they can be processed with voice commands. And what worries me is how a wall gets drawn around "private" versus "AI-connected" media. It makes me want to toggle Cloud Media off, which would mean the photos and videos are stored just in my phone's photo library. I still like the camera and audio features of smart glasses and am intrigued by the AI features coming. But I'm also very concerned by the uncertainty about where the line is drawn between what gets annotated by a third party, potentially, and what stays private. Meta's using those third parties to help train AI, or to possibly moderate content. 
It's a reminder of how cloud-based and out of our control so many AI services are. I get even more worried thinking about reports of Meta wanting to add facial recognition and more to its smart glasses. Meanwhile, more AI glasses are coming, and wearable camera-equipped AI devices, too. Google is up next. And all of these companies need to make it much clearer how they're using the data from these devices, how they're protecting our privacy, and how we users can manage it -- if at all. It's not easy at all to understand how Meta's glasses handle AI data, or where it's being sent. I'm hoping this story helps you better understand where the lines might be. Even so, I have to admit I feel a lot less likely to use Meta's glasses for anything personal or data-sensitive. Vacation glasses? A tool for quick social footage for work that I'm broadcasting anyway? Experiments with AI? I think so. But if Meta's aiming to be a deeply assistive tool for us via AI wearables, and doesn't want everyone calling them "pervert glasses," which people already are, it needs to do better, fast.
[3]
Concerns Over Meta's Smart Glasses Have Reached the U.S. Senate
Turns out you're not the only one who thinks adding facial recognition to smart glasses is a bad idea. Consternation about smart glasses is ramping up, and it looks like those fears are officially hitting the national stage. This week, U.S. Sens. Ron Wyden and Jeff Merkley (both D-Ore.) officially inquired about Meta's plans to add facial recognition to its Ray-Ban smart glasses, painting the idea as an existential threat to privacy. "Despite Meta's desire to minimize public attention on this product, the deployment of smart glasses equipped with facial recognition technology threatens Americans' privacy rights and civil liberties, and therefore warrants close scrutiny," the senators wrote in a letter to Meta CEO Mark Zuckerberg. "The widespread deployment of facial-recognition-enabled smart glasses also risks accelerating the normalization of mass surveillance in the United States." As Scooby-Doo would say: ruh-roh! In case you missed it, the New York Times reported in February that, according to memos seen by the publication, Mark Zuckerberg and company are working on plans to introduce facial recognition to the company's Ray-Ban smart glasses. Not only that, but Meta is reportedly planning to do so during -- and this is apparently the company's own words -- "a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns." Feel free to go rinse your eyeballs if they're feeling a little unclean; I'll wait. While plans for facial recognition in its smart glasses have not been acknowledged by Meta, let alone made official, the Democratic senators, to their credit, appear to be getting out ahead of any potential privacy bombshells -- and for good reason. Meta already has a checkered history with facial recognition. In 2021, the company shut down a tool that scanned the face of every single person on Facebook, deleting more than a billion face templates.
What's worse is that two years before nixing that tool, Meta agreed, as part of a $5 billion settlement with the Federal Trade Commission, to obtain "affirmative express consent" from users before using facial recognition to scan their faces. So, just to lay this all out plainly: the company that has already been reprimanded and regulated for its use of facial recognition on its platform is now (reportedly) considering adding facial recognition to hardware that is arguably even more problematic than the previous application. Welcome to 2026. As the senators lay out in their letter, the potential risks of adding facial recognition to smart glasses are numerous, and because of the implications for privacy, they have a few questions, including the following: Will people be able to request deletion of their biometric data? Will the data collected be used to train AI? Has Meta actually thought about the privacy implications? Will Meta be making a database of people's faces? Has the company ever heard of civil liberties? Would it share data with law enforcement? Lastly, what the actual f*ck are you thinking, Mark? Okay, that last one is made up, but it's pretty much implied by all the other very real questions. There was never really a good time for Meta to be considering something as ethically bankrupt as adding facial recognition to smart glasses -- a device that already has a ton of inherent privacy implications -- but now is a particularly bad one. Last month, a report revealed that Meta has been sending sensitive videos captured by its smart glasses to human reviewers tasked with helping to train AI models. Those videos, according to the subcontractors, show people naked, going to the bathroom, and having sex, and some of them were recorded unintentionally.
Partially as a result of that icky reality, smart glasses (Ray-Ban Meta AI glasses in particular) are now very much on the radar of state legislatures, privacy watchdogs, and pretty much anyone else who doesn't like the idea of being recorded discreetly. Clearly, though, Meta could still make things worse, and while it's done a bang-up job of pretty much ignoring all the rotten revelations as of late, the U.S. Senate might not be as easy to ignore. Hard to say, but it looks like we might get another court appearance from Mark Zuckerberg, and maybe this time he won't outfit his entourage in smart glasses.
[4]
Meta Has Smart Glasses Spiraling Towards Glasshole 2.0
Maybe using people's naked videos to train AI wasn't such a good idea after all. If there was one surprise hit last year in the consumer tech world, it was smart glasses, and Meta was one of the biggest winners. Meta, with the help of EssilorLuxottica, managed to sell 7 million units of its Ray-Ban-branded AI glasses, about 6 million more than it sold the year prior -- a smashing success by all metrics. A smashing success that Mark Zuckerberg and company appear determined to follow up on by utterly fumbling the bag. If you've been paying attention to the news recently, you may have noticed a little story about how Meta's Ray-Ban smart glasses have been sending recorded footage to a third party, where those videos were then reviewed by human eyes. As it turns out, that footage contained some stuff that most people would probably have rather kept private, including videos of people watching porn, using the bathroom, and credit card and bank information. Meta's right to do this is, of course, buried in its terms of service that most people (myself included) often blindly agree to. But there's a big problem with that part too: some of the videos sent to human reviewers (a contractor called Sama) seem to have been recorded accidentally, meaning even if you did actually read Meta's ToS, you might not be able to avoid having some of your most private moments grace the eyeballs of a stranger. By most people's metrics, that's, um... bad. And the worst part is, it's not just bad for the people who own the smart glasses or the people who encounter them unknowingly; it's bad for Meta. Smart glasses, as many of us in the millennial+ age demographic know, have a history encapsulated by one very iconic pejorative: "glasshole." When Google released its now-infamous pair of smart glasses, Google Glass, all the way back in 2013, things did not go as planned.
The rise and fall was rapid, and the entire form factor was almost categorically rejected by consumers who felt wearing a discreet camera on your face was an incursion on everyone's privacy. Bars and restaurants banned the device, critics dubbed anyone who wore a pair a "glasshole," and while the whole experiment wasn't officially put to rest until 2023, Google Glass was pulled from the market in 2015, just two years after its release. The short version is: Google Glass was a disaster, and it made the category of smart glasses almost radioactive for fear of backlash over privacy. Fast forward to today, and things have changed a bit. Smart glasses, which were once immediately dismissed as a privacy nightmare, have actually proven marketable to some. A part of that is that Meta managed to make a pair that doesn't look out of place on your head, and the other part is that our expectation of digital privacy has eroded over the past decade due to, I don't know, a lot of sh*t. Either way, Meta had a chance to reset expectations of smart glasses and do things differently. It was never going to solve the privacy issues that are inherent with wearing a discreet camera on your face (issues that I've already unpacked at length on Gizmodo many times), but it could have at least attempted not to amplify them by using your nude videos to train AI. Instead, however, it's careening toward the same fate as Google Glass, and the pushback is palpable. Just this week, the Electronic Frontier Foundation (EFF) released a statement regarding smart glasses, essentially warning anyone with even the tiniest respect for digital privacy not to buy a pair. And it's not just advocacy groups; there's also an ongoing class action lawsuit against Meta claiming the company misleads its customers with deceptive advertising, giving them the expectation of privacy to some degree. 
That's not even counting the outright bans that have been brewing in the background, including one by a popular cruise liner and one by the College Board, which categorizes smart glasses (rightfully, by the way) as a cheating tool. If backlash against the category hasn't reached a boiling point, it's certainly trending in that direction, and Meta, for its part, hasn't even acknowledged the concerns, let alone made any attempt to address them in a meaningful way. On one hand, it's not surprising. Meta is a company that made its mark by usurping user data, oftentimes to the detriment of the people who made its services valuable in the first place. On the other hand, though, it feels somehow even more disrespectful than usual. I guess Meta is betting that its smart glasses' reputation as a hazard to digital privacy will blow over, and people will go about their business using its products as usual -- it worked largely with Facebook and Instagram, so why would smart glasses be any different? But Ray-Bans aren't social media, and the fact is that, as someone who's used quite a few pairs of smart glasses, I can say they are still something very few people even own and even fewer people feel like they need. In a consumer sense, smart glasses are vulnerable and easy to rule out. If people decided tomorrow that they didn't want to buy a pair made by Meta or any other brand, the choice would be simple. And the richest part is this: if Meta's gadget does get torpedoed, it'll be by a missile designed and built by the company itself and autographed personally by Mark Zuckerberg.
Meta's Ray-Ban smart glasses are under fire as reports reveal third-party contractors view sensitive user footage for AI training purposes. The controversy has attracted U.S. Senate attention, sparked a class-action lawsuit, and raised fears of mass surveillance as the company reportedly plans to add facial recognition technology.
Meta's Ray-Ban smart glasses, which sold 7 million units in 2025 alone, are facing mounting privacy concerns after revelations that third-party human reviewers access sensitive footage captured by the devices [2]. The controversy centers on how Meta handles user data, particularly when AI-enabled wearables record moments users may not have intended to share. Joy Hui Lin, a Paris-based book researcher, experienced this firsthand when university students using Ray-Ban smart glasses recorded her without permission during a street fashion interview, leaving her feeling violated when they revealed the recording only after their conversation [1].
Reports from Swedish newspapers uncovered that Meta automatically sends footage to overseas contract workers who review it for AI training purposes [1]. These videos included sensitive content such as nudity, sexual encounters, bathroom activities, and banking records, some recorded accidentally [4]. Meta confirmed that when users engage with Meta AI services, third-party contractors may review shared content to improve the experience, though the company claims it filters data to protect privacy [2]. However, the specifics of these safeguards remain unclear, leaving users uncertain about data protection.

The privacy backlash has reached the U.S. Senate, where Sens. Ron Wyden and Jeff Merkley sent a letter to Meta CEO Mark Zuckerberg expressing concerns about plans to add facial recognition technology to the glasses [3]. The senators warned that "the widespread deployment of facial-recognition-enabled smart glasses also risks accelerating the normalization of mass surveillance in the United States" [3]. According to New York Times reports, Meta is planning to introduce facial recognition during what the company described as "a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns" [3].
The Electronic Frontier Foundation released a statement warning consumers with any respect for digital privacy not to purchase the glasses [4]. This echoes concerns about civil liberties and consumer protection that have plagued wearable technology since Google Glass earned users the derogatory nickname "glasshole" before being pulled from the market in 2015 [4]. Meta's own history with facial recognition includes the 2021 shutdown of a Facebook tool that scanned faces, deleting over a billion face templates, and a $5 billion Federal Trade Commission settlement requiring affirmative consent before using facial recognition [3].

A class-action lawsuit is underway against Meta, claiming the company misleads customers with deceptive advertising that creates false expectations of privacy [4]. The legal action stems from reports that Kenyan contractors working for Sama could view sensitive information captured on Meta glasses [2]. When users engage cloud services through Meta AI, such as requesting translations or analyzing scenes, their footage may be reviewed by contractors, though Meta has not clarified the specific guardrails protecting user data [2].

Social media has amplified concerns as influencers like Sayed Kaghazi and Cameron John, who have over 3 million combined Instagram followers, use the glasses to record public interactions, earning the devices the nickname "pervert glasses" [1]. Institutions are responding with bans: a popular cruise liner has prohibited the glasses, and the College Board has barred them, categorizing them as a cheating tool [4]. Denmark is pioneering individual copyright protections over personal likeness to guard against unwanted deepfakes and invasive recording [1].
The controversy highlights broader questions about how tech companies handle user data from AI-enabled devices. Meta's glasses, priced between $299 and $499, represent the frontrunners in a wave of camera-enabled AI wearables, with Google entering the market later this year [2]. The company's approach to human annotation for AI training raises concerns about transparency in cloud services and whether encrypted, private AI features should be standard. Senators Wyden and Merkley have asked Meta whether users can request deletion of biometric data, whether collected data will train AI models, and whether the company plans to create a facial database or share information with law enforcement [3]. Meta has not acknowledged these concerns or made meaningful attempts to address the privacy backlash, betting that public outcry will subside as it did with Facebook and Instagram [4].

Summarized by Navi