24 Sources
[1]
Workers report watching Ray-Ban Meta-shot footage of people using the bathroom
Meta's approach to user privacy is under renewed scrutiny following a Swedish report that employees of a Meta subcontractor have watched footage captured by Ray-Ban Meta smart glasses showing sensitive user content. The workers reportedly work for Kenya-headquartered Sama and provide data annotation for Ray-Ban Metas.

The February report, a collaboration between Swedish newspapers Svenska Dagbladet and Göteborgs-Posten and Kenya-based freelance journalist Naipanoi Lepapa, is, per a machine translation, based on interviews with more than 30 employees at various levels of Sama, including several people who work with video, image, and speech annotation for Meta's AI systems. Some of the people interviewed have worked on projects other than Meta's smart glasses. The report's authors said they did not gain access to the materials that Sama workers handle or the area where workers perform data annotation. The report is also based on interviews with former US Meta employees who have reportedly witnessed live data annotation for several Meta projects.

The report pointed to, per the translation, a "stream of privacy-sensitive data that is fed straight into the tech giant's systems," one that makes Sama workers uncomfortable. The authors said that several people interviewed for the report have seen footage shot with Ray-Ban Meta smart glasses that shows people having sex and using the bathroom. "I saw a video where a man puts the glasses on the bedside table and leaves the room. Shortly afterwards, his wife comes in and changes her clothes," an anonymous Sama employee reportedly said, per the machine translation. Another anonymous employee said that they have seen users' partners come out of the bathroom naked. "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work," an anonymous Sama employee reportedly said.
Meta confirms use of data annotators

In statements shared with the BBC on Wednesday, Meta confirmed that it "sometimes" shares content that users share with the Meta AI generative AI chatbot with contractors to review with "the purpose of improving people's experience, as many other companies do." "This data is first filtered to protect people's privacy," the statement said, pointing to, as an example, blurring out faces in images.

Meta's privacy policy for wearables says that photos and videos taken with its smart glasses are sent to Meta "when you turn on cloud processing on your AI Glasses, interact with the Meta AI service on your AI Glasses, or upload your media to certain services provided by Meta (i.e., Facebook or Instagram). You can change your choices about cloud processing of your Media at any time in Settings." The policy also says that video and audio from livestreams recorded with Ray-Ban Metas are sent to Meta, as are text transcripts and voice recordings created by Meta's chatbot. "We use machine learning and trained reviewers to process this data to improve, troubleshoot, and train our products. We share that information with third-party vendors and service providers to improve our products. You can access and delete recordings and related transcripts in the Meta AI App," the policy says.

Meta's broader privacy policy for the Meta AI chatbot adds: "In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human)." That policy also warns users against sharing "information that you don't want the AIs to use and retain, such as information about sensitive topics." "When information is shared with AIs, the AIs will sometimes retain and use that information," the Meta AI privacy policy says.
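Meta describes blurring out faces as an example of its privacy filtering. As a rough illustration of what such a filter does mechanically (this is a generic sketch, not Meta's implementation; the face box is assumed to come from an upstream face detector, which is not shown), here is a minimal pure-Python pixelation pass over a grayscale image:

```python
def pixelate_region(img, top, left, height, width, block=4):
    """Pixelate a rectangular region of a grayscale image (list of rows).

    Each block x block tile inside the region is replaced by its mean
    value, destroying fine detail (e.g. facial features) while leaving
    the rest of the frame untouched. Returns a new image; input is
    not modified.
    """
    out = [row[:] for row in img]  # copy each row so the input survives
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            # Clamp the tile to the region and image bounds.
            ys = range(by, min(by + block, top + height, len(img)))
            xs = range(bx, min(bx + block, left + width, len(img[0])))
            vals = [img[y][x] for y in ys for x in xs]
            mean = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    out[y][x] = mean
    return out

# Tiny 8x8 "image" with high-contrast detail in the top-left corner.
img = [[(x * 37 + y * 11) % 256 for x in range(8)] for y in range(8)]
# Pixelate a hypothetical 4x4 "face" box in the top-left corner.
blurred = pixelate_region(img, top=0, left=0, height=4, width=4, block=4)
```

In a real pipeline the box coordinates would come from a face detector, and the blur would run before any footage reached human reviewers; the sources quoted above say this step does not always work as intended.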
Notably, in August, Meta turned "Meta AI with camera" on by default until a user turns off support for the "Hey Meta" voice command, per an email sent to users at the time. Meta spokesperson Albert Aydin told The Verge at the time that "photos and videos captured on Ray-Ban Meta are on your phone's camera roll and not used by Meta for training." However, some Ray-Ban Meta users may not have read or understood the numerous privacy policies associated with Meta's smart glasses.

Sama employees suggested that Ray-Ban Meta owners may be unaware that the devices are sometimes recording. Employees reportedly pointed to users seemingly inadvertently recording their bank card or porn that they're watching. Meta's smart glasses flash a red light when they are recording video or taking a photo, but there has been criticism that people may not notice the light or may misinterpret its meaning. "We see everything, from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording," an anonymous employee was quoted as saying.

When reached for comment by Ars Technica, a Sama representative shared a statement saying that Sama doesn't "comment on specific client relationships or projects" but is GDPR- and CCPA-compliant and uses "rigorously audited policies and procedures designed to protect all customer information, including personally identifiable information." Sama's statement added:

This work is conducted in secure, access-controlled facilities. Personal devices are not permitted on production floors, and all team members undergo background checks and receive ongoing training in data protection, confidentiality, and responsible AI practices. Our teams receive living wages and full benefits, and have access to comprehensive wellness resources and on-site support.
Meta sued

The Swedish report has reignited concerns about the privacy of Meta's smart glasses, including from the Information Commissioner's Office, a UK data watchdog that has written to Meta about the report. The debate also comes as Meta is reportedly planning to add facial recognition to its Ray-Ban and Oakley-branded smart glasses "as soon as this year," per a February report from The New York Times citing anonymous people "involved with the plans."

The claims have also led to a proposed class-action lawsuit [PDF] filed yesterday against Meta and Luxottica of America, a subsidiary of Ray-Ban parent company EssilorLuxottica. The lawsuit challenges Meta's slogan for the glasses, "designed for privacy, controlled by you," saying:

No reasonable consumer would understand "designed for privacy, controlled by you" and similar promises like "built for your privacy" to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas. Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false.

The lawsuit alleges that Meta has broken state consumer protection laws and seeks damages, punitive penalties, and an injunction requiring Meta to change business practices "to prevent or mitigate the risk of the consumer deception and violations of law." Ars Technica reached out to Meta for comment but didn't hear back before publication. Meta has declined to comment on the lawsuit to other outlets.
[2]
Meta sued over AI smart glasses' privacy concerns, after workers reviewed nudity, sex, and other footage | TechCrunch
Meta is facing a new lawsuit over its AI smart glasses and their lack of privacy, after an investigation by Swedish newspapers found that workers at a Kenya-based subcontractor are reviewing footage from customers' glasses, which included sensitive content such as nudity, sex, and toilet use. Meta claimed it was blurring faces in images, but sources disputed that this blurring consistently worked, reports noted. The news prompted the U.K. regulator, the Information Commissioner's Office, to investigate the matter. Now, the tech giant is facing a lawsuit in the United States, as well. In the newly filed complaint, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by the public interest-focused Clarkson Law Firm, allege that Meta violated privacy laws and engaged in false advertising. The complaint alleges that the Meta AI smart glasses are advertised using promises like "designed for privacy, controlled by you" and "built for your privacy," promises that would not lead customers to suspect that their glasses' footage, including intimate moments, was being watched by overseas workers. The plaintiffs believed Meta's marketing and said they saw no disclaimer or information that contradicted the advertised privacy protections. The suit charges Meta and its glasses manufacturing partner Luxottica of America with conduct that violates consumer protection laws. Meta has not yet responded to TechCrunch's request for comment. Clarkson Law Firm, which over the years has filed other major lawsuits against tech giants, including Apple, Google, and OpenAI, points to the scale of the issues at hand. In 2025, over seven million people bought Meta's smart glasses, which means their footage is fed into a data pipeline for review, and they can't opt out.
Meta told the BBC that when people share content with Meta AI, it uses contractors to review the information to improve people's experience with the glasses, which is explained in its privacy policy, and pointed to Supplemental Meta Platforms Terms of Service, without specifying where this was noted. The news outlet, however, found that a mention of human review could be found in Meta's U.K. AI terms of service. A version of that policy that applies to the U.S. states: "In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human)." The complaint mainly points to how the glasses were marketed, showing examples of ads that touted the privacy benefits, describing their privacy settings and "added layer of security." "You're in control of your data and content," one ad read, explaining that smart glasses owners got to choose what content was shared with others. The rise of smart glasses and other "luxury surveillance" tech, like always-listening AI pendants, has prompted a broad backlash. One developer published an app capable of detecting when smart glasses are nearby.
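The detector app mentioned above reportedly works by scanning for cues associated with wearable recording devices. As a purely illustrative sketch (not the actual app, whose method is not detailed here), one common approach is to match Bluetooth advertisement names against a watchlist; the signature strings and device names below are hypothetical placeholders:

```python
# Hypothetical watchlist of advertisement-name substrings. Real devices'
# broadcast names would have to be determined empirically with a scanner.
GLASSES_SIGNATURES = ("glasses", "ray-ban", "oakley")

def flag_wearables(advertisements):
    """Return advertisement names that look like smart-glasses broadcasts.

    `advertisements` is an iterable of device-name strings as a BLE
    scanner might collect them; matching is case-insensitive substring.
    """
    return [name for name in advertisements
            if any(sig in name.lower() for sig in GLASSES_SIGNATURES)]

# Example scan results (made-up names for illustration).
nearby = ["JBL Flip 6", "Ray-Ban Meta 1A2B", "Galaxy Buds", "Oakley HSTN 3C4D"]
suspects = flag_wearables(nearby)
```

Name matching alone is easy to defeat (devices can randomize or hide their names), which is why such tools are best understood as a heads-up rather than a guarantee.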
[3]
Can Meta see your private life through its Ray-Ban smart glasses? What to know
Many Meta Ray-Ban users wear their smart glasses everywhere. They enjoy having AI at their beck and call, getting notifications without pulling out a phone, and recording whatever they see. However, they might not imagine someone else watching the videos captured by these smart glasses, including videos they've accidentally recorded; videos, for example, that contain private bank information or someone undressing. Meta contractors in Nairobi, Kenya, had access to sensitive and private videos that were recorded with Meta smart glasses around the world, according to a recent investigation. The videos included footage of people going to the bathroom, undressing, viewing sensitive financial information, and engaging in intimate moments. Some of the workers said they believed the majority of the sensitive videos were made when the wearers didn't know the smart glasses were recording. The videos were reportedly viewed by contractors from Sama, a company Meta hired for AI development, as part of their assigned work. Human reviewers watch Meta smart glasses videos and label the objects in them. The data is later used to train Meta's AI to recognize those objects. The investigation, conducted by Swedish newspapers Svenska Dagbladet (SvD) and Göteborgs-Posten (GP), concluded that the workers were unaware they'd see sensitive videos among the captured data. Like the defunct Google Glass, Meta Ray-Ban smart glasses are notorious for their privacy red flags. Unlike Google, Meta sold seven million units of its Ray-Ban smart glasses just last year, double the amount of the year before. The device's popularity continues to grow at a time when many people have adjusted to a world of ever-present devices.
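The labeling work described above, watching clips and tagging the objects in them so a vision model can learn to recognize them, typically produces structured records along these lines. This is a minimal generic sketch; the field names are illustrative, not Meta's or Sama's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    label: str      # e.g. "person", "coffee_mug"
    x: int          # top-left corner, in pixels
    y: int
    width: int
    height: int

@dataclass
class FrameAnnotation:
    """One annotator's labels for a single video frame."""
    clip_id: str
    frame_index: int
    boxes: list = field(default_factory=list)

    def add_box(self, label, x, y, width, height):
        self.boxes.append(BoundingBox(label, x, y, width, height))

# An annotator tags two objects in frame 120 of a clip.
ann = FrameAnnotation(clip_id="clip-0001", frame_index=120)
ann.add_box("person", 40, 10, 200, 380)
ann.add_box("coffee_mug", 300, 250, 60, 80)
# Records like this become supervised training data: the frame's pixels
# are the model input, the labeled boxes are the targets.
```

The privacy tension the article describes follows directly from this workflow: the model needs real frames with human-written labels, so whatever the camera captured passes in front of a human annotator.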
The entire category of AI smart glasses fuels the debate about normalizing constant surveillance in everyday life, from unintentional recording of bystanders to AI analyzing faces and surroundings. This poses a risk to privacy and personal security, especially in cases where people may be victims of abuse. Some private companies are already banning the use of smart glasses at work to prevent covert recording, and European lawmakers and agencies, such as the UK's Information Commissioner's Office (ICO), are questioning Meta about whether these glasses violate privacy legislation. Meta did not immediately respond to a request for comment. While smart glasses aren't inherently negative, it's important to keep in mind the very real possibility that people can use them, intentionally or accidentally, to record others without their consent. In a case from October, a man used Meta smart glasses to record interactions with women at the University of San Francisco, highlighting privacy concerns. Recording videos with smart glasses is convenient, fun, and useful for content creators. However, do smart glasses belong in certain settings -- such as healthcare -- where stricter privacy concerns are in play? There's also the matter of wiretapping laws, especially in states where all parties must consent to audio recording. These open questions illustrate the challenge of regulating a burgeoning technology in real time. In addition, these videos may be viewed by others anywhere in the world, if Meta's contracting practices are any indication. The problem isn't only Meta -- but Meta is the biggest name in this product category. Using recordings to train AI systems isn't all that common, said Melissa Ruzzi, director of AI at AppOmni, an AI security company. She said companies typically disclose this to users in terms of service.
"The problem is that users in general do not read the user privacy and data usage settings, and just click accept," Ruzzi told ZDNET. According to Meta's terms of service, the company reserves the right to share user data from Meta AI and wearable devices, such as the Meta Ray-Ban smart glasses, with moderators for review. "There are always risks regarding privacy, identity theft, and targeted phishing when data gets used because the AI may expose it again," Ruzzi added. "This is why it is so important to read and understand the terms and conditions before clicking accept when you start using AI."
[4]
Meta's AI glasses reportedly send sensitive footage to human reviewers in Kenya
Meta's AI-powered smart glasses could be sending sensitive footage to human reviewers in Nairobi, Kenya, according to an investigation by the Swedish outlets Svenska Dagbladet and Göteborgs-Posten. The report, which was published last week, claims Meta contractors in Kenya have seen videos captured with the smart glasses that show "bathroom visits, sex and other intimate moments." So far, at least one proposed class action lawsuit accusing Meta of violating false advertising and privacy laws has emerged in response to Svenska Dagbladet's reporting, citing the company's claim that its smart glasses are designed for privacy: By affirmatively claiming that the Glasses were designed to protect privacy, Meta assumed a duty to disclose material facts that would inform a reasonable consumer's decision to purchase the product. Instead, Meta hid the alarming reality: that use of the AI features results in a stranger halfway around the world watching the most private moments of a person's life. The Nairobi-based contractors interviewed by Svenska Dagbladet are AI annotators, meaning they label images, text, or audio, with the goal of helping AI systems make sense of the data they're training on. "We see everything -- from living rooms to naked bodies," one worker says, according to Svenska Dagbladet. "Meta has that type of content in its databases." A former Meta employee reportedly tells Svenska Dagbladet that faces in annotation data are blurred automatically, though workers in Kenya say this "does not always work as intended," and some faces are still visible. Another person reportedly tells the outlet that a wearer's bank cards are sometimes seen in the footage they review as well. Meta's Ray-Ban and Oakley smart glasses come with a built-in AI assistant capable of answering questions about what a user can see. The glasses have soared in popularity in recent years, despite growing concerns over privacy and surveillance. 
EssilorLuxottica, the eyewear giant that Meta works with to develop the camera-equipped glasses, sold over 7 million of the AI-powered glasses in 2025 -- more than tripling its sales in 2023 and 2024 combined. Last year, Meta made some changes to its privacy policy that keep Meta AI with camera use enabled on your glasses "unless you turn off 'Hey Meta.'" It also stopped allowing wearers to opt out of storing their voice recordings in the cloud. As reported by Svenska Dagbladet, the Kenya-based AI reviewers work with transcriptions as well, ensuring Meta AI provides the correct answer to the questions users ask aloud. In a statement to The Verge, Meta spokesperson Tracy Clayton says media captured by its smart glasses "stays on the user's device" unless they choose to share it with other people or Meta. "When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do," Clayton says. "We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed." The UK's Information Commissioner's Office has questioned Meta about the claims in Svenska Dagbladet's reporting. Privacy advocates have raised concerns about Meta's alleged goals to build facial recognition into its smart glasses as well, with the Electronic Privacy Information Center calling it a "grave risk to privacy, safety, and civil liberties."
[5]
Meta smart glasses face UK privacy probe
Contractors tasked with improving AI reportedly had access to intimate footage captured through wearables Britain's privacy watchdog is asking questions about Meta's AI-powered smart glasses after reports that human contractors reviewing recordings from the devices were exposed to extremely private moments captured by unsuspecting users. The Information Commissioner's Office (ICO) confirmed it is contacting Meta following an investigation by Swedish outlets Svenska Dagbladet and Göteborgs-Posten that claims outsourced workers tasked with improving Meta's AI systems routinely review footage showing everything from everyday conversations to far more intimate scenes. The fuss revolves around Meta's Ray-Ban smart glasses, ordinary-looking frames stuffed with cameras, microphones, and an AI assistant that can take photos, shoot video, and respond to voice commands. Meta's terms note that some interactions may be reviewed by humans to improve the system, but according to the Swedish investigation, that review queue can occasionally include moments wearers likely didn't expect strangers to watch. According to interviews with dozens of workers employed by a Meta subcontractor in Nairobi, Kenya, their job involves labeling and reviewing video, audio, and transcripts collected from the glasses so the company's AI models can better interpret real-world scenes and conversations. Some of the workers interviewed claim the review queue isn't just harmless AI prompts. Some clips show people getting dressed or using the toilet, while others capture private conversations about relationships, politics, or alleged wrongdoing. Others interviewed by the Swedish outlets claimed the clips occasionally include things like bank cards, personal paperwork, or other identifying details inadvertently caught on camera. As one employee put it: "We see everything." The investigation raises questions about cross-border data flows. 
Under the EU's GDPR, companies transferring personal data to contractors outside the bloc must ensure the information is protected through approved safeguards. This has, unsurprisingly, caught the attention of the UK's ICO. In a statement to the BBC, the watchdog said it was writing to Meta after the claims surfaced, describing the allegations as "concerning." The regulator added that organizations deploying products that capture personal data must be transparent about what information is collected, how it is used, and who may have access to it. "The claims in this article are concerning," the ICO said. "We will be writing to Meta to request information on how it is meeting its obligations under UK data protection law." Meta, for its part, told the Beeb that recordings are only used to improve its AI systems in certain circumstances, such as when users choose to share interactions to help train the technology. The company said users can manage their data through device settings and delete recordings at any time. Neither the ICO nor Meta responded to The Register's questions. The report is yet another reminder that "AI-powered" often still means humans somewhere in the loop - sometimes watching more than users bargained for. ®
[6]
Meta's AI display glasses reportedly share intimate videos with human moderators
Users of Meta's AI smart glasses in Europe may be unknowingly sharing intimate video and sensitive financial information with moderators outside of the bloc, according to a report from Sweden's Svenska Dagbladet released last week. Employees in Kenya doing AI "annotation" told the journalists that they've seen people nude, using the toilet and engaging in sexual activity, along with credit card numbers and other sensitive information. With Meta's Ray-Ban Display and other glasses with AI capabilities, users can record what they're looking at or get answers to questions via a Meta AI assistant. If a wearer wants to make use of that AI, though, they must agree to Meta's terms of service that allow any data captured to be reviewed by humans. That's because Meta's large language models (LLMs) often require people to annotate visual data so that the AI can understand it and build its training models. This data can end up in places like Nairobi, Kenya, often moderated by underpaid workers. Such actions are subject to Europe's GDPR rules that require transparency about how personal data is processed, according to a data protection lawyer cited in the report. However, Svenska Dagbladet's reporters said they needed to jump through some hoops to see Meta's privacy policy for its wearable products. That policy states that either humans or automated systems may review sensitive data, and puts the onus on the user to not share sensitive information. Meta declined to comment directly on the story, and simply said that "when live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy." To find out more, check out Svenska Dagbladet's detailed reporting on the subject.
[7]
Meta's smart glasses raise privacy alarms as data labelers review intimate recordings
A hot potato: Meta's artificial intelligence ambitions are again drawing scrutiny - this time over what its smart glasses actually see, record, and send around the world for human review. Contractors in Nairobi say they've been paid to study hours of raw footage captured through Meta's "live AI" feature, analyzing everything from simple interactions to moments of startling intimacy. The work, known as data labeling, is how Meta trains its computer vision systems. Each frame reviewed helps improve the algorithms powering its augmented reality assistant. Behind that feedback loop, however, human labor fills in the gaps machines still can't bridge. Meta's AI glasses, made in partnership with Ray-Ban, constantly record short clips whenever "live AI" is activated. During these sessions, the device's camera and microphone remain continuously online so that the AI can analyze scenes and answer questions in real time. The data is then uploaded to Meta's systems, where it becomes part of a vast dataset used to refine future versions of the assistant. According to contractors employed by Sama, a Kenya-based firm specializing in annotation services, that data often includes far more personal material than users may realize. Workers said they've reviewed clips of people using bathrooms, getting dressed, and in some cases engaging in sexual activity - all recorded from the perspective of the glasses. Even when the content isn't overtly graphic, it can reveal sensitive personal details, such as debit cards displayed in full view, household interiors, or private conversations. Audio from some clips reportedly includes discussions about protests, criminal activity, or deeply personal aspects of people's lives, all of which become data points for Meta's algorithms.
Meta's published terms make clear that interactions with the "live AI" assistant can be retained and reviewed by automated systems or by human reviewers. Users are also explicitly warned not to share sensitive information. In practice, though, contractors told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten that many people wearing the glasses appeared unaware that their recordings could ever be seen by others. Sources said complaints about the nature of the footage or the annotation process were dismissed immediately. The Swedish reporters said Meta did not respond to their repeated questions for several weeks. When a spokesperson later replied, the company referred them only to its AI Terms of Service and privacy policy, emphasizing that "media is processed according to those documents whenever live AI is in use." Meta declined further comment when contacted by Straight Arrow News. Public concern around Meta's wearable technology has deepened in recent months. The company faced criticism earlier this year after The New York Times cited an internal memo describing plans to add facial-recognition capabilities to its glasses. Civil liberties groups have since warned that pairing facial recognition with persistent video capture could create mobile surveillance networks with minimal oversight. Developers outside Meta are already responding to the technology by building defensive tools. One recent example is a smartphone app designed to detect when someone nearby is wearing smart glasses. The app scans for visual or wireless cues associated with wearable recording devices and notifies users that they may be filmed. Meta notes that its glasses include a small LED indicator that lights up during recording. Privacy experts counter that the feature offers limited real-world protection, particularly after researchers demonstrated how easily the light can be disabled. 
Whether a warning buried on a terms-of-service page can constitute meaningful disclosure is now a central question for regulators and privacy advocates watching the smart glasses industry. For the people tasked with teaching Meta's AI what it sees, the answer already feels uncomfortably clear.
[8]
ICO writes to Meta over 'concerning' AI smart glasses report
The UK data watchdog is writing to Meta following a "concerning" report claiming outsourced workers were able to view sensitive content filmed by the company's AI smart glasses. Meta said subcontracted workers might sometimes review content, including films and images, captured by its AI smart glasses for the purpose of improving the "experience". Videos, including of glasses-wearers using the toilet or having sex, are sometimes reviewed by a Kenya-based Meta subcontractor, according to an investigation by Swedish newspapers Svenska Dagbladet (SvD) and Goteborgs-Posten (GP). "We see everything - from living rooms to naked bodies," one worker reportedly said. Meta said it took the protection of people's data very seriously, and was constantly refining its efforts and tools in that area. "Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you," the tech giant told BBC News. "When people share content with Meta AI, like other companies we sometimes use contractors to review this data to improve people's experience with the glasses, as stated in our Privacy Policy," it added. "This data is first filtered to protect people's privacy." According to Meta, filtering could include blurring faces in images - but sources who spoke to SvD and GP said sometimes this failed and people's faces could be seen. Users have to activate recording manually or through a voice command, but may not realise their videos and images are sometimes reviewed by humans - as described within Meta's extensive privacy policies and terms of service. In response to a request from the BBC, Meta provided a link to its Supplemental Meta Platforms Terms of Service, but it did not identify which sections of those terms covered the review of content by human contractors. In Meta's UK AI terms of service, the company says: "In some cases Meta will review your interactions with AIs... and this review may be automated or manual (human)."
But the UK's data watchdog, the Information Commissioner's Office (ICO), told BBC News "devices processing personal data, including smart glasses, should put users in control and provide for appropriate transparency". "Service providers must clearly explain what data is collected and how it is used," it said in a statement. "The claims in this article are concerning. We will be writing to Meta to request information on how it is meeting its obligations under UK data protection law." The workers the Swedish papers spoke to were data annotators, teaching Meta's AI to interpret images by manually labelling content. They were employed by a Nairobi-based outsourcing company called Sama, which has a history of work in data annotation. The BBC has approached Sama for comment on the report. The workers said they also reviewed transcripts of interactions with the AI to check it had answered questions adequately. They described privacy protections in their workplace, with cameras everywhere, and no mobile phones permitted. But the content they saw was often extremely sensitive, they said, including glasses-wearers watching pornography. In one instance, a worker told the newspapers, a man's glasses were left recording in a bedroom where they later filmed a woman, apparently the man's wife, undressing. Meta's glasses have a light in the corner of the frames that is turned on when the built-in camera is recording images or videos. The firm warns against misuse of the tech, advising users to show others when the recording light is on and avoid recording in private spaces. In September Meta unveiled a range of AI powered devices in partnership with glasses brands Ray-Ban and Oakley. BBC News has approached the glasses-makers' parent company, EssilorLuxottica, for comment. Rapid advancements in AI have resulted in a proliferation of wearable gadgets that use AI to interpret images and sounds captured by the device. 
Features can include translating text, or responding to questions about what the user is looking at - a particularly useful feature for those who are blind or partially sighted. However, as the devices have grown in popularity, so too have concerns about their misuse. Women have previously told the BBC they were filmed without their consent by users of smart glasses. Data-annotation firm Sama began as a non-profit organisation, with the aim of increasing employment through the provision of tech jobs. It is designated as an "ethical" B-corp, but a previous contract providing content moderation services to tech companies attracted criticism, alongside legal action by former employees. It has since stopped content moderation services and later said it regretted taking on this kind of work.
[9]
Dear Meta Smart Glasses Wearers: You're Being Watched, Too
No one likes being recorded by Meta's Ray-Ban smart glasses, which have gotten increasingly popular in the last year or so. Now the wearers know how everyone else feels. According to a joint investigation published by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, sensitive and personal footage captured by the devices, including people going to the bathroom, getting dressed, and having sex, is being reviewed by contractors who see all of it uncensored. The investigation found that much of the footage captured by Meta's smart glasses, of which more than seven million pairs have reportedly been sold, is reviewed by contracted workers at a Kenya-based company called Sama. These workers are data annotators who are tasked with reviewing footage captured from the camera on the glasses and labeling it to help AI systems get better at identifying what they see. The process is tedious and labor-intensive, requiring workers to meticulously label everything on screen that can be identified. The firehose of footage meant to serve as valuable training data that is being delivered to these contractors apparently doesn't undergo much of a culling process before it lands at their stations, because, according to the investigation, a lot of private, personal, and at times intimate images are getting shared. Contractors reported being able to see things like a person's credit card when they go to complete a transaction at a store, or text messages they send and receive when they look down at their phone. Those are things that one could reasonably assume might accidentally get caught on camera when a person forgets to turn off the record feature, but some contractors reported seeing a lot more of people than they ever expected. "In some videos you can see someone going to the toilet, or getting undressed," one contractor for Sama told Svenska Dagbladet and Göteborgs-Posten. 
"I don't think they know, because if they knew, they wouldn't be recording." Another contractor claimed that they reviewed footage where the wearer of the glasses set them down on a bedside table, only to have their wife walk into the room and undress, presumably unaware that she was being watched. Other footage reportedly showed the wearer watching porn or even recording themselves having sex. (Odds are they knew they were recording in that instance, given smart glasses have really caught on in the world of adult content lately.) The wearers of these glasses probably don't want that footage seen by third parties. And the contractors sure seem like they'd rather not watch it, though they risk losing their job if they decide not to label something. "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work," an employee told the papers. "You are not supposed to question it. If you start asking questions, you are gone." Futurism pointed out that Meta's terms of service for its AI products, which cover its smart glasses products, include a line that states the company can "review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human)." It also notes content from its users can be reviewed "through automated or manual (i.e. human) review and through third-party vendors in some instances," to, among other things, "provide, maintain, and improve Meta services and features," and to "monitor your use of AIs for compliance with these Terms and applicable laws and to report violations of applicable laws or regulations as required by law." The only solution the company offers for users who would rather not have their trip to the dressing room reviewed by a set of eyes they never intended to send the footage to? 
"Do not share information that you don’t want the AIs to use and retain, such as information about sensitive topics." Basically, don't record it if you don't want a stranger to see it. That's far from an ideal solution, even for the wearer of the glasses, but it's not a solution at all for other people who are caught in the camera's view. The owner of the glasses can turn them off to avoid capturing something they don't want on camera. Everyone else just has to hope they're not being filmed by a stranger, only for that footage to get reviewed by other strangers. It's bad enough that we live in a surveillance state. It's made even worse by the fact that corporations are convincing people to pay for products to participate in advancing it. Gizmodo reached out to Meta for comment but did not receive a response at the time of publication.
[10]
Meta hit with class action suit for its AI glasses privacy debacle
A recent investigative report put Meta on blast for being a window into users' lives. The report, from Swedish newspapers Svenska Dagbladet (SvD) and Göteborgs-Posten (GP), indicates that human contractors in Kenya have been reviewing intimate footage captured by users' Meta glasses. The contractors, reportedly data annotators, are employed by the Nairobi-based outsourcing firm Sama, which is contracted by Meta. Their task is to 'label' the glasses' audio and video content to help improve the AI experience. The task essentially trains Meta's AI systems, allowing them to recognize and interpret the world better. According to anonymous contractors that spoke to SvD and GP, as part of their duties, they'd also encounter "deeply private video clips, which appear to come straight out of Western homes, from people who use the glasses in their everyday lives." Several describe video material that contained "bathroom visits, sex and other intimate moments." The workers also described situations where they'd see people's bank cards, people watching NSFW material, or even being involved in situations that could cause "enormous scandals" if they were leaked. 
The revelation has caught the attention of regulatory bodies, and the UK's data protection watchdog, the Information Commissioner's Office (ICO), is the first to take action. In a statement given to the BBC, the regulator said that any device processing personal data must prioritize user control and transparency. The ICO added that it would be writing to Meta to demand a formal explanation "on how it is meeting its obligations under UK data protection law." Elsewhere, in the US, the legal pressure has started mounting. Clarkson Law Firm, on behalf of plaintiffs in New Jersey and California, has filed a lawsuit against Meta, alleging that the company violated privacy laws and engaged in false advertising (via TechCrunch). The lawsuit formally targets the terms used by Meta when advertising the glasses. Promises like "designed for privacy, controlled by you," and "built for your privacy," according to the lawsuit, are misleading. Meta, on the other hand, maintains that it only utilizes human review (contractors in Kenya) when users explicitly share media with Meta AI to ask questions. 
"Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they've captured with Meta or others, that media stays on the user's device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed," said spokesperson Christopher Sgro in a statement given to TechCrunch. It remains to be seen whether Meta's defense will satisfy regulators and the courts.
[11]
Meta Ray-Bans send 'sensitive' videos to human data annotators
A new report says that video feeds from Meta Ray-Ban smart glasses are sent for review by human data annotators in Kenya, and that the footage includes sensitive content that is supposed to be excluded. Whistleblowers say that the video seen by third-party contractors used by Meta includes everything from people having sex to bank cards ... Meta Ray-Ban smart glasses can capture video in two ways. First, you can activate video recording manually in order to capture point-of-view footage. This can be a great hands-free way to record experiences like a roller coaster ride, as well as incidents that might occur while driving or cycling. Second, you can use the AI feature to ask questions about whatever you are looking at through the glasses. It's well understood that this AI processing is handled on Meta's servers and therefore that video footage needs to be sent to these for analysis. However, a report by Swedish site SVD says that footage is sent to human data annotators whose job it is to manually identify objects seen in these clips. A worker from a third-party contractor based in Kenya says that this footage sometimes includes very sensitive content. The workers in Kenya say that it feels uncomfortable to go to work. They tell us about deeply private video clips, which appear to come straight out of Western homes, from people who use the glasses in their everyday lives. Several describe video material showing bathroom visits, sex and other intimate moments [...] "Someone may have been walking around with the glasses, or happened to be wearing them, and then the person's partner was in the bathroom, or they had just come out naked", an employee says. The circumstances in which these sensitive videos are captured are very unclear from the report. For example, there is reference to people wearing the glasses while having sex, which would appear to be a very deliberate use. 
However, this would also seem to indicate that video footage is sent for review even when someone is manually recording rather than using Meta AI. There is definitely a lack of transparency about what footage is sent to Meta when using the AI function. For example, if you look at a car and ask Meta to identify the make and model, at what point does it cease sending footage? Is it five seconds later, 10 seconds, 30 seconds? Is it as soon as the question has been answered, or does it continue recording in case you ask further questions? The company's own terms of use are exceedingly vague. The terms state that "in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human)." SVD says that when it asked Meta for details, the company simply referred them back to the Terms of Service and Privacy Policy. Frustratingly, the report says that the site analyzed the network traffic to see what was being sent but then provides absolutely no insight. When we then analyse the network traffic from the app, we see that the phone has frequent contact with Meta servers in Luleå, Sweden, and Denmark. Former Meta employees say that sensitive data isn't supposed to be sent for human review, but this relies on algorithmic identification of that sensitive data, which isn't always successful. I use the glasses myself. The novelty of the AI feature wore off rather quickly, but they're a very convenient way to shoot hands-free POV footage. Although I'd never use them to shoot anything sensitive, I would be pretty outraged to discover that Meta is capturing manual video recordings. The report is frustratingly lacking in hard information, but I guess it serves as a reminder to use any AI service with caution when it comes to sensitive data of any kind - or any Meta product.
[12]
Meta sued over smart glasses privacy claims -- 6 changes you should make right now
A bombshell investigation revealed that workers in Kenya are watching intimate footage captured by Meta's smart glasses. Here's what you actually need to know -- and what you can do about it. Last month, I had a lot of fun watching the Super Bowl and translating the Halftime show in real-time while wearing my Ray-Ban Meta Display smart glasses. But like others who either own a pair or are thinking about buying them, a new report has understandably set off alarm bells. A joint investigation by two Swedish newspapers found that human contractors in Nairobi, Kenya, are reviewing footage recorded by the glasses, including some deeply private moments: people undressing, using the bathroom and more. Now Meta is facing a class-action lawsuit over privacy concerns exposed by this report. The lawsuit essentially argues that Meta released a "surveillance nightmare" disguised as fashion, failing to provide adequate guardrails to prevent real-time doxing and unauthorized biometric data collection. It's a lot to take in. But before you panic or toss your glasses in a drawer, it's worth separating what's actually happening from what's being exaggerated. Here's a clear-eyed breakdown. TL;DR Some footage from Meta Ray-Ban glasses is reviewed by human contractors as part of AI training. Users have very limited ability to opt out. The risks are real -- but they're also specific. This is not the first time users have raised privacy concerns. Here's what you can actually do: * Check your privacy settings * Disable cloud processing for photos and videos * Understand the voice recording situation * Disable "Hey Meta" if you don't use it * Be mindful about when you use AI features * Don't leave the glasses on or recording unattended What data Meta actually collects Understanding the privacy concern starts with understanding what the glasses actually capture and send -- and what they don't. The glasses are not always recording. 
They only activate when you tap the camera button or trigger a voice interaction with the "Hey Meta" wake word. That said, here's what happens once you do. First and foremost, it's important to know that when someone with Meta glasses is recording, you will see a light on the actual glasses. Photos and videos you take are stored on your phone by default. They are only sent to Meta's servers if you actively share them with Meta AI, upload them to Facebook or Instagram, or turn on cloud processing in settings. Voice recordings triggered by "Hey Meta" are a different story. Since a policy update in April 2025, these are stored in Meta's cloud by default -- and you can no longer opt out of that storage. Meta says recordings are kept for up to one year to improve its AI products. Any visual content you share with the Meta AI assistant -- by asking it to analyze what it sees, for example -- is also eligible for use in AI training and product improvement. In short: photos and videos stay local unless you share them. But your voice interactions are going to Meta's servers no matter what, and there's currently no way to stop that. What Meta workers are actually seeing -- and why? This is the part of the story that has shocked most people, and understandably so. Meta, like most major AI companies, uses human contractors -- called data annotators -- to review and label footage as part of training its AI models. It's a standard industry practice, but it requires real humans to watch real footage, and that footage doesn't always get filtered before it reaches them. 
According to the reports, contractors at Sama, a Kenyan subcontracting firm, say some of the footage they're asked to review includes: * People using the bathroom or changing clothes * Users' bank card details captured mid-transaction * Sexual content, either viewed or recorded by the wearer * Footage of people in their bedrooms, captured after a wearer set down their glasses without turning them off One contractor told the Swedish papers: "In some videos you can see someone going to the toilet, or getting undressed. I don't think they know, because if they knew they wouldn't be recording." Why does this happen? Because when users interact with Meta AI -- saying "Hey Meta, what am I looking at?" or asking it to analyze a scene -- that footage can be flagged and sent for human review. The content isn't being recorded behind users' backs; it's footage that users themselves triggered, but often without realizing it would be seen by a human being overseas. Meta's own terms of service do allow for this. They state that the company can "review your interactions with AIs" via "automated or manual (human) review." But that language is buried deep, and most users have never read it. Perhaps the most disturbing part of the investigation was former Meta employees confirming that the anonymization does not always work -- faces sometimes remain visible to the Meta workers, particularly in difficult lighting conditions. Fact vs. fiction: Clearing up the confusion A lot of misinformation has spread alongside this story. Here's what's true and what isn't. The claim: Meta is constantly recording everything through the glasses The reality: Only footage you actively share with Meta AI -- by using voice commands or asking the AI to analyze a scene -- is sent to Meta's servers and potentially reviewed. The glasses aren't always recording. But when you use AI features, that data can reach human reviewers -- something most users don't realize. 
The claim: You can fully opt out of data collection The reality: Voice recordings are stored in Meta's cloud by default with no opt-out. They can be kept for up to one year. Since April 2025, Meta removed the ability to opt out of voice recording storage. You can delete recordings manually, but you can't stop the initial collection. The claim: Only automated systems review your footage -- no humans see it The reality: Footage shared with Meta AI can be reviewed by human contractors overseas, as explicitly permitted in Meta's AI terms of service. Meta's own terms of service allow for human review of AI interactions. The Swedish investigation confirmed this is happening. The claim: Any photo or video taken with the glasses automatically goes to Meta. The reality: Photos and videos you take stay on your phone unless you actively share them with a Meta service. The glasses are not always-on surveillance cameras. Media you record stays local unless you share it. The concern centers on AI voice interactions and what happens when you actively use the AI assistant features. What you can do right now If you own a pair of Meta Ray-Ban smart glasses, here are concrete steps you can take to reduce your exposure. * Check your privacy settings. Open the Meta View app, go to Settings > Privacy, and review what data sharing options you have enabled. Turn off anything you didn't intentionally opt into. * Disable cloud processing for photos and videos. In Settings, you can turn off cloud processing for media. This keeps photos and videos on your device rather than sending them to Meta's servers. * Understand the voice recording situation. You cannot opt out of voice recording storage -- that option was removed in April 2025. However, you can manually delete your recordings at any time through the Meta AI app. Get in the habit of clearing them regularly. * Disable "Hey Meta" if you don't use it. 
If you're not using the voice assistant features, disabling the wake word entirely is the most effective way to prevent voice data from being collected. You can still use the glasses for photos and calls without it. * Be mindful about when you use AI features. Using Meta AI to analyze a scene -- asking what something is, or getting real-time assistance -- is when footage is most likely to be flagged for review. Think twice before using these features in private settings. * Don't leave the glasses on or recording unattended. Several of the most disturbing incidents described by contractors involved footage captured after the wearer set the glasses down without turning them off. Make it a habit to power down the glasses when you take them off. What Meta says According to the reports, a Meta spokesperson offered a brief response: "When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy," and directed reporters to those documents. Meta has not disputed the findings of the investigation. The company's privacy page notes that users can manage data sharing in settings, and its terms do acknowledge the possibility of human review -- but it does not specify where that review takes place or who carries it out. Tom's Guide has reached out to Meta for additional comment and will update this article if we receive a response. The takeaway Stories like this are not new and will probably continue as AI becomes more sophisticated and further integrated into our lives. But what makes the Meta Ray-Ban situation particularly acute is the combination of factors: wearable cameras that can record without drawing obvious attention, AI features that trigger data sharing, users forgetting to turn the device off, inadequate user awareness and a near-total inability to opt out once you've chosen to use those features. 
Taken together, those factors show how easily convenience can outpace caution when AI becomes part of the devices we wear every day.
[13]
'You can see someone going to the toilet, or getting undressed' -- contractors warn your Meta AI glasses might see more than you realize
* Meta contractors claim your smart glasses can see more than you think * Meta's privacy policy does warn that your glasses share images and videos with the company * This follows a growing trend of privacy concerns over smart glasses in public and in courts When Meta warned us that it could see footage captured by its AI smart glasses, it turns out it wasn't kidding. As part of a new investigation, Meta insiders claim to have seen intimate details of our lives, from bank cards to filmed sex scenes. In a joint investigation published by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten (behind a paywall), Meta contractors told journalists they're seeing a lot of sensitive data. This includes "someone going to the toilet, or getting undressed", with one contractor noting they saw a video where "a man puts the glasses on the bedside table and leaves the room. "Shortly afterwards his wife comes in and changes her clothes." Even though they realize the sensitive nature of the content they're analyzing, the staff claim they're not in a position to push back on what's happening, saying: "You are not supposed to question it. If you start asking questions, you are gone." When you agree to use Meta's AI, you'll see a warning that as part of its terms of use, you agree to let the company see and "review your interactions with AIs, including the content of your conversations." This is buried in the full TOS agreement, but a similar warning flashes on screen as part of the smart glasses setup process. The trouble is, even if you'd rather not share anything with Meta's team you don't have much of a choice. To use the AI, you have to allow data sharing, otherwise you're locked out of the features. What's more, given the compact size of Meta's specs, there isn't much room for on-device processing. AI requests and data are sent to a server -- meaning even if you make the information private, it's near impossible to prevent it being shared with Meta in some capacity. 
But Meta might need to find a solution. The beginning of the blowback I've noted previously that Meta's smart specs have so far managed to dodge the privacy fears that plagued Google Glass, but recently that's changed. This report isn't the only example of a changing sentiment towards smart glasses. Earlier this year the BBC reported on cases of women being filmed secretly and harassed by people wearing smart specs, and the judge in the ongoing social media addiction trial against Meta (and YouTube) threatened Mark Zuckerberg's entourage with contempt after members wore smart glasses into the courtroom despite recording being banned (via Fortune). There are also growing concerns over expanded tools Meta and others want to bring to their AI wearables. Facial recognition, and even something mundane like remembering where you left your keys, would require your specs to capture a lot of data that many (myself included) aren't very comfortable with. There are also growing concerns over what data is and isn't shared with AI, with smartphone manufacturers making a big deal over on-device AI -- models that are small enough to live on your phone, meaning data is never sent to a server. With Apple and Samsung said to be working on their own smart specs, there is room to leverage their phone's on-device AI for a privacy win. Their smart glasses could use your phone's AI for many tasks, and only use a server when necessary -- giving them improved offline functionality, but also some added security for your data. Meta, without a phone of its own, doesn't have the same luxury of on-device AI to push back on the privacy argument. One potential solution to Meta's woes would be greater user privacy control. Messages and some specific images taken by the glasses for context will need to be shared with Meta, but there should be an option to not share content captured outside of the Meta glasses' Look and Ask feature. 
And as the AI needs to analyze more and more data to make tools work, Meta may want to implement something similar to Apple's Private Cloud Compute, which serves as a private server for Apple Intelligence. Because even if people are agreeing to their data being shared, let's be honest, most of them don't realize what they're signing away. And when they see stories about Meta contractors apparently seeing them in the bathroom, they'll understandably get scared and want to switch to a different platform. With Android XR expected to step into gear this year, those alternatives might be here soon, and if they can crack AI privacy in a way Meta hasn't, I can see plenty of folks jumping ship. I know I will.
[14]
Disturbing Report Says Workers are Watching Private Footage Taken on Meta Smart Glasses
An investigation has alleged that footage taken on Meta's AI smart glasses is being watched by tech workers, including intimate moments. Swedish newspapers Svenska Dagbladet and Göteborgs-Posten report that some of the footage recorded by the smart glasses is sent to contractors in Nairobi, Kenya, for review. The workers are employed by Sama as data annotators. Their role is to label images and videos so that Meta's AI systems can better interpret visual information. Several workers told the newspapers that they had viewed highly personal material. "In some videos you can see someone going to the toilet, or getting undressed," one contractor says. "I don't think they know, because if they knew they wouldn't be recording." Others describe seeing nudity, sexual activity, and financial information such as credit card details. "We see everything -- from living rooms to naked bodies," a worker claims. Another says, "There are also sex scenes filmed with the smart glasses -- someone is wearing them having sex." Some of the workers say they feel unable to refuse assignments. "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work," one employee tells the reporters. "You are not supposed to question it. If you start asking questions, you are gone." Meta's terms state that the company may "review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human)." The policy also advises users: "Do not share information that you don't want the AIs to use and retain, such as information about sensitive topics." Futurism notes that the Swedish journalists were ignored by Meta for two months until receiving the following terse reply: "When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy." In response to the report, the U.K. 
data watchdog says it has written to Meta. "Service providers must clearly explain what data is collected and how it is used," the Information Commissioner's Office tells the BBC. "The claims in this article are concerning. We will be writing to Meta to request information on how it is meeting its obligations under UK data protection law." The revelation comes amid a period of success for Meta Ray-Bans, with some reports suggesting the company sold over seven million pairs in 2025 alone. But aside from user-generated footage being reviewed by third parties, some users have also been invading other people's privacy. Even Mark Zuckerberg fell foul of a judge for wearing a pair.
[15]
Meta workers forced to review intimate videos taken by Ray-Ban smart glasses
More AI features = more human review. The things you record with your AI-powered Meta Ray-Ban glasses -- yes, even those intimate moments where you think you're alone -- are probably being seen by strangers. An investigation by Swedish outlets Svenska Dagbladet and Göteborgs-Posten found that offshore Meta workers in Kenya were asked to analyze intimate and even "disturbing" videos taken by glasses wearers, including videos taken in bathrooms, footage featuring nudity and sexual content, and images showing personal information like bank accounts. It's part of a process known as data labeling, used to train AI models with footage first reviewed and annotated by humans so that the AI can understand what it's "looking" at. Workers told the publication that many of the videos appear to be moments captured when users weren't aware they were being recorded. The group works under Sama, the same Meta contractor facing a class action lawsuit on behalf of content moderators who allege they have been exploited and forced to review traumatic content without proper working conditions. "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone," one employee told the publications. Meta's Terms of Service reserves the right to send users' interactions with its AI services, including its always-on live AI features, to human moderators -- the company referred to this policy when asked for comment by the news outlets. The Meta Ray-Ban smart glasses collaboration initially launched in 2023 to mixed reviews about its photo and video capabilities and AI features. Meta released the upgraded AI-powered Meta Ray-Ban Display model in September, complete with a new Neural Band interface and promises of AI assistant integrations that would turn them into glasses of the future. 
Sales of the glasses tripled in 2025, CNBC reported, with more than 7 million units sold. But in the months since, Meta's wearable eye camera device has received widespread blowback, following a rise in influencer content depicting Meta glasses wearers secretly recording and even harassing unsuspecting strangers. Wearers have deduced ways to obscure the glasses' always-on recording light, intended to alert the public when a user is taking video, and instead turned the smart device into a tool for viral pickup artists and pranksters. In addition to concerns about personal consent, the device has prompted worries about a fast-growing web of surveillance and facial recognition tech, which Meta has previously come under fire for. The company later said it was moving ahead with live AI features, including potential facial recognition, in 2025 -- with the upgrade, a device that would "always keep its cameras and sensors turned on and use AI to remember what its wearer encountered throughout a day." Privacy advocates also warn the technology could one day be harnessed by third parties, including the federal government's own militarized police forces.
[16]
People Are Calling Meta Ray-Bans "Pervert Glasses"
In an alarming investigation, Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that highly sensitive videos recorded by users' Meta Ray-Ban smart glasses are being sent to the company's subcontractors in Nairobi, Kenya, for data annotation. Contractors told the newspapers that they were watching people "going to the toilet, or getting undressed," often without those people knowing that they were even recording or being recorded. Automated systems designed to blur faces often failed, the contractors claimed, effectively giving them a front row seat to somebody's most intimate moments. "I don't think they know, because if they knew they wouldn't be recording," one contractor said. It was a disturbing revelation, highlighting how tech companies are still heavily reliant on human labor to sift through highly personal data and label what they see to train AI models, a hidden cost that the industry seemingly isn't keen on discussing in public. Meta's use of human annotators could also result in data falling into the wrong hands, a considerable liability that could put its customers at risk. The topic of surveillance has been top of mind in the AI industry lately. Anthropic CEO Dario Amodei identified mass surveillance of Americans as one of two "red lines" he's unwilling to cross in his brewing fight with the Department of Defense. And the militarization of law enforcement in the United States has garnered outrage, with agencies such as Immigration and Customs Enforcement accessing shadowy databases and scouring social media for identifying data. The latest news about Meta's subcontractors watching videos from smart glasses triggered a furor among users on social media, many of whom were already wary of the possibility of having somebody secretly recording them using a pair of Meta's unassuming-looking glasses.
Many have quickly embraced a term for the devices that's presumably sending Meta CEO Mark Zuckerberg into paroxysms of fury: "pervert glasses." "I'm taking a brave stance that may get me canceled: there is no reason for the pervert glasses to exist," one user wrote. "Glad people are settling on the term 'pervert glasses,'" another agreed. "Bonus points if you also say it while posting a picture of Mark Zuckerberg or call them Mark Zuckerberg's pervert glasses." "Not a fan of Mark Zuckerberg's pervert glasses," wrote yet another. "I would prefer technology to make it more difficult to skeez, creep or perv on the world. I would like tech to protect me from creeps, not smooth the runway for them." Others pointed to how Microsoft recently banned the pejorative term "Microslop" on one of its Discord channels -- only to end up shutting down the whole server after the backlash grew exponentially. "Btw, this is why we have to keep saying 'pervert glasses' until our Facebook aunties start calling them that too," one user argued. The threat of simple smart glasses being used for surveillance isn't some dystopian vision of a far future. Researchers have already shown that Meta's smart glasses can be used to instantly reveal the identities of strangers in public -- tech that Meta has reportedly been working on itself. All told, smart glasses are a product category rife with uncomfortable connotations, especially when worn in the presence of others who may not know they may be recorded. "Fashion aside, these devices are in a fraught place," Wired's Boone Ashworth wrote in his November review of Meta's second-generation smart glasses. "Privacy rights and the absolute explosion of surveillance tech are much harder to ignore these days." "I'm not saying these are glasses for creeps, but I can't help but feel like one while wearing them," he admitted.
[17]
Inside the Ray-Ban Smart Glasses Controversy Plaguing Meta - Decrypt
UK regulators are seeking information on Meta's data protection practices. A Nairobi-based data firm said it has reviewed sensitive footage captured by Meta's Ray-Ban smart glasses after the tech giant tapped the Kenyan company for AI training offshore. "In some videos, you can see someone going to the toilet, or getting undressed," an unnamed source told reporters last week. "I don't think they know, because if they knew, they wouldn't be recording." The claims come from a joint investigation published by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten on Friday. John Davisson, Deputy Director of Enforcement at the Electronic Privacy Information Center, said the technology raises broader concerns about how wearable devices collect and use personal data, particularly in a public setting. "The wearer of the glasses cannot consent on behalf of all of the people they are encountering as they go through the world using these glasses," Davisson told Decrypt. "Whether in public places or in private places, at places like locker rooms, restrooms, or other intimate spaces." Davisson also said training AI systems on that footage increases risk because the data can include identifiable faces, voices, and other personal information. "You are compounding the privacy and data protection concerns, because you're taking people's personal information and using it to build your own model," he said. Davisson said reports of the glasses recording people in intimate situations do not surprise him and suggested companies may attribute such recordings to false activations or other technical explanations. "But the fact remains that they are capturing sensitive information that no reasonable consumer would want their smart glasses to capture," he said. 
The UK's Information Commissioner's Office told BBC News on Wednesday that it will contact Meta to request information about how the company complies with UK data protection law, and said that devices processing personal data, including smart glasses, should provide transparency and allow users to maintain control over their data. Meta's smart glasses, developed with eyewear brand Ray-Ban and first announced in 2023, allow users to record first-person video, ask questions about their surroundings, and interact with Meta's AI assistant. More than 7 million pairs were sold in 2025, up from a combined 2 million units sold in 2023 and 2024, according to a report by CNBC last month. Footage recorded by the glasses can be sent to human contractors who review and label the material used to train AI systems, according to Meta AI's terms of service. "In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human)," per Meta's terms. Meta claims it cannot read or access messages when a user shares private information with friends, family, and AIs using its processing technology. Still, the company may utilize user content and related information through automated systems, human review, or third-party vendors to improve its services and conduct research. It may also conduct research on a user's content to ensure compliance with its own policies and applicable laws, while removing content that violates its rules, the company claims. Svenska Dagbladet and Göteborgs-Posten's investigation identified the Nairobi-based data subcontractor for Meta as Sama, which employs workers in Nairobi to train AI systems by manually annotating data, including videos, images, and speech, for Meta's AI services. "Every image must be described, labelled and quality assured," the report said.
"All to make the next generation of smart glasses a little more intelligent, a little more human." Workers told the Swedish newspapers they reviewed footage that included people using the bathroom, changing clothes, credit card numbers, and explicit sexual activity. "There are also sex scenes filmed with the smart glasses. Someone is wearing them, having sex. That is why this is so extremely sensitive," a contractor told reporters. "There are cameras everywhere in our office, and you are not allowed to bring your own phones or any device that can record." Contractors also told the newspapers they felt unable to question the assignments for fear of losing their jobs. "When you see these videos, it feels that way. But since it is a job, you have to do it," another said. "You understand that it is someone's private life you are looking at, but at the same time, you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone."
[18]
'We see everything': Report says Meta's AI smart glasses footage is reviewed by human contractors who see far more than they bargained for, which has led to a new lawsuit against the company
'People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me.' Meta's Ray-Ban AI smart glasses have been at the center of many privacy concerns since their release, particularly as the data the glasses are capable of capturing can be sent back to Meta for training purposes. In a joint report, Swedish newspapers Svenska Dagbladet and Göteborgs-Posten spoke to workers of Sama, a Kenya-based subcontractor that is claimed to provide human-led data annotation for video and audio captured by the Ray-Ban Meta glasses (via Ars Technica). According to the authors, several of the workers they spoke to reported seeing extremely private footage, and that wearers of the glasses may be unaware their private lives are being recorded for human review. "We see everything -- from living rooms to naked bodies. Meta has that type of content in its databases", said one of the workers. "Someone may have been walking around with the glasses, or happened to be wearing them, and then the person's partner was in the bathroom, or they had just come out naked. "People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me." When asked if the employee felt like they were looking straight into other people's private lives, they said: "When you see these videos, it feels that way. But since it is a job, you have to do it. You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work. "You are not supposed to question it. If you start asking questions, you are gone." It's not just video footage that sub-contractors are said to be expected to review. The microphones used to record voice requests also send transcriptions back for processing and training purposes. "It can be about any topics at all", the employee continued. "We see chats where someone talks about crimes or protests. 
It is not just greetings, it can be very dark things as well." Speaking to the BBC, Meta said that subcontracted workers might sometimes review content for the purpose of improving "the experience", and provided a link to its Supplemental Meta Platforms Technologies Terms of Service agreement. The policy states that photos and videos taken with the glasses are sent to Meta when cloud processing is turned on, and that you can "change your choices about cloud processing of your media at any time in [the] settings". Since the publication of the report, a new class action lawsuit has been filed against the company in the United States, alleging that Meta violated privacy laws and engaged in false advertising with its slogans. "No reasonable consumer would understand 'designed for privacy, controlled by you' and similar promises like 'built for your privacy' to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas. "Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false", the complaint alleges. In a statement to Techcrunch, a Meta spokesperson said: "Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they've captured with Meta or others, that media stays on the user's device. "When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed."
[19]
Meta faces lawsuit over AI smart glasses privacy breach
Meta is facing a new lawsuit over privacy concerns related to its AI smart glasses. It follows Swedish newspapers Svenska Dagbladet and Göteborgs-Posten's (GP) recent report that employees at a Kenya-based subcontractor had been reviewing private footage recorded through customers' smart glasses. This included sensitive content such as nudity, using the toilet, sex, bank card information, private messages and chats. The United Kingdom's data watchdog, the Information Commissioner's Office, decided to investigate the matter, and a US lawsuit has since been filed by plaintiffs Mateo Canu of California and Gina Bartone of New Jersey, who are being represented by Clarkson Law Firm, which specialises in public interest cases. The US lawsuit claims that Meta has promoted false advertising and disregarded privacy laws. It alleges that Meta's AI smart glasses use phrases such as "designed for privacy, controlled by you" in their advertising, which may lead users to believe that their private moments and data are safe from public view. It also alleges that Meta has not included any disclaimer to the contrary. The glasses' manufacturing partner, Luxottica of America, has also been named in the lawsuit for conduct that goes against consumer protection laws. However, Meta's UK AI terms of service do mention human review. A version of that policy also applies to the US and states: "In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human)." The subcontractor in question is Sama, a Nairobi-based data annotation company, where workers train AI systems manually by describing, labelling, and quality assessing images. Euronews Next has contacted Meta for comment but did not receive a reply at the time of publication.
Although Meta claims that faces are usually blurred in images, sources who spoke to Svenska Dagbladet have highlighted that it does not consistently work. "We see everything -- from living rooms to naked bodies," one of the subcontractor's workers said. According to Meta, subcontracted workers sometimes need to review customer content, including images and videos, to improve the smart glasses' experience. However, the tech giant maintained that it took customer privacy very seriously. "Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you," Meta said in a statement published by TechCrunch. "When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed." Concerns over "luxury surveillance" tech have been increasing significantly over the last few years.
[20]
Meta Lied About Its Smart Glasses Protecting User Privacy, New Class Action Lawsuit Claims
Meta may have sold seven million of its Ray-Ban smart glasses in 2025 alone -- but likely didn't anticipate the outpouring of criticism when a recent investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that Meta's subcontracted data annotators in Nairobi, Kenya, could've been watching users through their glasses' cameras as they went to the bathroom or had sex. The damning revelations shed light on the AI industry's reliance on overseas labor for data labeling to train their models, a hidden reality glossed over in marketing materials by one of the biggest tech companies in the world. Just days after the investigation was published, Meta was hit with a class action lawsuit, which accuses the company of woefully misleading its customers by claiming that it had put privacy front and center. "No reasonable consumer would understand 'designed for privacy, controlled by you' and similar promises like 'built for your privacy' to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas," reads the lawsuit, which was obtained by Futurism and filed in a San Francisco district court on Thursday. "Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false," the lawsuit charges. The lawsuit "seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company's AI data collection pipeline." A Meta spokesperson told Engadget that data from its glasses may end up in the hands of human contractors, but declined to respond to the lawsuit's claims. The spokesperson also claimed that "unless users choose to share media they've captured with Meta or others, that media stays on the user's device."
However, what Meta fails to explain is that it is impossible to use the devices' core AI features without authorizing human contractors in Kenya to watch the resulting footage. The lawsuit claims Meta did not adequately disclose that intimate footage could be reviewed and annotated by a human contractor. In other words, its smart glasses represent a major privacy liability. "The undisclosed human review pipeline renders the Meta AI Glasses' privacy features materially misleading, transforms the product from a personal device into a surveillance conduit, and exposes consumers to unreasonable risks of dignitary harm, emotional distress, stalking, extortion, identity theft, and reputational injury," the document reads. "The exposure of such content to thousands of unknown individuals creates a persistent and unreasonable risk of harm that Meta's marketed privacy features were represented to, but do not, prevent," it continues. Beyond the lawsuit, the latest revelations have resulted in netizens coining a new term for Meta's product: "pervert glasses."
[21]
Meta Employees Are Seeing R-Rated Footage From Its Users' AI Glasses
These employees are seeing some things they wish they weren't, which brings privacy into question. "AI smart glasses raise significant privacy concerns," Kleanthi Sardeli, data protection lawyer at non-profit None Of Your Business, previously told Reuters. "The main issues are linked to the use of people's personal data to train AI models and transparency for bystanders." Meta's Data Annotators Are Viewing Private Content "In some videos you can see someone going to the toilet, or getting undressed," a contractor for company Sama told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten. "I don't think they know, because if they knew they wouldn't be recording."
[22]
Meta Workers Say They're Seeing Disturbing Things Through Users' Smart Glasses
Meta's Ray-Ban AI glasses have shot up in popularity in recent years, selling over seven million pairs in 2025 in a considerable jump over the two million it sold in 2023 and 2024 combined. While the smart glasses have scored big with consumers, allowing them to record first-person footage through an integrated camera and microphone array and analyze the world around them through Meta's AI model, the hardware has sparked a heated debate. Critics say enabling facial recognition in the glasses' software could have dangerous implications, especially considering the militarization of law enforcement and Meta's abysmal track record when it comes to ensuring the privacy of users. And regardless of the wearer's intention, much of the footage being recorded by the glasses is being sent to offshore contractors for data labeling, a widely used preprocessing step in training new AI models in which human contractors are asked to review and annotate footage. It's a laborious and highly resource-intensive process that tech companies often gloss over when discussing the prowess of their latest AI models. The reality can be messy. Meta contractors based in Nairobi, Kenya, told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten in a recently published joint investigation that they're being told to review highly sensitive and intimate data. "In some videos you can see someone going to the toilet, or getting undressed," one contractor for a company called Sama said. "I don't think they know, because if they knew they wouldn't be recording." "I saw a video where a man puts the glasses on the bedside table and leaves the room," one data annotator told the newspapers. "Shortly afterwards his wife comes in and changes her clothes." Other footage included imagery of people's bank cards, users watching porn, and even entire "sex scenes."
An employee added that they felt forced to watch and annotate or else risk losing their job. "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work," the employee said. "You are not supposed to question it. If you start asking questions, you are gone." Buried in Meta's AI terms of use, the company reserves the right to "review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human)." The document also warned that users shouldn't share information that "you don't want the AIs to use and retain, such as information about sensitive topics." But given the kind of information data annotators are being asked to review, many users don't appear to be aware of that last piece of advice. Worst of all, owners of Meta's AI glasses simply don't have the option of making use of the AI features without agreeing to share data with Meta's remote servers. And once the data is sent, it's often already too late. "Once the material has been fed into the models, the user in practice loses control over how it is used," non-profit None Of Your Business data protection lawyer Kleanthi Sardeli told Svenska Dagbladet and Göteborgs-Posten. After two months of no replies, a Meta spokesperson referred the two Swedish newspapers to its terms of use and privacy policy. "When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy," the spokesperson said, in a terse statement. It's not just Meta using offshore data annotators in countries like Kenya, Colombia, and India to train their AI models. As Agence France-Presse reported last year, workers have had to put up with reviewing often gruesome crime scene images, and even dead bodies.
The trend is reminiscent of social media content moderation, a practice that has relied on exploitative labor in the developing world for many years now. But with the advent of AI and wearable tech that can easily be used to record high-resolution footage simply by tapping a capacitive button next to your temple, the hidden human cost of data labeling has taken on a whole new meaning. It's a reality Meta would much prefer to bury in lengthy terms of service that likely only a handful will take the time to read. "You think that if they knew about the extent of the data collection, no one would dare to use the glasses," one annotator told the newspapers.
[23]
Meta Sued for Violating the Privacy of Its AI Smart Glasses Users
A California-based law firm is suing Meta for violating the privacy of the users of Meta's AI glasses. This lawsuit follows an investigation by Swedish publications that revealed how Meta is violating user privacy, often disclosing their sensitive and confidential information to third-party contractors. "In some videos, you can see someone going to the toilet or getting undressed. I don't think they know, because if they knew, they wouldn't be recording," said one of the anonymous data annotators who works with content captured by Meta's AI glasses, revealed in an investigation by Swedish newspapers. This investigation by Svenska Dagbladet (SvD) and Göteborgs-Posten (GP), in collaboration with a Kenya-based journalist, has found that Meta's Ray-Ban smart glasses send sensitive private data to workers in Kenya, where they have to view footage secretly recorded by Meta's AI glasses, including intimate footage from users' homes. In this context, the US law firm filed the petition against Meta for its deceptive marketing practices and for failing to disclose to users that Meta's employees and contractors will review their personal information, including intimate content, thereby violating users' privacy. According to the news report, Meta subcontracts various companies worldwide to process sensitive user data. Apparently, Meta's former employees also confirmed that it processes "live data". The above-mentioned news publications have spoken to over 30 data annotators at Sama, a San Francisco-based company with its operations in Nairobi, Kenya. These workers draw boxes around objects such as flowerpots and traffic signs and label them with names and descriptions. Because of extensive worker confidentiality agreements that could impact their livelihood, the publication didn't identify the workers. Among the data they label, some of the content contains video material showing people's visits to the bathroom, bank-related details, having sex and other intimate moments. 
The report says the content appears "straight out of Western homes." The workers claim that leaks of secret content from Ray-Ban Meta smart glasses could trigger "enormous scandals". Workers also said the anonymisation does not work as intended and that several faces remain visible, especially in poor lighting conditions. Two months after the questions were sent, Meta spokesperson Joyce Omope in London responded: "When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy." However, the response does not directly address filters that prevent sensitive private information from reaching third-party data annotators. Meanwhile, Sama, Meta's subcontractor, did not respond to questions. Another unnamed European Meta executive told reporters, "Many believe that data must be stored within the EU to be protected. But under GDPR, it does not matter where the server is located - as long as the country meets the EU's requirements. If it does not, data may not be sent there." The executive further stated that legal responsibility lies with Meta Ireland and that "where the data is actually processed - in Europe or in the US - does not change the regulatory framework." For context, Meta has its European headquarters in Ireland. After the publication of this report, speaking to the BBC, Meta said that its subcontracted workers might sometimes review content captured by AI glasses, including videos and images, to improve the experience. Sweden's Privacy Protection Authority (IMY) reportedly called Meta to a meeting regarding the data processing of Meta AI glasses handled in Kenya, outside of the EU's jurisdiction. Örjan Rodhe, press officer at the Privacy Protection Authority, said that they are in contact with the Irish Data Protection Commission (DPC).
Legally, Meta markets its products under the Meta Platforms Technologies brand. These include wearable tech such as Meta's Virtual Reality (VR) products, like Oculus, and Meta's Ray-Ban AI glasses. While Meta's terms of service for AI glasses don't specify human review of user interactions with AI, the UK version states that it will review such interactions. It further clarifies that automated systems, human reviewers or third-party vendors can conduct this review. It also justifies the data storage capabilities required to act on users' submission reports. Without specifying the data retention period, the terms say, "When you use Worlds, the last few minutes of your and other users' most recent audio, video and other interactions will be recorded in case you want to report anything you've encountered. These recordings may be stored on our servers. We do not review these recordings unless you submit a report. If you don't submit a report, the recordings will be deleted on a rolling basis." 'Worlds' here refers to Meta's virtual environment.
[24]
Meta Ray-Ban Smart Glasses Face Privacy Alarm: What Users Should Do Right Now
The investigation found that a Kenya-based Meta subcontractor had access to intimate and disturbing videos taken by glasses wearers. This includes explicit content and even personal information like bank account details. The publication revealed that many of the videos were used for data labeling, which is a method to train AI models. It further confirmed that users weren't aware of the recorded material. Pavan Karthick M, a Threat Researcher at Bengaluru-based cybersecurity firm CloudSEK, said: "Especially for things like these, Ray-Ban glasses, there is much more data that goes into that, because you're wearing it all the time. It can hear what you speak, and it can also see what you see. They try to collect as much data from you as possible in terms of usage statistics and how the device is working." The revelations have sent shockwaves through users and regulators over data privacy.
Meta's AI-powered Ray-Ban smart glasses are under fire after a Swedish investigation revealed that contractors in Kenya reviewed sensitive user footage, including bathroom visits and intimate moments. The tech giant now faces a class-action lawsuit in the U.S. and scrutiny from the UK's Information Commissioner's Office over alleged privacy violations and false advertising.
Meta smart glasses are facing intense scrutiny after a Swedish investigation uncovered that human reviewers at a Kenya-based subcontractor watched sensitive user footage captured through the AI-powered wearables. The report by Svenska Dagbladet, Göteborgs-Posten, and Kenyan journalist Naipanoi Lepapa interviewed over 30 employees at Sama, a company providing data annotation for AI systems, revealing that workers routinely viewed videos showing people using bathrooms, having sex, and other intimate moments [1]. "I saw a video where a man puts the glasses on the bedside table and leaves the room. Shortly afterwards, his wife comes in and changes her clothes," one anonymous Sama employee reported [1].
The revelations prompted immediate legal action. A class-action lawsuit filed by the Clarkson Law Firm on behalf of plaintiffs Gina Bartone of New Jersey and Mateo Canu of California accuses Meta of violating privacy laws and false advertising [2]. The complaint challenges Meta's marketing claims that the Ray-Ban smart glasses were "designed for privacy, controlled by you" and "built for your privacy," arguing that customers were never informed that overseas workers would watch their intimate footage [2]. The scale of the issue is significant: EssilorLuxottica sold over 7 million units of the AI-powered glasses in 2025 alone, more than tripling combined sales from 2023 and 2024 [4].

Britain's Information Commissioner's Office (ICO) launched a UK privacy probe into Meta's practices, describing the allegations as "concerning" [5]. The regulator is writing to Meta to request information on how the company meets its obligations under UK data protection law and GDPR requirements [5]. The investigation raises critical questions about cross-border data flows, as companies transferring personal data to contractors outside the EU must ensure the information is protected through approved safeguards [5].

The Kenya-based contractors perform data annotation for AI systems, labeling objects, transcribing audio, and reviewing video content to train Meta's AI models to recognize real-world scenes and respond accurately to user queries [4]. "We see everything -- from living rooms to naked bodies," one worker revealed [4]. While Meta claims faces in annotation data are automatically blurred, Sama workers reported this "does not always work as intended," with some faces remaining visible [4]. Workers also noted seeing bank cards and personal paperwork inadvertently captured on camera [4].
Meta confirmed it "sometimes" shares user content with contractors to review "for the purpose of improving people's experience, as many other companies do" [1]. The company's privacy policy for wearables states that photos and videos are sent to Meta when users turn on cloud processing, interact with Meta AI, or upload media to Facebook or Instagram [1]. The terms and conditions note that "in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human)" [1]. However, many users may not have read or understood these extensive privacy policies [1].

Sama employees suggested that many Ray-Ban owners may be unaware their devices are recording, pointing to users inadvertently capturing bank cards or viewing sensitive content [1]. While the smart glasses flash a red light when recording, critics argue people may not notice it or may misinterpret its meaning [1]. The controversy fuels broader debates about normalizing constant surveillance in everyday life, from unintentional recording of bystanders to facial recognition analyzing faces and surroundings [3]. Melissa Ruzzi, director of AI at AppOmni, emphasized that "users in general do not read the user privacy and data security settings, and just click accept" [3]. Some private companies are already banning smart glasses at work to prevent covert recording, while European lawmakers question whether these devices violate privacy legislation [3]. The case highlights the challenge of regulating emerging technology in real time, particularly regarding wiretapping laws and consent requirements that vary by jurisdiction [3].
Summarized by Navi