3 Sources
[1]
Meta's AI display glasses reportedly share intimate videos with human moderators
Users of Meta's AI smart glasses in Europe may be unknowingly sharing intimate video and sensitive financial information with moderators outside of the bloc, according to a report from Sweden's Svenska Dagbladet released last week. Employees in Kenya doing AI "annotation" told the journalists that they've seen people nude, using the toilet and engaging in sexual activity, along with credit card numbers and other sensitive information. With Meta's Ray-Ban Display and other glasses with AI capabilities, users can record what they're looking at or get answers to questions via a Meta AI assistant. If a wearer wants to make use of that AI, though, they must agree to Meta's terms of service that allow any data captured to be reviewed by humans. That's because Meta's large language models (LLMs) often require people to annotate visual data so that the AI can learn to interpret it. This data can end up in places like Nairobi, Kenya, where it is often reviewed by underpaid workers. Such actions are subject to Europe's GDPR rules that require transparency about how personal data is processed, according to a data protection lawyer cited in the report. However, Svenska Dagbladet's reporters said they needed to jump through some hoops to see Meta's privacy policy for its wearable products. That policy states that either humans or automated systems may review sensitive data, and puts the onus on the user not to share sensitive information. Meta declined to comment directly on the story, saying only that "when live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy." To find out more, check out Svenska Dagbladet's detailed reporting on the subject.
[2]
Meta Ray-Bans send 'sensitive' videos to human data annotators
A new report says that video feeds from Meta Ray-Ban smart glasses are sent for review by human data annotators in Kenya, and that the footage includes sensitive content that is supposed to be excluded. Whistleblowers say that the video seen by third-party contractors used by Meta includes everything from people having sex to bank cards ... Meta Ray-Ban smart glasses can capture video in two ways. First, you can activate video recording manually in order to capture point-of-view footage. This can be a great hands-free way to record experiences like a roller coaster ride, as well as incidents that might occur while driving or cycling. Second, you can use the AI feature to ask questions about whatever you are looking at through the glasses. It's well understood that this AI processing is handled on Meta's servers, and therefore that video footage needs to be sent to them for analysis. However, a report by Swedish site SVD says that footage is sent to human data annotators whose job it is to manually identify objects seen in these clips. A worker from a third-party contractor based in Kenya says that this footage sometimes includes very sensitive content. The workers in Kenya say that it feels uncomfortable to go to work. They tell us about deeply private video clips, which appear to come straight out of Western homes, from people who use the glasses in their everyday lives. Several describe video material showing bathroom visits, sex and other intimate moments [...] "Someone may have been walking around with the glasses, or happened to be wearing them, and then the person's partner was in the bathroom, or they had just come out naked", an employee says. The circumstances in which these sensitive videos are captured are very unclear from the report. For example, there is reference to people wearing the glasses while having sex, which would appear to be a very deliberate use.
However, this would also seem to indicate that video footage is sent for review even when someone is manually recording rather than using Meta AI. There is definitely a lack of transparency about what footage is sent to Meta when using the AI function. For example, if you look at a car and ask Meta to identify the make and model, at what point does it cease sending footage? Is it five seconds later, 10 seconds, 30 seconds? Is it as soon as the question has been answered, or does it continue recording in case you ask further questions? The company's own terms of use are exceedingly vague. The terms state that "in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human)." SVD says that when it asked Meta for details, the company simply referred them back to the Terms of Service and Privacy Policy. Frustratingly, the report says that the site analyzed the network traffic to see what was being sent, but then provides little insight beyond this: "When we then analyse the network traffic from the app, we see that the phone has frequent contact with Meta servers in Luleå, Sweden, and Denmark." Former Meta employees say that sensitive data isn't supposed to be sent for human review, but this relies on algorithmic identification of that sensitive data, which isn't always successful. I use the glasses myself. The novelty of the AI feature wore off rather quickly, but they're a very convenient way to shoot hands-free POV footage. Although I'd never use them to shoot anything sensitive, I would be pretty outraged to discover that Meta is capturing manual video recordings. The report is frustratingly lacking in hard information, but I guess it serves as a reminder to use any AI service with caution when it comes to sensitive data of any kind - or any Meta product.
[3]
Meta Workers Say They're Seeing Disturbing Things Through Users' Smart Glasses
Can't-miss innovations from the bleeding edge of science and tech Meta's Ray Ban AI glasses have shot up in popularity in recent years, selling over seven million pairs in 2025 in a considerable jump over the two million it sold in 2023 and 2024 combined. While the smart glasses have scored big with consumers, allowing them to record first-person footage through an integrated camera and microphone array, and analyzing the world around them through Meta's AI model, the hardware has sparked a heated debate. Critics say enabling facial recognition in the glasses' software could have dangerous implications, especially considering the militarization of law enforcement and Meta's abysmal track record when it comes to ensuring the privacy of users. And regardless of the wearer's intention, much of the footage being recorded by the glasses is being sent to offshore contractors for data labeling, a widely-used preprocessing step in training new AI models in which human contractors are asked to review and annotate footage. It's a laborious and highly resource-intensive process that tech companies often gloss over when discussing the prowess of their latest AI models. The reality can be messy. Meta contractors based in Nairobi, Kenya, told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten in a recently published joint investigation that they're being told to review highly sensitive and intimate data. "In some videos you can see someone going to the toilet, or getting undressed," one contractor for a company called Sama said. "I don't think they know, because if they knew they wouldn't be recording." "I saw a video where a man puts the glasses on the bedside table and leaves the room," one data annotator told the newspapers. "Shortly afterwards his wife comes in and changes her clothes." Other footage included imagery of people's bank cards, users watching porn, or even filming entire "sex scenes."
An employee added that they felt forced to watch and annotate or else risk losing their job. "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work," the employee said. "You are not supposed to question it. If you start asking questions, you are gone." Buried in Meta's AI terms of use, the company reserves the right to "review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human)." The document also warned that users shouldn't share information that "you don't want the AIs to use and retain, such as information about sensitive topics." But given the kind of information data annotators are being asked to review, many users don't appear to be aware of that last piece of advice. Worst of all, owners of Meta's AI glasses simply don't have the option of using the AI features without agreeing to have their data shared with Meta's remote servers. And once the data is sent, it's often already too late. "Once the material has been fed into the models, the user in practice loses control over how it is used," Kleanthi Sardeli, a data protection lawyer at the non-profit None Of Your Business, told Svenska Dagbladet and Göteborgs-Posten. After two months of no replies, a Meta spokesperson referred the two Swedish newspapers to its terms of use and privacy policy. "When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy," the spokesperson said in a terse statement. It's not just Meta using offshore data annotators in countries like Kenya, Colombia, and India to train their AI models. As Agence France-Presse reported last year, workers have had to put up with reviewing often gruesome crime scene images, and even dead bodies.
The trend is reminiscent of social media content moderation, a practice that has relied on exploitative labor in the developing world for many years now. But with the advent of AI and wearable tech that can easily be used to record high-resolution footage simply by tapping a capacitive button next to your temple, the hidden human cost of data labeling has taken on a whole new meaning. It's a reality Meta would much prefer to bury in lengthy terms of service that likely only a handful will take the time to read. "You think that if they knew about the extent of the data collection, no one would dare to use the glasses," one annotator told the newspapers.
Meta Ray-Ban smart glasses are transmitting sensitive user footage to offshore human data annotators in Kenya, according to a Swedish investigation. Workers report viewing people nude, using toilets, engaging in sexual activity, and exposing credit card information. The revelations highlight serious privacy concerns and a lack of transparency about how Meta processes AI-captured data, despite Europe's GDPR requirements.
Meta AI glasses users may be unknowingly sharing intimate videos and sensitive financial information with human moderators located outside Europe, according to a joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten released last week [1]. Whistleblowers working as human data annotators for offshore contractors in Kenya told journalists they routinely view deeply private footage, including people nude, using the toilet, engaging in sexual activity, and credit card numbers [2].
Source: 9to5Mac
The Meta Ray-Ban smart glasses, which sold over seven million pairs in 2025 (a significant jump from the two million sold in 2023 and 2024 combined) allow users to record point-of-view footage and interact with a Meta AI assistant [3]. However, using these AI features requires agreeing to Meta's Terms of Service, which permit human review of private data captured through the device [1].

Contractors working for Sama in Nairobi described uncomfortable working conditions where they're expected to annotate visual data without questioning the content. "In some videos you can see someone going to the toilet, or getting undressed," one contractor told the Swedish newspapers. "I don't think they know, because if they knew they wouldn't be recording" [3]. Another data annotator recounted seeing "a video where a man puts the glasses on the bedside table and leaves the room. Shortly afterwards his wife comes in and changes her clothes" [3].
Source: Engadget
The practice of sharing intimate videos is part of data labeling for AI model training, a resource-intensive preprocessing step in which human moderators manually identify and annotate objects in footage to help large language models understand visual data [1]. While former Meta employees say sensitive data isn't supposed to be sent for human review, this relies on algorithmic identification that isn't always successful [2].

The investigation reveals a troubling lack of transparency about what footage Meta captures and when. When users ask the AI assistant a question (such as identifying a car's make and model) it remains unclear how long the glasses continue recording afterward [2]. Meta's privacy policy for wearable products proved difficult to access, requiring reporters to "jump through some hoops" to view it [1].
Source: Futurism
The data collection practices raise questions about GDPR compliance, as Europe's data protection regulations require transparency about how personal data is processed, according to a data protection lawyer cited in the report [1]. Kleanthi Sardeli, a data protection lawyer at the non-profit None Of Your Business, warned that "once the material has been fed into the models, the user in practice loses control over how it is used" [3].

Meta's AI terms of use state that "in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human)" [2]. The document also warns users not to share information "you don't want the AIs to use and retain, such as information about sensitive topics" [3]. However, the sensitive user footage being reviewed suggests many users lack awareness of these terms.

One employee told reporters they felt forced to annotate the material or risk losing their job. "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone" [3]. This echoes broader concerns about content moderation exploitation in developing countries, where workers review gruesome crime scene images and other disturbing material for tech companies [3].

After two months of no replies, Meta declined to comment directly on the story, simply stating that "when live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy" [1]. One annotator summarized the situation bluntly: "You think that if they knew about the extent of the data collection, no one would dare to use the glasses" [3].