Meta AI glasses send intimate videos to human moderators, raising major privacy concerns

Reviewed by Nidhi Govil


Meta Ray-Ban smart glasses are transmitting sensitive user footage to offshore human data annotators in Kenya, according to a Swedish investigation. Workers report viewing people nude, using toilets, engaging in sexual activity, and exposing credit card information. The revelations highlight serious privacy concerns and a lack of transparency about how Meta processes AI-captured data, despite Europe's GDPR requirements.

Meta AI Glasses Transmit Sensitive User Footage to Offshore Workers

Meta AI glasses users may be unknowingly sharing intimate videos and sensitive financial information with human moderators located outside Europe, according to a joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten released last week [1]. Whistleblowers working as human data annotators for offshore contractors in Kenya told journalists they routinely view deeply private footage, including people nude, using the toilet, and engaging in sexual activity, as well as visible credit card numbers [2].

Source: 9to5Mac

The Meta Ray-Ban smart glasses, which sold over seven million pairs in 2025, a significant jump from the two million sold in 2023 and 2024 combined, allow users to record point-of-view footage and interact with a Meta AI assistant [3]. However, using these AI features requires agreeing to Meta's Terms of Service, which permit human review of private data captured through the device [1].

Privacy Concerns Mount Over Human Review of Private Data

Contractors working for Sama in Nairobi described uncomfortable working conditions in which they are expected to annotate visual data without questioning its content. "In some videos you can see someone going to the toilet, or getting undressed," one contractor told the Swedish newspapers. "I don't think they know, because if they knew they wouldn't be recording" [3]. Another data annotator recounted seeing "a video where a man puts the glasses on the bedside table and leaves the room. Shortly afterwards his wife comes in and changes her clothes" [3].

Source: Engadget

The sharing of intimate videos is part of data labeling for AI model training, a resource-intensive preprocessing step in which human moderators manually identify and annotate objects in footage to help AI models interpret visual data [1]. While former Meta employees say sensitive data is not supposed to be sent for human review, filtering it out relies on algorithmic identification that is not always successful [2].

Lack of Transparency and GDPR Compliance Questions

The investigation reveals a troubling lack of transparency about what footage Meta captures and when. When users ask the AI assistant a question, such as identifying a car's make and model, it remains unclear how long the glasses continue recording afterward [2]. Meta's privacy policy for wearable products also proved difficult to access, requiring reporters to "jump through some hoops" to view it [1].

Source: Futurism

The data collection practices raise questions about GDPR compliance, as Europe's data protection regulations require transparency about how personal data is processed, according to a data protection lawyer cited in the report [1]. Kleanthi Sardeli, a data protection lawyer at the non-profit None Of Your Business, warned that "once the material has been fed into the models, the user in practice loses control over how it is used" [3].

User Awareness and Content Moderation Exploitation

Meta's AI terms of use state that "in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human)" [2]. The document also warns users not to share information "you don't want the AIs to use and retain, such as information about sensitive topics" [3]. However, the nature of the footage being reviewed suggests many users are unaware of these terms.

One employee told reporters they felt forced to annotate the material or risk losing their job. "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone" [3]. This echoes broader concerns about exploitative content moderation work in developing countries, where workers review gruesome crime scene images and other disturbing material for tech companies [3].

After two months without a reply, Meta declined to comment directly on the story, stating only that "when live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy" [1]. One annotator summarized the situation bluntly: "You think that if they knew about the extent of the data collection, no one would dare to use the glasses" [3].
