5 Sources
[1]
Meta cuts contractors who reported seeing Ray-Ban Meta users have sex
In February, numerous workers from a company that Meta contracted to perform data annotation for Ray-Ban Meta reported viewing sensitive, embarrassing, and seemingly private footage recorded by the smart glasses. About two months later, Meta ended its contract with the firm. According to a BBC report today, "less than two months" after a report from Swedish newspapers Svenska Dagbladet and Göteborgs-Posten and Kenya-based freelance journalist Naipanoi Lepapa came out featuring Sama workers complaining about watching explicit footage shot from Ray-Ban Metas, "Meta ended its contract with Sama." Sama is a Kenya-headquartered firm that Meta contracted to perform data annotation work, including working with video, image, and speech annotation for Meta's AI systems for Ray-Ban Metas. Sama claims that Meta's cancellation of the contract affected 1,108 workers. A Meta spokesperson told BBC that Meta "decided to end our work with Sama because they don't meet our standards." Ars Technica reached out to Meta asking how, specifically, Sama failed to meet Meta's expectations and will update this article if we hear back. Ars has also reached out to Sama. BBC reported that Sama workers believe Meta ended the contract because workers spoke out about seeing Ray-Ban Meta-shot footage of people performing personal acts, like changing their clothes, having sex, and using the toilet. In a statement shared with BBC, Sama said: "Sama has consistently met the operational, security, and quality standards required across our client engagements, including with Meta. At no point were we notified of any failure to meet those standards, and we stand firmly behind the quality and integrity of our work." In February's report, an anonymous Sama employee was quoted as saying, per a machine translation, they "are just expected to carry out the work" even when viewing private footage. 
After Sama workers told journalists that they had watched private footage that appeared to be recorded unbeknownst to glasses owners, Meta responded by halting business with Sama, a spokesperson said, per BBC's report today. "Last month, we paused our work with Sama while we looked into these claims," the spokesperson said. "We take them seriously. Photos and videos are private to users. Humans review AI content to improve product performance, for which we get clear user consent." BBC said that Meta has not responded to allegations that it cut ties with Sama because the workers spoke out.

Ray-Ban Meta scrutinized after Sama workers' claims

In response to the February report, Meta confirmed that it sometimes shares content that glasses owners provide to the Meta AI generative AI chatbot with contractors so that the contractors can review data with "the purpose of improving people's experience." The company said that such "data is first filtered to protect people's privacy," such as by blurring out faces in pictures. Ray-Ban Metas show a light when taking photos or recording a video; however, Sama workers said in February that some users sometimes appeared unaware that their glasses were recording. "People can record themselves in the wrong way and not even know what they are recording," an anonymous employee was quoted as saying, per a machine translation of the Swedish newspapers. Since Sama workers' claims became public, Ray-Ban Meta glasses have faced extra scrutiny. In March, a class-action complaint was filed in the US District Court for the Northern District of California, San Francisco Division [PDF] against Meta and Luxottica of America, a subsidiary of EssilorLuxottica, which is Ray-Ban's parent company.
The complaint accuses Meta of breaking state consumer protection laws and seeks damages, punitive penalties, and an injunction requiring Meta to make changes in order "to prevent or mitigate the risk of the consumer deception and violations of law." That same month, the UK Information Commissioner's Office (ICO) said it would send Meta a letter about the Sama workers' "concerning" reports. The data watchdog told BBC at the time that "devices processing personal data, including smart glasses, should put users in control and provide appropriate transparency." "Service providers must clearly explain what data is collected and how it is used," the ICO's statement said. The office of Kenya's Data Protection Commissioner said in March that it was investigating "privacy concerns raised in relation to the Ray-Ban Meta glasses and the processing of personally identifiable information for the training of Meta AI."
[2]
Meta contractor fires 1,100 AI trainers after they revealed Ray-Ban glasses recorded private and intimate footage
Recap: One of the earliest controversies to emerge from the generative AI boom centered on a troubling revelation: much of the infrastructure underpinning these systems was built on the labor of contractors in Kenya earning startlingly low wages. That issue has now collided with a separate set of concerns surrounding Meta's smart glasses, leaving the company in hot water after more than 1,000 contractors lost their jobs. Meta has quietly ended its relationship with a vendor that helped train its generative AI systems using footage captured through Ray-Ban smart glasses. The contractor, Sama, subsequently announced the termination of 1,108 employees - some of whom alleged they were punished after coming forward about the sensitive nature of the footage they were asked to review. The story broke in February, when workers at the Nairobi division of California-based outsourcing firm Sama told two Swedish newspapers that their assignments involved labeling footage from smart glasses that appeared to show subjects who had no idea they were being recorded. The glasses include an AI assistant that requires recording audio and video, some of which becomes AI training data. Human contractors scan and label material that AI struggles with. Meta states that its terms of service explain these details, and the glasses require explicit user permission to engage AI mode. Nonetheless, Sama employees reported that the glasses recorded banking information, private conversations, people naked in bathrooms, and intimate encounters. Following the report, Meta cancelled its contract, saying that Sama did not meet its standards. In response, Sama said it never received any indication that its work was substandard. Meanwhile, employees reported being forced to work without performing any tasks amid tightening security as Sama tried to uncover the whistleblowers.
Sama is the same firm that OpenAI contracted to train ChatGPT, leading up to its 2022 debut. To make the chatbot less toxic, Kenya-based workers were paid less than $2 a day to filter distressing content with documented effects on their mental health. That year, Meta and Sama also faced allegations of using misleading job listings amounting to human trafficking, and of laying off workers who attempted to unionize. The recording capabilities of smart glasses have drawn criticism since Google Glass met sharp public resistance years ago. Since Meta revived the category with more discreet hardware, wearers have been documented using the devices inside courtrooms, during police operations, and in classrooms while examinations were underway. Apple is reportedly testing up to four smart glass designs to compete with Meta.
[3]
Dispute over fate of Kenyan workers who saw Meta AI glasses films
Meta is under pressure to explain why it cancelled a major contract with a company it was using to train AI, shortly after some of its Kenya-based workers alleged they had to view graphic content captured by Meta smart glasses. In February, workers at the company, Sama, told two Swedish newspapers they had witnessed glasses users going to the toilet and having sex. Less than two months later, Meta ended its contract with Sama, which Sama said would result in 1,108 workers being made redundant. Meta says it's because Sama did not meet its standards, a criticism Sama rejects. A Kenyan workers' organisation alleges Meta's decision was caused by the staff speaking out. Meta has not addressed that allegation but told BBC News in a statement it had "decided to end our work with Sama because they don't meet our standards". Sama has defended its work. "Sama has consistently met the operational, security and quality standards required across our client engagements, including with Meta," it said in a statement. "At no point were we notified of any failure to meet those standards, and we stand firmly behind the quality and integrity of our work." In late February, Swedish newspapers Svenska Dagbladet (SvD) and Göteborgs-Posten (GP) published an investigation which included the accounts of unnamed workers who had been asked to review videos filmed by Meta's glasses. "We see everything - from living rooms to naked bodies," one worker reportedly said. At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI. It said this was for the purpose of improving the customer experience, and was a common practice among other companies. However, the revelations have prompted regulators to act. Shortly after the Swedish investigation, the UK data watchdog, the Information Commissioner's Office (ICO), wrote to Meta about what it called a "concerning" report.
The Office of the Data Protection Commissioner in Kenya also announced it was commencing an investigation into privacy concerns raised by the glasses. In a statement in response to news of the redundancies, a Meta spokesperson told the BBC, "last month, we paused our work with Sama while we looked into these claims. "We take them seriously. Photos and videos are private to users. Humans review AI content to improve product performance, for which we get clear user consent." In September, Meta unveiled a range of AI-powered glasses in partnership with brands Ray-Ban and Oakley. Features can include translating text, or responding to questions about what the user is looking at - particularly useful for those who are blind or partially sighted. However, as the devices have grown in popularity, so too have concerns about their misuse. The workers the Swedish newspapers spoke to were data annotators, teaching Meta's AI to interpret images by manually labelling content. The workers said they also reviewed transcripts of interactions with the AI to check it had answered questions adequately. In one instance, a worker told the newspapers, a man's glasses were left recording in a bedroom where they later filmed a woman, apparently the man's wife, undressing. Meta's glasses have a light in the corner of the frames that is turned on when the built-in camera is recording. But misuse of the glasses has also been linked to non-consensual recording of women in Kenya. Sama, a US-headquartered outsourcing business, which began as a non-profit organisation with the aim of increasing employment through the provision of tech jobs, is now an "ethical" B-corp. But this is not the first time a contract with Meta has soured. An earlier deal to moderate Facebook posts attracted criticism, alongside legal action by former employees - some of whom described being exposed to graphic, traumatising content. Sama later said it regretted taking the work.
Naftali Wambalo of the Africa Tech Workers Movement, who is a petitioner in the continuing legal action around that case, told the BBC he had also spoken with workers involved in the smart glasses contract. Wambalo believed Meta ended the work because it didn't want workers speaking out about humans sometimes reviewing content captured by the smart glasses. "What I think are the standards they are talking about here are standards of secrecy," he told BBC News. The BBC has asked Meta to respond to this point. The tech giant has previously said that users were made aware of the possibility of human review in its terms of service. Mercy Mutemi, a lawyer representing the petitioners, who is also executive director of campaign group the Oversight Lab, said Meta's statement should be a warning to the Kenyan government. "We've been told that this is our entry route into the AI ecosystem," she told the BBC. "This is a very flimsy foundation to build your entire industry on."
[4]
Meta ends Sama contract after Kenyan workers report seeing intimate footage from Ray-Ban smart glasses users
In February 2026, workers at Sama, a Nairobi-based outsourcing company contracted by Meta, told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten that they had been reviewing footage captured by users of Meta's Ray-Ban smart glasses. The footage included people having sex, going to the toilet, undressing, and handling bank details. The workers' job was to label the content so that Meta's AI systems could learn to interpret what the glasses see. Less than two months after the investigation was published, Meta ended its contract with Sama, and on 16 April the company issued formal redundancy notices to 1,108 employees. Meta said Sama "don't meet our standards." Sama rejected the characterisation and said it had received no notification of any failure. Naftali Wambalo, co-founder of the Africa Tech Workers Movement, alleged the real reason was simpler: Meta was retaliating against the workers who spoke out. Meta has not responded to that allegation. The people who trained the AI saw what the glasses see. Then they lost their jobs. Meta sold more than seven million pairs of Ray-Ban smart glasses in 2025, more than tripling its previous year's volume. The product line has since expanded to include prescription models designed to reach the billions of people who already buy corrective eyewear, converting what was a novelty into something closer to a default. The glasses record video, capture photos, stream audio, and route queries through Meta AI, which processes images and voice commands either on-device or in the cloud. A small LED on the frames illuminates when the camera is active, which Meta has described as a privacy safeguard. The light is designed for the people around the wearer, not for the wearer themselves. It tells strangers that they are being recorded. 
It does not tell them that the recording may be reviewed by a human being in a different country, sitting at a desk in Nairobi, labelling what they see so that an algorithm can learn the difference between a kitchen and a bedroom, a handshake and an embrace, a document and a face. Meta's privacy policy for the glasses states that users who opt into sharing data for AI training purposes allow their footage to be processed by the company's AI systems. The policy does not dwell on the human layer between the camera and the algorithm. AI training data does not label itself. Before a model can learn to interpret a scene, a person must first watch the scene and describe it. The Swedish investigation revealed what that process looks like in practice: workers in Kenya, employed by a third-party contractor, viewing the most private moments of strangers' lives, cataloguing them, and moving on to the next clip. The footage was not anonymised before review. The workers could see faces, bodies, and personal documents. They had no way to contact the people being filmed, no mechanism to flag footage they believed had been captured without consent, and no authority to refuse the work without risking their employment. Sama was founded in 2008 as a social enterprise with the stated mission of providing dignified digital work to people in low-income communities. The company has operations in Kenya, Uganda, and India, and has provided data annotation services to some of the largest technology companies in the world, including Google, Microsoft, and Meta. The contract with Meta for smart glasses data annotation was one of several Sama held with the company. Workers were tasked with labelling images and video captured by the glasses to train Meta's AI models, a process that required them to view, categorise, and describe whatever the cameras had recorded. 
The Swedish investigation, published in late February 2026, reported that workers described seeing users engaged in sexual activity, using the toilet, undressing, and displaying financial information on screen. The content was not exceptional. It was the ordinary residue of a camera worn on someone's face throughout the day, capturing whatever the wearer happened to be looking at. The workers told the journalists that the experience was distressing but that they had limited options: the work paid better than most available alternatives, and Sama's contracts typically included non-disclosure agreements that discouraged public discussion of the content they reviewed. When the Swedish publications broke the story, they gave the workers a voice they had not previously been permitted to use. On 16 April, less than seven weeks after the investigation was published, Sama notified 1,108 employees that their positions were being made redundant. The workers received six days' notice. Meta's statement attributed the termination to Sama's failure to meet its standards, but declined to specify which standards had been breached or when the assessment was made. Sama said it was "surprised and disappointed" by Meta's decision and that it had not been informed of any performance shortfalls prior to the termination. The timing was noted by labour advocates, regulators, and the workers themselves. Wambalo, whose organisation represents data workers across the continent, described Meta's reasoning as a cover for retaliation: the company, he said, was enforcing "standards of secrecy" rather than standards of quality. This is not the first time Sama's relationship with Meta has ended in controversy. Between 2019 and 2023, Sama employed content moderators in Nairobi who reviewed posts flagged as potentially violating Facebook's community standards. 
The work required moderators to view graphic violence, sexual abuse, hate speech, and other disturbing material for hours each day, often at wages as low as $1.50 per hour. A 2022 investigation by Time magazine found that 81 per cent of 144 Sama content moderators who underwent clinical assessment were diagnosed with "severe" or "extremely severe" symptoms of post-traumatic stress disorder. Former workers filed lawsuits in Kenya alleging that Sama and Meta had subjected them to conditions amounting to human trafficking and had interfered with their attempts to form a union. Sama later said publicly that it "regretted" taking on the content moderation work, and exited the business in 2023 to focus on what it described as less harmful data annotation services. The smart glasses contract was supposed to be different. Data annotation, labelling images and video to train AI, is generally considered less traumatic than content moderation, which requires workers to confront the worst material humans produce. But the Swedish investigation revealed that the distinction depends entirely on what the AI is being trained to see. When the AI is attached to a camera worn on someone's face throughout the day, the training data is their life. The workers who labelled Meta's smart glasses footage were not reviewing content that users had chosen to upload to a platform. They were reviewing content that a camera had passively captured, often without the knowledge or meaningful consent of the people being filmed. The nature of the work had changed, but the structural dynamic had not: a Silicon Valley company outsourcing the human cost of its AI ambitions to workers in East Africa who lack the bargaining power to set the terms of their own labour. The regulatory and legal response has been swift by the standards of technology enforcement. 
The UK Information Commissioner's Office wrote to Meta in early March, calling the Swedish report "concerning" and requesting information about how data captured by the glasses is processed, stored, and reviewed. The Office of the Data Protection Commissioner in Kenya announced an investigation into whether the glasses' data collection practices comply with Kenyan data protection law. In the United States, the Clarkson Law Firm filed a class action lawsuit on behalf of consumers, alleging that Meta engaged in false advertising by marketing the glasses as "designed for privacy, controlled by you" while routing user footage through a human review pipeline in a country with weaker data protection enforcement than the markets where the glasses are sold. The Electronic Frontier Foundation published an advisory titled "Think Twice Before Buying or Using Meta's Ray-Bans," warning that the glasses' AI features allow "all parts of their life to be recorded, and then reviewed, either by the AIs or by humans behind it." Privacy complaints against Meta for using personal data to train AI have been mounting across the European Union, where noyb filed 11 simultaneous complaints with national data protection authorities alleging that Meta's AI training practices violate the General Data Protection Regulation. The complaints focus on Meta's decision to process user data under a "legitimate interest" basis rather than seeking explicit consent. The smart glasses controversy adds a physical dimension to what had been a largely digital dispute: it is one thing to train AI on posts users wrote on Facebook, and another to train it on footage of people in their bedrooms, captured by a device and reviewed by a stranger. Meta has argued that European privacy regulations are "stifling" AI innovation and that pre-emptive regulation of "theoretical harms" will prevent European businesses from benefiting from AI advances. The harms documented by the Swedish investigation are not theoretical. 
They are workers in Nairobi who watched strangers undress and were then told their jobs no longer existed. Meta's AI ambitions require an enormous volume of human-labelled training data. The company is building an AI clone of Mark Zuckerberg for its employees, developing the Muse Spark model to power its platforms, and expanding the glasses' AI capabilities to include real-time visual understanding, object identification, and conversational assistance. Each of these products depends on the same pipeline: humans look at data, describe what they see, and their descriptions become the instructions that teach the model what the world looks like. When that pipeline involves a contractor, the humans become invisible. They do not appear in Meta's product announcements, earnings calls, or marketing materials. They appear only when something goes wrong, when a Swedish newspaper publishes an investigation, or when a contractor breach exposes the fragility of the training operation. Mercy Mutemi, the Kenyan human rights lawyer who leads the Oversight Lab, told the BBC that the pattern of outsourcing AI's human costs to East African workers represents a structural failure, not an aberration. "This is a very flimsy foundation to build your entire industry on," she said. The industry she is describing is worth trillions of dollars. The foundation she is describing is a workforce paid data annotation wages in Nairobi, given six days' notice when the contract ends, and prevented by non-disclosure agreements from telling anyone what they saw. Meta's smart glasses are designed for privacy, controlled by the user. The question the Swedish investigation answered is which user: the person wearing the glasses, or the person in Nairobi who watched the footage and lost their job for talking about it.
[5]
Meta's creepiest lawsuit in recent years will make you rethink its AI smart glasses
Over 1,100 Kenyan workers lost their jobs after blowing the whistle on Meta's smart glasses content. Meta's Ray-Ban smart glasses are at the center of yet another controversy. A Kenyan AI training firm called Sama, which Meta used to help train its AI, saw its contract abruptly terminated shortly after its workers came forward with deeply troubling allegations (via BBC). The workers claim they were repeatedly exposed to graphic content captured through Meta's glasses, and now more than a thousand of them have lost their jobs.

The disturbing footage behind Meta's AI training

Sama's workers were data annotators, a role that involves manually labeling video content to teach Meta's AI how to interpret images. They also reviewed transcripts of Meta AI conversations to make sure the chatbot was giving accurate responses. What they didn't sign up for, allegedly, was reviewing footage of people having sex or using the toilet, all filmed through Meta's glasses without users' knowledge. In one account, a man's glasses were left recording in a bedroom, capturing his wife undressing. Meta's glasses do have a small indicator light that turns on when the camera is active, though that clearly hasn't prevented misuse. The company admitted that contracted workers may occasionally review content shared with Meta AI, framing it as standard practice for improving user experience.

Why did Meta pull the contract?

Less than two months after those accounts surfaced, Meta terminated its agreement with Sama, leaving 1,108 workers without jobs. Sama says it met every standard Meta required and was never told otherwise. However, Meta disagrees, saying Sama fell short of its expectations. A Kenyan workers' organization believes the real reason was to silence staff who had gone public about humans reviewing smart glasses footage. The UK's Information Commissioner's Office called the situation "concerning" in a letter to Meta.
Additionally, Kenya's data protection authority opened a formal investigation. This isn't Sama's first difficult encounter with Meta. An earlier Facebook content moderation contract ended in similar controversy, with former employees describing exposure to traumatizing content. Sama later said it wished it had never taken that work on. With regulators now circling and a legal case ongoing, the pressure on Meta to explain its decision is only growing.

Meta's smart glasses have a much bigger privacy problem

Meta's smart glasses are moving deeper into controversy as reports suggest they could soon identify people in real time. That has intensified privacy and civil rights concerns around face recognition in everyday public spaces. Civil rights groups are pushing back against the idea, arguing that always-on identification could happen without clear consent. Apps like Godsend are emerging in response to that threat, warning people when nearby smart glasses might be secretly recording them. That shows how uneasy people have become about being filmed without knowing it. The technology is also showing up in less flattering ways, including reports of students using smart glasses to cheat in exams. That has added a new layer to the debate around misuse. That said, it's not all bad. The glasses have found genuinely good uses too, particularly in helping visually impaired people navigate spaces with assistance from strangers.
Meta abruptly terminated its contract with Kenya-based firm Sama, affecting 1,108 workers who had reported viewing sensitive footage captured by Ray-Ban smart glasses users. The workers alleged they saw people having sex, using toilets, and undressing—footage that appeared to be recorded without users' knowledge. The termination came less than two months after whistleblowers went public, prompting regulatory investigations and a class-action lawsuit against Meta.
Meta has ended its relationship with Sama, a Kenya-headquartered data annotation firm, less than two months after workers reported viewing intimate footage from Ray-Ban smart glasses users [1]. The abrupt termination affected 1,108 Kenyan AI trainers who were tasked with labeling video content to improve Meta's AI systems [2]. In February, workers at Sama told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten that they had witnessed glasses users going to the toilet, having sex, undressing, and handling banking information, all apparently recorded without the subjects' knowledge [3].

Source: Ars Technica

Meta stated it "decided to end our work with Sama because they don't meet our standards," though the company has not specified which standards were breached [1]. Sama firmly rejected this characterization, saying it "consistently met the operational, security, and quality standards required across our client engagements, including with Meta" and was never notified of any failure to meet those standards [3]. The workers received just six days' notice before their positions were made redundant [4].

Naftali Wambalo of the Africa Tech Workers Movement alleged that Meta's real motivation was retaliation against whistleblowers who spoke out about the disturbing content they were forced to review. "What I think are the standards they are talking about here are standards of secrecy," Wambalo told the BBC [3]. Workers reported being forced to continue working without performing any tasks amid tightening security as Sama attempted to uncover who had spoken to journalists [2].

The data annotation work required human reviewers to manually label AI training data before Meta's algorithms could learn to interpret scenes captured by the AI-powered smart glasses [4]. One anonymous employee told Swedish newspapers that "people can record themselves in the wrong way and not even know what they are recording" [1]. In one instance, a man's glasses were left recording in a bedroom where they later filmed a woman, apparently his wife, undressing [3].

Source: BBC

The revelations have sparked investigations by data protection authorities in multiple jurisdictions. The UK's Information Commissioner's Office (ICO) sent Meta a letter calling the reports "concerning" and emphasizing that "devices processing personal data, including smart glasses, should put users in control and provide appropriate transparency" [1]. The Office of the Data Protection Commissioner in Kenya launched an investigation into privacy concerns raised in relation to the Ray-Ban Meta glasses and the processing of personally identifiable information for training Meta AI [1].

In March, a class-action lawsuit was filed in the US District Court for the Northern District of California against Meta and Luxottica of America, a subsidiary of Ray-Ban's parent company EssilorLuxottica [1]. The complaint accuses Meta of breaking state consumer protection laws and seeks damages, punitive penalties, and an injunction requiring changes to prevent consumer deception and violations of law [1].

Meta confirmed that it sometimes shares content that glasses owners provide to Meta AI with contractors for review, stating this occurs "with the purpose of improving people's experience" and that "data is first filtered to protect people's privacy," such as by blurring faces in pictures [1]. However, the footage was not anonymized before review, and workers could see faces, bodies, and personal documents [4]. They had no way to contact the people being filmed, no mechanism to flag footage they believed had been captured without user consent, and no authority to refuse the work without risking their employment [4].

This isn't Sama's first contentious contract with Meta. An earlier deal involving content moderation for Facebook attracted criticism and legal action from former employees who described being exposed to graphic, traumatizing content [3]. Sama later said it regretted taking that work [3]. The company, which began as a non-profit organization aimed at increasing employment through tech jobs, now operates as an "ethical" B-corp [3].

Meta sold more than seven million pairs of Ray-Ban smart glasses in 2025, more than tripling its previous year's volume [4]. The product line has expanded to include prescription models designed to reach the billions of people who already buy corrective eyewear [4]. The glasses record video, capture photos, stream audio, and route queries through Meta AI, which processes images and voice commands either on-device or in the cloud [4].

Source: TechSpot

While Ray-Ban Meta glasses show a light when taking photos or recording video, workers reported that some users remained unaware their glasses were recording [1]. The small LED on the frames is designed for people around the wearer, not for the wearer themselves, and does not indicate that recordings may be reviewed by human workers in different countries [4]. Reports suggest the glasses could soon identify people in real time using face recognition, intensifying privacy and civil rights concerns [5]. Apps like Godsend are emerging to warn people when nearby smart glasses might be secretly recording them [5].

Mercy Mutemi, a lawyer representing petitioners in ongoing legal action and executive director of the Oversight Lab, warned that Meta's statement should serve as a cautionary signal to the Kenyan government. "We've been told that this is our entry route into the AI ecosystem," she told the BBC. "This is a very flimsy foundation to build your entire industry on" [3]. The case highlights ongoing questions about labor practices in AI development and the ethical implications of data collection practices that rely on low-wage workers in developing countries to review sensitive material without adequate protections or transparency about user consent.

Summarized by Navi