Curated by THEOUTPOST
On Thu, 6 Mar, 12:03 AM UTC
2 Sources
[1]
Meta's UK Facial Recognition Approved, Critics Warn of Chinese-Style Surveillance
Meta confirmed its facial recognition tech will be launched in the U.K. and the EU. Credit: Pexels.

On Tuesday, March 4, Meta confirmed that its Facebook and Instagram facial recognition tech to spot scam celebrity adverts will be launched in the U.K. and the EU. The move, which followed successful engagement with regulators to allow the feature, has highlighted Britain's continued concerns surrounding widespread surveillance.

Meta Brings Facial Recognition to the U.K.

Meta's facial recognition technology, in testing since 2023, is designed to identify scams involving well-known figures. Once a suspect advert is identified, the firm's tools compare the imagery in the advertisement against real photos from the celebrity's official profiles. Meta said the technology combats the rising issue of celebrity deepfakes, where advertisements make it look like a celebrity is endorsing a product.

Meta also said that facial recognition will allow celebrities to regain access to their accounts if they are compromised. Users will be able to submit video selfies, which the technology will analyze to prove their identity.

"We're constantly working on new ways to keep people safe while keeping bad actors out, and the measures we're rolling out this week utilize facial recognition technology to help us crack down on fake celebrity scams," said David Agranovich, Facebook's Director for Global Threat Disruption.

Chinese Surveillance

China's approach to surveillance has evolved into one of the most comprehensive and technologically advanced systems globally. An estimated 700 million cameras are installed in the country, many of which are fitted with sophisticated facial recognition technology. A mix of internet surveillance, data analysis and extensive facial capture means Chinese individuals are constantly being watched. Surveillance data feeds into government platforms that utilize AI to analyze and predict potential threats, leading to actions such as detentions based on behavioral patterns.

Critics within the U.K. and U.S. have raised concerns about the adoption of this style of surveillance in the West. In 2023, the U.K. government was accused of using King Charles' Coronation to stage the country's largest-ever facial recognition operation. The event, which brought hundreds of thousands of people onto London's streets, was captured by a new suite of Chinese-made Hikvision AI cameras. These high-tech cameras, which are widely used across Britain, can scan tens of thousands of people's faces at a time.

"The use of such surveillance technology means that hundreds of thousands of people were not only part of a once-in-a-generation, historic event but part of a high-tech police line-up," Big Brother Watch Director Silkie Carlo wrote in The Times.

"Some are reassured that the police are not, yet, using the AI technology to record your identity if you're not of interest to them, and that the highly sensitive biometric data the algorithm takes from your face is soon discarded unless you're flagged," Carlo added.

"The notion that if one has 'nothing to hide' one has 'nothing to fear' from intrusion has never fit in a free country," Carlo concluded.

In 2023, amid rising tensions between China and the West, the U.K. announced it was removing Hikvision cameras and other Chinese-made surveillance equipment from sensitive areas. The government defines 'sensitive' sites as locations that routinely handle secret material, house officials with high-level security clearances, or are frequently used by ministers.
Facial Recognition Tech Growing Across U.K.

The U.K. has experienced a series of new developments surrounding facial recognition that have raised further concerns from privacy advocates. Last week, the government proposed a bill that would give law enforcement access to every driver's data held by the Driver and Vehicle Licensing Agency. This follows a previous Conservative government bill that proposed allowing police to run facial recognition checks on burglars and shoplifters caught on camera against other databases.

Madeleine Stone, the senior advocacy officer at Big Brother Watch, a non-profit that campaigns against the use of facial recognition in the U.K., said the bill would put innocent citizens at risk.

"It's disturbing to see the Government is reheating the Conservative's abandoned plans that most threaten privacy rights, including granting all police forces access to our driving license photos, opening the door to the creation of a massive facial recognition database," Stone told The Telegraph.

"Not only would this be an unprecedented breach of privacy, but would also put innocent citizens at risk of misidentifications and injustice," she added.

In 2024, the Metropolitan Police escalated its use of live facial recognition (LFR) systems. LFR was reportedly deployed 117 times between January and August, a substantial increase from 32 deployments over the previous four years. This surge led to the scanning of around 770,966 faces, resulting in over 360 arrests.

Despite rising criticism, a 2023 YouGov poll found that 57% of British adults backed law enforcement using live facial recognition technology in public spaces, with only 28% opposing it.

U.K. Recommends Facial Recognition to Social Media Companies

Social media platforms will also soon be urged by the British communications watchdog to deploy "highly accurate" facial recognition checks to stop underage children from accessing their platforms. In December, Ofcom announced it would recommend that social media companies use facial recognition technology to determine a user's age, in guidance due to be published in April.

Ofcom's head of online safety policy, Jon Higham, told The Telegraph that it "doesn't take a genius to work out that children are going to lie about their age."

"We will expect the technology to be highly accurate and effective. We're not going to let people use poor or substandard mechanisms to verify kids' ages," Higham said. "The sort of thing that we might look to in that space is some of this facial age estimation technology that we see companies bringing in now, which we think is really pretty good at determining who is a child and who is an adult."
[2]
Meta brings its anti-fraud facial recognition test to the UK after getting a thumbs up from regulators | TechCrunch
Last October, Meta dipped its toe into the world of facial recognition -- an area where it has had a tricky track record -- with an international test of two new tools: one to stop scams based on likenesses of famous people, and a second facial recognition feature to help people get back into compromised Facebook or Instagram accounts. Now, that test is expanding to one more notable country. After initially keeping its facial recognition test off in the United Kingdom, Meta on Wednesday began to roll out both of the tools there, too. And in other countries where the tools have already launched, the "celeb bait" protection is being extended to more people, the company said.

Meta said it got the green light in the U.K. "after engaging with regulators" in the country -- which itself has doubled down on embracing AI. No word yet on Europe, the other key region where Meta has yet to launch the facial recognition tool 'test'.

"In the coming weeks, public figures in the U.K. will start seeing in-app notifications letting them know they can now opt-in to receive the celeb-bait protection with facial recognition technology," a statement from the company said. Both this and the new "video selfie verification" that all users will be able to use will be optional tools, Meta said.

Meta has a long history of tapping user data to train its algorithms, but when it first rolled out the two new facial recognition tests in October 2024, the company said the features were not being used for anything other than the purposes described: fighting scam ads and user verification. "We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don't use it for any other purpose," wrote Monika Bickert, Meta's VP of content policy, in a blog post (which has since been updated with the detail about the U.K. expansion).

The developments, however, come at a time when Meta is going all-in on AI in its business. In addition to building its own large language models and using AI across its products, Meta is also reportedly working on a standalone AI app. It has also stepped up lobbying efforts around the technology, and given its two cents on what it deems to be risky AI applications -- such as those that can be weaponized (the implication being that what Meta builds is not risky, never!).

Given Meta's track record, a move to build tools that fix immediate issues on its apps is probably the best approach to gaining acceptance of any new facial recognition features. And this test fits that bill: as we've said before, Meta has for many years been accused of failing to stop scammers misappropriating famous people's faces in a bid to use its ad platform to push scams like dubious crypto investments on unsuspecting users.

Facial recognition has been one of the thornier areas for Meta over the years that it has worked with AI technology. Most recently, in 2024, the company agreed to pay $1.4 billion to settle a long-running lawsuit in Texas, where it was being sued over inappropriate biometric data collection related to its facial recognition technology. Before that, Facebook in 2021 shut down its decade-old facial recognition tool for photos, a feature that had faced multiple regulatory and legal problems across many jurisdictions. But interestingly, at the time it confirmed that it would retain one part of the technology: its DeepFace model, which the company said it would incorporate into future technology.
That could well be part of what is being built on with today's products.
Meta has received approval to launch facial recognition technology for anti-fraud measures in the UK, raising concerns about privacy and surveillance. The move comes amid growing use of facial recognition in law enforcement and debates over its implications.
Meta, the parent company of Facebook and Instagram, has confirmed the launch of its facial recognition technology in the United Kingdom and the European Union. This expansion comes after successful negotiations with regulators, marking a significant development in the use of AI for social media platforms 1.
The facial recognition tools introduced by Meta serve two primary purposes: detecting "celeb-bait" scam adverts by comparing the imagery in an ad against real photos from a public figure's official profiles, and helping users regain access to compromised accounts through an optional video selfie verification 1 2.
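Meta has not published implementation details, but the workflow both reports describe -- a one-time comparison of a face found in an ad against a public figure's official profile photos, with the derived facial data deleted immediately afterwards -- can be illustrated with a short, purely hypothetical Python sketch. Every name and number below is an assumption for illustration: embed_face stands in for an unspecified face-embedding model, and the similarity threshold is invented rather than a disclosed Meta parameter.

    # Illustrative sketch only, not Meta's code. `embed_face` is a hypothetical
    # face-embedding model supplied by the caller; the threshold is an assumption.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def looks_like_celeb_bait(ad_face_image, official_photos, embed_face, threshold=0.8) -> bool:
        """One-time comparison: embed the face cropped from the ad, compare it against
        embeddings of the public figure's official profile photos, and discard the
        ad-derived embedding afterwards, whether or not a match was found."""
        ad_embedding = embed_face(ad_face_image)  # hypothetical embedding call
        try:
            for photo in official_photos:
                if cosine_similarity(ad_embedding, embed_face(photo)) >= threshold:
                    return True
            return False
        finally:
            del ad_embedding  # mirrors the stated one-time-use, immediate-deletion policy

The same embed-and-compare pattern would presumably underpin the video selfie verification, with frames from the submitted selfie taking the place of the ad image.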
David Agranovich, Facebook's Director for Global Threat Disruption, stated, "We're constantly working on new ways to keep people safe while keeping bad actors out, and the measures we're rolling out this week utilize facial recognition technology to help us crack down on fake celebrity scams" 1.
Meta's expansion into the UK market followed engagement with local regulators. The company emphasized that both the celebrity protection feature and the video selfie verification for account recovery would be optional for users 2.
The introduction of facial recognition technology has sparked debates about privacy and potential surveillance overreach. Critics have drawn parallels to China's extensive surveillance system, which relies on an estimated 700 million cameras, many of them equipped with facial recognition capabilities 1.
Madeleine Stone from Big Brother Watch, a privacy advocacy group, expressed concerns about the UK government's proposed bill to grant law enforcement access to driver's license photos for facial recognition purposes. She stated, "Not only would this be an unprecedented breach of privacy, but would also put innocent citizens at risk of misidentifications and injustice" 1.
The UK has seen a significant increase in the use of live facial recognition (LFR) systems by law enforcement. The Metropolitan Police reportedly deployed LFR 117 times between January and August 2024, scanning approximately 770,966 faces and resulting in over 360 arrests 1.
Despite the concerns raised by privacy advocates, a 2023 YouGov poll found that 57% of British adults supported the use of live facial recognition technology in public spaces by law enforcement, with only 28% opposing it 1.
In a related development, Ofcom, the British communications watchdog, is set to recommend that social media companies use facial recognition technology to verify users' ages, with guidance expected to be published in April 1.
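Ofcom's proposal relies on facial age estimation rather than identity matching: a model predicts an age from a facial image and the platform gates access on that prediction. The sketch below is a purely illustrative Python example under assumed names and values; estimate_age is a hypothetical stand-in for whatever estimator a platform might use, and the age buffer and confidence threshold are invented.

    # Hypothetical age-gate sketch; `estimate_age` stands in for an unspecified facial
    # age estimation model, and the numeric thresholds are illustrative assumptions.
    def allow_access(selfie_image, estimate_age, min_age=18,
                     buffer_years=2, min_confidence=0.9) -> bool:
        estimated_age, confidence = estimate_age(selfie_image)
        if confidence < min_confidence:
            return False  # uncertain estimates would fall back to another verification route
        # Require a margin above the minimum age to absorb estimation error.
        return estimated_age >= min_age + buffer_years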
As facial recognition technology continues to evolve and expand its reach, the debate over its benefits and potential risks is likely to intensify, shaping future policies and regulations in the UK and beyond.
Meta announces the reintroduction of facial recognition technology on Facebook and Instagram to combat scams and aid account recovery, sparking debates on privacy and data security.
14 Sources
Meta Platforms announces plans to utilize public posts from Facebook and Instagram users in the UK for AI model training. The move raises questions about data privacy and user consent.
16 Sources
Meta receives clearance from the UK's Information Commissioner's Office to use public posts from UK users for AI model training, sparking discussions on data privacy and AI development.
2 Sources
Meta has finally rolled out its AI assistant across Europe, offering text-based chat features in multiple languages. The launch comes after regulatory hurdles and privacy concerns delayed its initial plans.
18 Sources
Meta Platforms has announced a delay in launching its latest AI models in the European Union, citing concerns over unclear regulations. This decision highlights the growing tension between technological innovation and regulatory compliance in the AI sector.
13 Sources