Curated by THEOUTPOST
On Tue, 22 Oct, 8:02 AM UTC
14 Sources
[1]
Give Meta my face recognition data? I'd rather lose my Instagram account
Meta has just announced plans to bring back facial recognition technology to Facebook and Instagram. This time it's a security measure to help combat "celeb-bait" scams and restore access to compromised accounts. "We know security matters, and that includes being able to control your social media accounts and protect yourself from scams," wrote the Big Tech giant in a blog post published on Monday, October 21. Meta wants to use facial recognition technology to detect scammers who use images of public figures to carry out attacks. The company is proposing to compare images in adverts or on suspicious accounts with celebrities' legitimate photos. The facial recognition tech will also allow regular Facebook and Instagram users to regain access to their own accounts if locked or hijacked. They'll be able to verify their identity through video selfies, which can then be matched to their profile pictures. Handy, sure, but can I trust Meta with my biometrics? The Big Tech giant promises to take a "responsible approach" which includes encrypting video selfies for secure storage, deleting any facial data as soon as it's no longer needed, and not using these details for any other purpose. Yet, looking at Meta's track record of protecting, and misusing, its users' information, I'm concerned. Facebook's parent company has repeatedly breached the privacy, and trust, of its users in the past. The 2018 Cambridge Analytica scandal was probably the turning point. It shed light on how the personal information of up to 87 million Facebook users was misused for targeted political advertising, predominantly during Donald Trump's 2016 presidential campaign. The company implemented significant changes around user data protection after that, but Meta's privacy breaches have continued. Only this year, Meta admitted to having scraped all Australian Facebook posts since 2007 to train its AI model without giving users the option to opt out.
The company was also hit with a major fine (€91 million) in Europe for incorrectly storing social media account passwords in unencrypted databases. The year before, in January 2023, Meta was hit by an even bigger fine (€390 million) for serving personalized ads without the option to opt out and for illicit data handling practices. It's certainly enough to make me skeptical of Meta's good intentions and big promises. It's also worth noting that Meta itself decided to shut down its previous facial recognition system in 2021 over privacy concerns, promising to delete all the "faceprints" collected. Now, three years later, it's back on the agenda. "We want to help protect people and their accounts," wrote Meta in its official announcement, "and while the adversarial nature of this space means we won't always get it right, we believe that facial recognition technology can help us be faster, more accurate, and more effective. We'll continue to discuss our ongoing investments in this area with regulators, policymakers, and other experts." We won't always get it right - that's not very reassuring. So, something wrong is certain to happen at some point? If that's the case, no thanks, Meta, I don't trust you with my biometric data. I'd rather lose my Facebook or Instagram account. What's the benefit of solving one problem by creating an even bigger one? What's certain is that Mark Zuckerberg doesn't need to lose any sleep over EU fines for the time being. Meta's facial recognition tests aren't running globally: the company has excluded the UK and EU markets, where GDPR imposes stringent privacy rules around personal information. Elsewhere, Meta's testing will eventually show whether the new security feature is the right solution to the growing issue of social media scams, or whether it becomes yet another privacy nightmare. Well, in the name of my privacy, I'm not sure it's worth the trouble of finding out.
[2]
Facebook and Instagram bring back facial recognition to 'protect people'
Facebook and Instagram have a problem. Well, they have many, many problems, but one of the ones they feel like addressing is "celeb-bait ads and impersonation." According to a new post from parent company Meta, the way they're going to try solving this is through the use of facial recognition technology. Again. Woo. In the lengthy post, Meta explains that the biggest impact of these new tools will be an expanded effort to stop scam accounts from impersonating celebrities. If you've used Facebook in the last year or so, you've probably encountered friend suggestions for attractive celebrities, which are obvious fakes that can be identified by their paparazzi photos and deliberate misspellings of their names. Now when Meta spots these impersonations, which are typically shilling spam or attempting to phish info out of unwary users, it'll employ facial recognition to compare them to the relevant celebrity's (real) Facebook or Instagram account. It'll be expanded to advertising, too, though the system will be automated (like almost everything on social media). Fine. That seems like a worthwhile application of this problematic tech. But what if someone manages to hack your legitimate Facebook or Instagram account? Or you forget your password and lose access to your email account? Currently, you need to plead your case to Meta's support team by uploading some form of official ID. But the company is now testing a system where you can upload a video selfie instead, with facial recognition comparing you to your stored photos. The demonstration shows the user tilting their head at various angles to give the tool access to their entire face for scanning. Meta says that these videos will be "encrypted and stored securely," never posted publicly, and immediately deleted, along with any facial recognition data generated, once the process is completed. This isn't Facebook's first brush with facial recognition tech.
It previously used a more basic system to automatically tag users in photos and videos, but shut down the opt-in tool in 2021 after privacy concerns were raised. This new implementation of the system is far less broad and more pointedly targeted at safety. That said, I wouldn't blame users for being skeptical of pretty much anything Meta and Facebook do at this point. After trying for years to disengage with Facebook's systems, I reluctantly returned earlier this year to try and engage with some local communities... only to be immediately flooded by a mountain of AI-generated garbage. A never-ending deluge of AI-generated "cozy cabins," Dodge Power Wagons, and pets and people just on the other side of the uncanny valley assaulted my news feed on a daily basis. I tried in vain to report these accounts... but with millions and millions of likes and shares, the majority of which I'm guessing were less than human, they just kept coming. So forgive me if I'm less than confident in Meta's ability to stem this tide of social media impersonation, let alone its actual intention to protect users. Maybe once the company shows as much interest in keeping AI bullshit off my app screen I'll have a little more faith in its dedication to authenticity. Since "deepfakes" are also easier to implement now, I have serious doubts about any automated system's capability to reliably distinguish between real people and scammers on a video.
[3]
Meta brings back face scanning to combat scams and account hacking
Facebook and Instagram are testing new facial recognition tools that could help users quickly restore compromised accounts and combat fake celebrity-endorsed scams. Meta announced its plan to roll out experimental features that can scan a user's face to verify their identity by comparing it against profile pictures on Facebook and Instagram. The first use of these tools aims to protect both celebrities and everyday people from so-called "celeb-bait" ads that impersonate notable figures to trick users into visiting scam websites. Meta currently uses automated technology like machine learning to detect content that violates its policies but says celeb-bait can be difficult to distinguish from legitimate ads. "If our systems suspect that an ad may be a scam that contains the image of a public figure at risk for celeb-bait, we will try to use facial recognition technology to compare faces in the ad to the public figure's Facebook and Instagram profile pictures," Meta said in its announcement. "If we confirm a match and determine the ad is a scam, we'll block it." Celebrities will need a Facebook or Instagram profile to use the new facial recognition tools, but the tools have shown "promising results" for detection speed and efficacy in early testing with a small group of public figures, according to Meta. More celebrities who have been impacted by celeb-bait content will be automatically enrolled in the coming weeks, and will have the option to opt out if they choose. Meta's facial recognition tools will also eventually allow Facebook and Instagram users to regain access to their locked accounts by submitting a video selfie, similar to authentication systems like Apple's Face ID. It's not clear when this feature will be available, but Nick Clegg, Meta's president of global affairs, says it's "starting small" and plans to "roll out these protections more widely in the months ahead." Meta says that uploaded selfie videos will be encrypted and "stored securely."
The company also says that facial data used for comparisons is immediately deleted and isn't used for "any other purpose" -- though it's worth noting that Meta trains its AI models on almost everything that's publicly posted to its platforms. Meta previously integrated facial recognition tech into Facebook to identify and tag users in photographs and videos. That feature was discontinued in 2021 after a lengthy privacy battle. The company now says its new tools have been vetted for security and privacy, and are being discussed with regulators and policymakers.
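Meta has not published implementation details, but the flow described above -- flag a suspect ad, compare the face in it against the public figure's profile pictures, block only on a confirmed match plus a scam determination, then delete the facial data either way -- can be sketched roughly as follows. All names, the cosine-similarity metric, and the threshold here are hypothetical illustrations, not Meta's actual system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def screen_suspect_ad(ad_embedding, profile_embeddings, flagged_as_scam, threshold=0.85):
    """Block an ad only when a face match AND a scam signal coincide.

    The embedding extracted from the ad is discarded afterwards regardless
    of the outcome, mirroring Meta's stated one-time-comparison policy.
    """
    try:
        matched = any(
            cosine_similarity(ad_embedding, ref) >= threshold
            for ref in profile_embeddings
        )
        return "block" if (matched and flagged_as_scam) else "allow"
    finally:
        ad_embedding.clear()  # delete facial data whether or not it matched
```

Production face-matching systems use learned embeddings and calibrated decision thresholds rather than a fixed cosine cutoff; the point of the sketch is only the decision logic and the delete-after-comparison lifecycle.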
[4]
Meta tests facial recognition for spotting 'celeb-bait' ad scams and easier account recovery
Meta is expanding tests of facial recognition as an anti-scam measure to combat celebrity scam ads and more broadly, the Facebook owner announced Monday. Monika Bickert, Meta's VP of content policy, wrote in a blog post that some of the tests aim to bolster its existing anti-scam measures, such as the automated scans (using machine learning classifiers) run as part of its ad review system, to make it harder for fraudsters to fly under its radar and dupe Facebook and Instagram users into clicking on bogus ads. "Scammers often try to use images of public figures, such as content creators or celebrities, to bait people into engaging with ads that lead to scam websites where they are asked to share personal information or send money. This scheme, commonly called 'celeb-bait,' violates our policies and is bad for people that use our products," she wrote. "Of course, celebrities are featured in many legitimate ads. But because celeb-bait ads are often designed to look real, it's not always easy to detect them." The tests appear to be using facial recognition as a back-stop for checking ads flagged as suspect by existing Meta systems when they contain the image of a public figure at risk of so-called "celeb-bait." "We will try to use facial recognition technology to compare faces in the ad against the public figure's Facebook and Instagram profile pictures," Bickert wrote. "If we confirm a match and that the ad is a scam, we'll block it." Meta claims the feature is not being used for any purpose other than fighting scam ads. "We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don't use it for any other purpose," she said. The company said early tests of the approach -- with "a small group of celebrities and public figures" (it did not specify whom) -- have shown "promising" results in improving the speed and efficacy of detecting and enforcing against this type of scam.
Meta also told TechCrunch it thinks the use of facial recognition would be effective for detecting deepfake scam ads, where generative AI has been used to produce imagery of famous people. The social media giant has been accused for many years of failing to stop scammers misappropriating famous people's faces in a bid to use its ad platform to shill scams like dubious crypto investments to unsuspecting users. So it's interesting timing for Meta to be pushing facial recognition-based anti-fraud measures for this problem now, at a time when the company is simultaneously trying to grab as much user data as it can to train its commercial AI models (as part of the wider industry-wide scramble to build out generative AI tools). In the coming weeks Meta said it will start displaying in-app notifications to a larger group of public figures who've been hit by celeb-bait -- letting them know they're being enrolled in the system. "Public figures enrolled in this protection can opt-out in their Accounts Center anytime," Bickert noted. Meta is also testing use of facial recognition for spotting celebrity imposter accounts -- for example, where scammers seek to impersonate public figures on the platform in order to expand their opportunities for fraud -- again by using AI to compare profile pictures on a suspicious account against a public figure's Facebook and Instagram profile pictures. "We hope to test this and other new approaches soon," Bickert added.

Video selfies plus AI for account unlocking

Additionally, Meta has announced that it's trialling the use of facial recognition applied to video selfies to enable faster account unlocking for people who have been locked out of their Facebook/Instagram accounts after they've been taken over by scammers (such as if a person were tricked into handing over their passwords).
This looks intended to appeal to users by promoting the apparent utility of facial recognition tech for identity verification -- with Meta implying it will be a quicker and easier way to regain account access than uploading an image of a government-issued ID (which is the usual route for unlocking access now). "Video selfie verification expands on the options for people to regain account access, only takes a minute to complete and is the easiest way for people to verify their identity," Bickert said. "While we know hackers will keep trying to exploit account recovery tools, this verification method will ultimately be harder for hackers to abuse than traditional document-based identity verification." The facial recognition-based video selfie identification method Meta is testing will require the user to upload a video selfie, which will then be processed using facial recognition technology to compare the video against profile pictures on the account they're trying to access. Meta claims the method is similar to identity verification used to unlock a phone or access other apps, such as Apple's Face ID on the iPhone. "As soon as someone uploads a video selfie, it will be encrypted and stored securely," Bickert added. "It will never be visible on their profile, to friends, or to other people on Facebook or Instagram. We immediately delete any facial data generated after this comparison regardless of whether there's a match or not." Conditioning users to upload and store a video selfie for ID verification could be one way for Meta to expand its offerings in the digital identity space -- if enough users opt in to uploading their biometrics.

No tests in UK or EU -- for now

All these tests of facial recognition are being run globally, per Meta. However, the company noted, rather conspicuously, that tests are not currently taking place in the U.K. or the European Union -- where comprehensive data protection regulations apply.
(In the specific case of biometrics for ID verification, the bloc's data protection framework demands explicit consent from the individuals concerned for such a use case.) Given this, Meta's tests appear to fit within a wider PR strategy it has mounted in Europe in recent months to try to pressure local lawmakers to dilute citizens' privacy protections. This time, the cause it's invoking to press for unfettered data processing for AI is not a (self-serving) notion of data diversity or claims of lost economic growth but the more straightforward goal of combating scammers. "We are engaging with the U.K. regulator, policymakers and other experts while testing moves forward," Meta spokesman Andrew Devoy told TechCrunch. "We'll continue to seek feedback from experts and make adjustments as the features evolve." However, while use of facial recognition for a narrow security purpose might be acceptable to some -- and, indeed, might be possible for Meta to undertake under existing data protection rules -- using people's data to train commercial AI models is a whole other kettle of fish.
[5]
What to Know About Meta's Facial Recognition Plans
Facebook parent company Meta Platforms Inc. will start using facial recognition technology to crack down on scams that use pictures of celebrities to look more legitimate, a strategy referred to as "celeb-bait ads." Scammers use images of famous people to entice users into clicking on ads that lead them to shady websites, which are designed to steal their personal information or request money. Meta will start using facial recognition technology to weed out these ads by comparing the images in the post with the images from a celebrity's Facebook or Instagram account. "If we confirm a match and that the ad is a scam, we'll block it," Meta wrote in a blog post. Meta did not disclose how common this type of scam is across its services. With nearly 3.3 billion daily active users across all of its apps, Meta relies on artificial intelligence to enforce many of its content rules and guidelines. That has enabled Meta to better handle the deluge of daily reports about spam and other content that breaks the rules. It has also led to problems in the past when legitimate accounts have been unintentionally suspended or blocked due to automated errors. Meta says it will also start using facial recognition technology to better assist users who get locked out of their accounts. As part of a new test, some users can submit a video selfie when they've been locked out of their accounts. Meta will then compare the video to the photos on the account to see if there is a match. Meta has previously asked locked-out users to submit other forms of identity verification, like an ID card or official certificate, but says that the video selfie option would only take a minute to complete. Meta will "immediately delete any facial data generated after this comparison regardless of whether there's a match or not," the company wrote in a blog post. The social networking giant has a complicated history with facial recognition technology.
It previously used facial recognition to identify users in uploaded photos as a way to encourage people to tag their friends and increase connections. Meta was later sued by multiple U.S. states for profiting off this technology without user consent, and in 2024 was ordered to pay the state of Texas $1.4 billion as part of the claim. Several years earlier, it agreed to pay $650 million in a separate legal suit filed in Illinois. The company will not run this video selfie test in Illinois or Texas, according to Monika Bickert, Meta's vice president of content policy.
[6]
Facebook owner Meta restarts facial recognition tech in 'celeb-bait' crackdown
Meta CEO Mark Zuckerberg holds a smartphone as he makes a keynote speech at the Meta Connect annual event at the company's headquarters in Menlo Park, California, U.S., September 25, 2024.

Three years after Meta shut down facial recognition software on Facebook amid a groundswell of privacy and regulator pushback, the social media giant said on Tuesday it is testing the service again as part of a crackdown on "celeb bait" scams. Meta said it will enroll about 50,000 public figures in a trial which involves automatically comparing their Facebook profile photos with images used in suspected scam advertisements. If the images match and Meta believes the ads are scams, it will block them. The celebrities will be notified of their enrollment and can opt out if they do not want to participate, the company said. The company plans to roll out the trial globally from December, excluding some large jurisdictions where it does not have regulatory clearance, such as Britain, the European Union, South Korea and the U.S. states of Texas and Illinois, it added. Monika Bickert, Meta's vice president of content policy, said in a briefing with journalists that the company was targeting public figures whose likenesses it had identified as having been used in scam ads. "The idea here is: roll out as much protection as we can for them. They can opt out of it if they want to, but we want to be able to make this protection available to them and easy for them," Bickert said. The test shows a company trying to thread the needle of using potentially invasive technology to address regulator concerns about rising numbers of scams while minimizing complaints about its handling of user data, which have followed social media companies for years. When Meta shuttered its facial recognition system in 2021, deleting the face scan data of one billion users, it cited "growing societal concerns."
In August this year, the company was ordered to pay Texas $1.4 billion to settle a state lawsuit accusing it of collecting biometric data illegally. At the same time, Meta faces lawsuits accusing it of failing to do enough to stop celeb bait scams, which use images of famous people, often generated by artificial intelligence, to trick users into giving money to non-existent investment schemes. Under the new trial, the company said it will immediately delete any face data generated by comparisons with suspected advertisements regardless of whether it detected a scam. The tool being tested was put through Meta's "robust privacy and risk review process" internally, as well as discussed with regulators, policymakers and privacy experts externally before tests began, Bickert said. Meta said it also plans to test using facial recognition data to let non-celebrity users of Facebook and another one of its platforms, Instagram, regain access to accounts that have been compromised by a hacker or locked due to forgetting a password.
[8]
Meta's Facial Recognition Technology -- Scam Prevention or AI Surveillance Tool?
Concerns remain about the privacy and security of facial recognition technology. Meta is deploying facial recognition technology to fight the infestation of celebrity deepfake scam advertisements on Facebook and Instagram. The move comes as the social media giant battles criticism over the rising number of AI-powered deepfakes on its platform. However, experts remain skeptical about the safety of the firm's development of facial recognition technology.

Meta's Facial Recognition

Beginning in December 2024, Meta's new trial will enroll about 50,000 celebrities to detect advertisements impersonating them. When the technology finds an advertisement featuring one of the celebrities, it will compare the video or image to their Facebook and Instagram profile pictures. Meta will block the advertisement if the two pieces of content match and it is determined to be a scam. Meta's motivation is to boost the speed at which it can tackle the growing number of AI-powered scams on its platforms. Monika Bickert, Meta's VP of content policy, wrote in a blog post that early testing with a small group of celebrities and public figures had shown promising results in increasing the speed at which it could detect and enforce against deepfake scams. In an independent project designed to show how easily bad actors can misuse facial recognition, two Harvard students turned Meta's smart glasses into a tool for undercover monitoring. Anh Phu Nguyen and Caine Ardayfio used a pair of Ray-Ban Meta smart glasses and public databases to identify passersby in real time. Keiichi Nakata, a professor of social informatics at Henley Business School, told CCN that the way facial recognition data is collected and stored remains a major issue. "Facial recognition technology uses personal data that cannot be altered - compared to PIN numbers for personal identification that can be more easily changed," Nakata said.
"Ethical concerns include how the data is collected, managed, and stored, and how these are used - for example, are they used in the way that is acceptable to users and in a responsible manner?"

Meta's Scam Problem

The social media giant faces increasing pressure from lawmakers to tackle the rising number of scams on its platforms. High-profile celebrities such as Brad Pitt, Cristiano Ronaldo, and Taylor Swift have appeared in fabricated endorsements across Facebook, Instagram, and Messenger. In 2021, Meta announced it would significantly scale back its use of facial recognition technology due to mounting criticism and concerns over privacy and ethical issues. "There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate," Meta said in a statement. The decision came after years of backlash from privacy advocates and lawmakers over the potential for misuse of the technology.
[10]
Meta is testing facial recognition to fight celebrity scams
The days of Tom Hanks having to issue Instagram warnings about fake AI videos of himself may hopefully be coming to an end. Facebook and Instagram owner Meta is now working on facial recognition techniques to try to curb the rise in "celeb-bait scams", as well as help users recover their accounts more quickly. "We're testing a new way of detecting celeb-bait scams," wrote Meta in a blog post published on Monday. "If our systems suspect that an ad may be a scam that contains the image of a public figure at risk for celeb-bait, we will try to use facial recognition technology to compare faces in the ad to the public figure's Facebook and Instagram profile pictures. If we confirm a match and determine the ad is a scam, we'll block it. We immediately delete any facial data generated from ads for this one-time comparison, regardless of whether our system finds a match, and we don't use it for any other purpose." The post went on to say that Meta has had success in early testing phases with a small group of celebrities. "In the coming weeks, we'll start showing in-app notifications to a larger group of public figures who've been impacted by celeb-bait letting them know we're enrolling them in this protection. Public figures enrolled in this protection can opt-out in their Accounts Center anytime." As well as combating scams, Meta confirmed it's also testing out video selfies as a means of aiding access recovery for anyone with a compromised account, not just famous people. "The user will upload a video selfie and we'll use facial recognition technology to compare the selfie to the profile pictures on the account they're trying to access," wrote Meta. "This is similar to identity verification tools you might already use to unlock your phone or access other apps." The company confirmed that facial recognition data is immediately deleted after the comparison is made.
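The selfie-verification lifecycle Meta describes -- compare the uploaded selfie against the profile pictures on the account being recovered, then delete the derived facial data whether or not it matched -- could look roughly like the sketch below. The function name, the Euclidean-distance comparison, and the threshold are illustrative assumptions, not Meta's implementation:

```python
import math

def verify_video_selfie(selfie_embedding, profile_embeddings, max_distance=0.6):
    """Return True when the selfie embedding is close enough to any
    profile-picture embedding on the account being recovered.

    The selfie-derived facial data is deleted after the comparison,
    matching the stated policy of discarding it regardless of outcome.
    """
    try:
        best = min(
            math.dist(selfie_embedding, ref) for ref in profile_embeddings
        )
        return best <= max_distance
    finally:
        selfie_embedding.clear()  # delete facial data regardless of outcome
```

In a real system the encrypted video, not a raw embedding, would be what is stored transiently, and liveness checks (the head-tilting in Meta's demonstration) would run before any comparison; this sketch only captures the match-then-delete contract.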
[11]
Meta Testing FRT To Find Scammers And Verify User Identity
Meta is testing the use of Facial Recognition Technology (FRT) to detect online scams and help users regain access to compromised accounts, the company revealed in a blog post on October 21. The tech would be used mostly against accounts and advertisements that impersonate public figures, and to confirm the identity of users who want to log back into their accounts. Meta also promised to delete any facial data generated during this process and not use it for any other purpose. Preventing Celeb-Bait Ads: The company describes the practice of 'celeb-bait,' where scammers use images of celebrities to bait people into engaging with ads that lead to scam websites that ask them to share personal information or send money. To resolve this problem, Meta is experimenting with new systems that use facial recognition to compare a celebrity's images in a suspected celeb-bait ad to their profile pictures on Facebook and Instagram. "If we confirm a match and determine the ad is a scam, we'll block it," said the post. Meta will be automatically including celebrities and public figures in its program, with an opt-out option available. The blog post also described a pattern of scammers impersonating public figures through fake accounts and duping people into sending them money. "For example, scammers may claim that a celebrity has endorsed a specific investment offering or ask for sensitive personal information in exchange for a free giveaway," said the post. The company is planning to use FRT to detect similar fake accounts as well. Regaining Access To Compromised Accounts: Meta is also testing video selfies as a way for people to confirm their identity and regain access to compromised accounts.
"The user will upload a video selfie and we'll use facial recognition technology to compare the selfie to the profile pictures on the account they're trying to access," said the post. Meta also assured users that it will encrypt and store the video selfie securely, and delete any facial data generated. Background: This marks a return to FRT for Meta, albeit in a limited sense. The company (then Facebook) used to offer an opt-in "faceprint" program, which would scan a user's face and search for other photos of them on the platform. It was also able to detect fake accounts that used somebody else's photograph. Nearly a third of Facebook's users, or around 1 billion people, had signed up for the program. However, Meta shut down the service in 2021 and deleted the collected facial data of users, citing growing societal concerns about the use of FRT and the lack of a clear regulatory framework. Meta is already facing pressure from various public figures regarding deepfakes and impersonation scams. In June this year, Australian billionaire Andrew Forrest sued the platform over advertisements using his image to promote fake cryptocurrency and other fraudulent investments. The Australian Competition and Consumer Commission (ACCC) has also taken Meta to court over misleading and deceptive ads that feature images of public figures. Potential Problems: The blog post does raise a few eyebrows, however. When it comes to celeb-bait ads, FRT can very well tell when a person's image is present in an ad, but it cannot actually determine whether an ad is a scam or not. Using somebody's image without permission may be unlawful, but it does not in itself make an ad a scam. For accounts that impersonate celebrities to scam people, Meta's new FRT tech is somewhat redundant. Most celebrity accounts already have the verified 'blue tick,' which means that any other account bearing images of the person is most likely fake.
It is debatable whether facial recognition adds anything here. Furthermore, Meta's video selfies, which are supposed to help people confirm their identity and regain access to compromised accounts, have their own set of loopholes. What happens when a hacker replaces a user's profile picture with their own? What if you never had a profile picture, or if your face wasn't clearly visible? FRT would be helpless here. The same applies to people who have not updated their pictures.
[12]
Meta to Use Facial Recognition to Fight Fake Celebrity Scams
(Bloomberg) -- Facebook parent company Meta Platforms Inc. will start using facial recognition technology to crack down on scams that use pictures of celebrities to look more legitimate, a strategy referred to as "celeb-bait ads." Scammers use images of famous people to entice users into clicking on ads that lead them to shady websites, which are designed to steal their personal information or request money. Meta will start using facial recognition technology to weed out these ads by comparing the images in the post with the images from a celebrity's Facebook or Instagram account. "If we confirm a match and that the ad is a scam, we'll block it," Meta wrote in a blog post. Meta did not disclose how common this type of scam is across its services. With nearly 3.3 billion daily active users across all of its apps, Meta relies on artificial intelligence to enforce many of its content rules and guidelines. That has enabled Meta to better handle the deluge of daily reports about spam and other content that breaks the rules. It has also led to problems in the past when legitimate accounts have been unintentionally suspended or blocked due to automated errors. Meta says it will also start using facial recognition technology to better assist users who get locked out of their accounts. As part of a new test, some users can submit a video selfie when they've been locked out of their accounts. Meta will then compare the video to the photos on the account to see if there is a match. Meta has previously asked locked-out users to submit other forms of identity verification, like an ID card or official certificate, but says that the video selfie option would only take a minute to complete. Meta will "immediately delete any facial data generated after this comparison regardless of whether there's a match or not," the company wrote in a blog. The social networking giant has a complicated history with facial recognition technology. 
It previously used facial recognition to identify users in uploaded photos as a way to encourage people to tag their friends and increase connections. Meta was later sued by multiple US states for profiting off this technology without user consent, and in 2024 was ordered to pay the state of Texas $1.4 billion as part of the claim. Several years earlier, it agreed to pay $650 million in a separate legal suit filed in Illinois. The company will not run this video selfie test in Illinois or Texas, according to Monika Bickert, Meta's vice president of content policy.
[13]
Facebook Owner Meta Restarts Facial Recognition Tech in 'Celeb-Bait' Crackdown
SYDNEY/NEW YORK (Reuters) - Three years after Meta shut down facial recognition software on Facebook amid a groundswell of privacy and regulator pushback, the social media giant said on Tuesday it is testing the service again as part of a crackdown on "celeb bait" scams. Meta said it will enroll about 50,000 public figures in a trial which involves automatically comparing their Facebook profile photos with images used in suspected scam advertisements. If the images match and Meta believes the ads are scams, it will block them. The celebrities will be notified of their enrollment and can opt out if they do not want to participate, the company said. The company plans to roll out the trial globally from December, excluding some large jurisdictions where it does not have regulatory clearance such as Britain, the European Union, South Korea and the U.S. states of Texas and Illinois, it added. Monika Bickert, Meta's vice president of content policy, said in a briefing with journalists that the company was targeting public figures whose likenesses it had identified as having been used in scam ads. "The idea here is: roll out as much protection as we can for them. They can opt out of it if they want to, but we want to be able to make this protection available to them and easy for them," Bickert said. The test shows a company trying to thread the needle of using potentially invasive technology to address regulator concerns about rising numbers of scams while minimising complaints about its handling of user data, which have followed social media companies for years. When Meta shuttered its facial recognition system in 2021, deleting the face scan data of one billion users, it cited "growing societal concerns". In August this year, the company was ordered to pay Texas $1.4 billion to settle a state lawsuit accusing it of collecting biometric data illegally.
At the same time, Meta faces lawsuits accusing it of failing to do enough to stop celeb bait scams, which use images of famous people, often generated by artificial intelligence, to trick users into giving money to non-existent investment schemes. Under the new trial, the company said it will immediately delete any face data generated by comparisons with suspected advertisements regardless of whether it detected a scam. The tool being tested was put through Meta's "robust privacy and risk review process" internally, as well as discussed with regulators, policymakers and privacy experts externally before tests began, Bickert said. Meta said it also plans to test using facial recognition data to let non-celebrity users of Facebook and another one of its platforms, Instagram, regain access to accounts that have been compromised by a hacker or locked due to forgetting a password. (Reporting by Byron Kaye in SYDNEY and Katie Paul in NEW YORK; Editing by Stephen Coates)
[14]
Meta combats celebrity scam ads with face recognition tech
Facebook and Instagram owner Meta is to introduce facial recognition technology to try to crack down on scammers who fraudulently use celebrities in adverts. Elon Musk and personal finance expert Martin Lewis are among those to fall victim to such scams, which typically promote investment schemes and crypto-currencies. Mr Lewis previously told the Today programme, on BBC Radio 4, that he receives "countless" reports of his name and face being used in such scams every day, and had been left feeling "sick" by them. Meta already uses an ad review system which uses artificial intelligence (AI) to detect fake celebrity endorsements but is now seeking to beef it up with facial recognition tech. It will work by comparing images from ads flagged as being dubious with celebrities' Facebook or Instagram profile photos. If the image is confirmed to be a match, and the ad a scam, it will be automatically deleted. Meta said "early testing" of the system had shown "promising results" so it would now start showing in-app notifications to a larger group of public figures who had been impacted by so-called "celeb-bait."
Meta announces the reintroduction of facial recognition technology on Facebook and Instagram to combat scams and aid account recovery, sparking debates on privacy and data security.
Meta, the parent company of Facebook and Instagram, has announced plans to reintroduce facial recognition technology on its platforms. This move comes as part of an effort to enhance security measures and combat scams, particularly those involving celebrity impersonation 1.
One of the primary applications of this technology will be to detect and prevent "celeb-bait" scams. These scams use images of public figures to lure users into clicking on fraudulent advertisements. Meta's new system will compare faces in suspicious ads to the legitimate profile pictures of celebrities on Facebook and Instagram 2.
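The matching step described above can be sketched, in broad strokes, as an embedding comparison: a face detected in the ad is reduced to a numeric vector and compared against vectors from the public figure's profile pictures. The sketch below is illustrative only; Meta has not disclosed its model or thresholds, so the stubbed-in toy vectors, the `is_likely_match` helper, and the 0.8 threshold are all assumptions for demonstration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two face embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likely_match(ad_embedding: np.ndarray,
                    profile_embeddings: list[np.ndarray],
                    threshold: float = 0.8) -> bool:
    # Flag a match if the ad face is close to ANY profile picture.
    # The 0.8 threshold is an illustrative assumption, not Meta's.
    best = max(cosine_similarity(ad_embedding, p) for p in profile_embeddings)
    return best >= threshold

# Toy vectors standing in for the output of a real face-embedding model.
profile_pics = [np.array([0.9, 0.1, 0.2]), np.array([0.85, 0.15, 0.25])]
ad_face = np.array([0.88, 0.12, 0.22])

match = is_likely_match(ad_face, profile_pics)
# Per Meta's stated policy, the embedding derived from the ad would be
# deleted immediately after this one-time comparison, match or not.
```

Note that a positive match only establishes that the ad uses the person's likeness; deciding whether the ad is actually a scam is a separate classification step.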
Meta is also testing a feature that allows users to regain access to locked or compromised accounts through video selfie verification. This process will use facial recognition to compare the video selfie with the user's profile pictures, potentially offering a faster alternative to traditional document-based identity verification methods 3.
The reintroduction of facial recognition technology has raised concerns about privacy and data security. Meta has promised to take a "responsible approach," including encrypting video selfies, deleting facial data immediately after use, and not using this information for any other purpose 4.
Meta's history with facial recognition technology has been complicated. The company previously used the technology for photo tagging but discontinued it in 2021 due to privacy concerns. Meta has also faced legal challenges related to its use of facial recognition, including a $1.4 billion settlement with the state of Texas 5.
The new facial recognition features are being tested globally, with notable exceptions in the United Kingdom and European Union, where stringent data protection regulations are in place. This cautious approach reflects Meta's awareness of the regulatory landscape surrounding biometric data usage 1.
Meta has stated that public figures enrolled in the celeb-bait protection system can opt out through their Accounts Center. The company emphasizes its commitment to transparency and user control over personal data 4.
As Meta moves forward with these new security measures, the company faces the challenge of balancing technological innovation with user privacy concerns. The effectiveness of these tools in combating scams and improving account security will be closely watched by users, regulators, and industry observers alike 2.
© 2024 TheOutpost.AI All rights reserved