7 Sources
[1]
Zoom teams up with World to verify humans in meeting | TechCrunch
Meeting platform Zoom has announced a partnership with World, Sam Altman's human ID verification company, to ensure that the people attending meetings are actually human and not AI-generated imposters.

The threat is real and growing fast. The most dramatic example came in early 2024, when engineering firm Arup lost $25 million after an employee in Hong Kong authorized a series of wire transfers during what appeared to be a routine video call with the company's CFO and several colleagues. Every person on that call -- except the victim -- turned out to be an AI-generated deepfake. A similar attack hit a multinational firm in Singapore in 2025. Across the board, financial losses from deepfake-enabled fraud exceeded $200 million in just the first quarter of last year, according to one estimate, and the average loss per corporate incident now tops $500,000, according to security industry reports. So while deepfake video-call fraud may not be something most people ever encounter personally, it represents a serious risk for businesses, especially those that regularly conduct high-value transactions over video.

World noted that while some efforts already exist to catch deepfakes in meetings, they are limited to analyzing video frames for telltale signs of AI manipulation. Both companies said that because video models are getting better, those frame-by-frame detection methods are increasingly unreliable.

For this new feature, World uses its World ID Deep Face tech, which takes a three-pronged approach to verifying that a participant is a real person. It cross-references a signed image taken at the time of the user's registration through World's Orb device, a real-time face scan from the user's device, and a live video frame visible to other meeting participants. It only verifies someone when all three match, at which point a "Verified Human" badge appears on that participant's tile. (Yes, life is getting weird.)
Zoom said that hosts can enable a Deep Face waiting room to require all participants to verify their identity. Participants can also request mid-call that someone verify themselves on the spot. "This integration is part of Zoom's open ecosystem approach, giving customers more ways to build trust into their workflows based on what matters most for their use case," Zoom spokesperson Travis Isaman said via email. Beyond Zoom, Altman's World has been building partnerships with a range of consumer platforms, including Tinder and Visa, for human verification. Last month, it released tech to verify that real humans, rather than automated AI programs, are behind AI shopping agents at the point of purchase.
[2]
Zoom adds World ID verification to prove meeting participants are human, not deepfakes
Summary: Zoom has partnered with World, Sam Altman's biometric identity company, to let meeting participants verify they are human using World's Deep Face technology, which cross-references iris-scanned biometric profiles with live video to display a "Verified Human" badge. The feature responds to deepfake fraud that cost businesses over $200 million in Q1 2025 alone, including a $25 million loss at engineering firm Arup, though World's iris-scanning Orb system faces ongoing regulatory action in Spain, Germany, the Philippines, and several other countries.

Zoom has partnered with World, the biometric identity company co-founded by Sam Altman, to let meeting participants prove they are real humans and not AI-generated deepfakes. The integration uses World's Deep Face technology to cross-reference a participant's live video feed against their iris-scanned biometric profile, and displays a "Verified Human" badge next to their name when the match succeeds. Hosts can enable a Deep Face waiting room that requires verification before anyone joins, and participants can request that someone verify themselves mid-call.

The feature addresses a threat that has moved from theoretical to expensive. In early 2024, engineering firm Arup lost $25 million after an employee in Hong Kong authorised a series of wire transfers during a video call in which every other participant turned out to be an AI-generated deepfake of his colleagues, including the company's CFO. A similar attack hit a multinational firm in Singapore in 2025. Across the industry, deepfake-enabled fraud exceeded $200 million in losses in the first quarter of 2025 alone, and the average loss per corporate incident now tops $500,000.

World's Deep Face takes a three-pronged approach.
It cross-references a signed image captured during the user's original registration through World's Orb device, a spherical biometric scanner that photographs iris patterns, with a real-time face scan from the user's phone or computer and a live video frame visible to other meeting participants. Verification only succeeds when all three inputs match. The process runs locally on the participant's device, and World says no personal data leaves the phone.

This is architecturally different from the deepfake detection tools already available on Zoom's marketplace. Products from Pindrop, Reality Defender, and Resemble AI analyse video frames for telltale signs of AI manipulation, flagging synthetic media in real time. Both Zoom and World said that because video generation models are improving rapidly, those frame-by-frame detection methods are becoming increasingly unreliable. Deep Face sidesteps the detection problem entirely by verifying the person's identity against a biometric record rather than trying to determine whether the pixels on screen were generated by software.

The trade-off is that Deep Face requires participants to have a World ID, which means they must have visited one of World's physical Orb devices to have their irises scanned. The network currently has around 18 million verified users across 160 countries and roughly 1,500 active Orbs. That is a small fraction of Zoom's user base, which limits the feature's immediate utility. For most meetings, the existing frame-analysis tools will remain the practical option. Deep Face is designed for high-stakes calls where identity certainty justifies the friction of requiring biometric pre-registration.

Zoom's spokesperson Travis Isaman described the integration as part of the company's "open ecosystem approach, giving customers more ways to build trust into their workflows based on what matters most for their use case." The framing is deliberate.
Zoom is not endorsing World ID as its default identity layer; it is offering it as one option among several in a marketplace that already includes multiple deepfake detection and identity verification tools.

For Zoom, the partnership is defensive. The company's revenue reached $4.67 billion in fiscal 2025, growing at a modest 3%, and its strategic challenge is to remain the default platform for business communication as competitors add AI features across the board. Zoom has responded with AI avatars, an AI-powered office suite, and cross-application AI notetakers. Adding human verification addresses a different vector: making Zoom the platform that enterprises trust for sensitive conversations. In a market where a single deepfake call can cost $25 million, that trust has a measurable commercial value.

For World, the Zoom integration is a distribution win. The company, which rebranded from Worldcoin in 2024, has struggled to move beyond crypto-adjacent early adopters. Its partnerships with Visa, Tinder, Razer, and Coinbase have expanded the contexts in which a World ID is useful, but none of those integrations create the kind of immediate, visceral demand that a corporate security use case does. If a company's treasury team requires World ID verification for any video call involving wire transfer authorisation, that creates institutional adoption that individual consumer partnerships do not.

World's Orb-based identity system has faced sustained regulatory scrutiny. Spain's data protection authority issued a formal warning in February 2026 citing GDPR violations and insufficient data protection assessments. Germany's Bavarian data regulator ordered the deletion of iris data in December 2024. The Philippines issued a cease-and-desist order in October 2025 for obtaining consent through financial incentives. Investigations or suspensions have occurred in Argentina, Kenya, Hong Kong, and Indonesia.
The governance frameworks emerging around biometric AI in 2026, including the EU AI Act's high-risk classification for biometric identification systems, add further complexity. World maintains that its zero-knowledge proof architecture means verification happens without exposing personal data, and that iris images are encrypted and stored only on the user's device. Critics argue that the collection process itself, requiring a physical visit to an Orb to have your eyes scanned, creates risks that privacy-preserving cryptography does not fully address, particularly when recruitment has disproportionately targeted lower-income communities.

For enterprises evaluating the Zoom integration, the calculus is whether the security benefit of biometric human verification outweighs the regulatory and reputational risk of requiring employees or counterparties to register with a company that multiple data protection authorities have sanctioned. That calculation will differ by jurisdiction and by industry. A Wall Street trading desk conducting a $100 million deal over Zoom may decide the risk is worth it. A European public-sector organisation almost certainly will not.

The Zoom-World partnership is a marker of how far the deepfake threat has advanced. Two years ago, the Arup incident was treated as an extraordinary outlier. Today, deepfake-enabled fraud is a billion-dollar category, AI-generated video is sophisticated enough to defeat frame-analysis detection, and the question of whether the person on a video call is real has become a legitimate enterprise security concern. The solution Zoom and World are proposing, biometric identity verification anchored to iris scans, works technically but introduces its own set of complications around privacy, regulatory compliance, and the barrier to adoption that physical Orb registration creates. It is a feature for specific, high-value use cases rather than a default setting for every Monday morning stand-up.
But the fact that Zoom considers it worth integrating at all tells you something about where the technology landscape is heading: toward a future where proving you are human is no longer something you can take for granted, even when you are looking someone in the eye.
[3]
'The face thing is probably going to break' -- Sam Altman-backed firm warns AI will soon outgrow facial recognition, but says its 'proof of human' system World ID could be part of the solution
Facial recognition has become one of the default ways we prove who we are online, from unlocking our phones to logging into banking apps. But according to a senior figure at a Sam Altman-backed startup, that entire system may not hold for much longer, thanks to AI. "Over time the AI is going to get so powerful that really, the face thing is probably going to break," says Tiago Sada, Chief Product Officer for Tools for Humanity, as I catch up with him to discuss the latest upgrade to its World ID system. The company behind the controversial Orb device is rolling out new ways to use its "proof of human" system in a world where AI-generated faces, voices, and identities are getting harder to spot by the day.

How the Orb works

If you cast your mind back to about a year ago you'll remember the Orb -- essentially a fancy camera inside a round case -- that could verify that you were human and give you a World ID to prove it. The Orb has a bunch of sensors inside. Some of them are similar to what's inside your iPhone, like near- and far-spectrum infrared, but it also has cameras and a very powerful Nvidia chip. So, it's able to look at you and figure out whether it's a real person it's looking at, right now. Rather than own an Orb, you simply locate your nearest one in a mall or coffee shop and visit it to get verified. Your World ID then lives on your phone and lasts a few years, like a driving licence. The problem was, there wasn't much you could do with it -- but from today that's starting to change. World ID has had a "protocol upgrade", so it's capable of more, and there are a bunch of new partnerships launching, so that you can finally use it to prove you're human and restore trust in a lot of the apps you use on a daily basis.

Tinder, Reddit and Zoom

"Last year we started piloting with Tinder in Japan, and the pilot has done really well, so they're going to be announcing a global rollout of human verification on Tinder to prevent catfishing", says Sada. "Reddit also recently announced that they're starting to test World ID for proof of humanity. And we're going to introduce a new product called Concert Kit, that is a tool that artists can use to reserve some of the tickets for their concerts for verified humans, to protect them from scalping bots." Concert Kit will be compatible with all the major ticketing platforms, including Ticketmaster, Eventbrite, Fixr and others. Artists such as Thirty Seconds to Mars will be using it as part of their next European tour, and it will roll out during the Bruno Mars World Tour featuring DJ Pee Wee (aka Anderson .Paak), where verified humans will have exclusive access to VIP suite experiences at select stops. Businesses can also benefit from World ID integrations -- DocuSign is introducing it so you know it's a human signing your documents, while Zoom will be using World's deepfake protection system, called World ID Deep Face, to prove you're talking to a human, not a deepfake. Tools for Humanity is even introducing a set of products called Agent Kit for AI agents, so that it can verify they're acting on behalf of a real human when they do whatever you've asked them to.

Why face recognition might not be enough

One of the key questions is why systems such as Face ID aren't already sufficient. If Face ID on your iPhone is good enough for online banking apps, why do you need to go further? "Face ID is really good for authentication, but not for verification. If you try hard enough, you're going to be able to break that with AI or a mask, or something like that", Sada replies. "Over time the AI is going to get so powerful that really, the face thing is probably going to break."

While somebody wearing a Mission Impossible-style mask of your face to defeat your facial recognition software is unlikely to be a problem the average person will face, we are all at risk from AI's ability to generate entirely believable digital humans at scale. But what about people who don't want to be part of a system like World ID? Are we moving towards a future where people will be forced to prove they are human? "No, so we definitely think that it's something that should be optional. Rather than gating the product, our partners use it to boost the experience. So, for example, Tinder gives you five extra boosts if you're a verified human, because they know they can trust you, right? But you can certainly continue using Tinder without that. I think that's what we see across all of our partners."

What it means for the future

That answer gives me some comfort about the dangers of a dystopian future where a whole population needs to be catalogued and verified to function in society. OpenAI's CEO, Sam Altman, is one of the backers of Tools for Humanity, which puts him in the unusual position of helping to fund both the rapid advancement of AI and a system designed to defend against its consequences. Regardless, the problem of AI deepfakes is only going to get worse; there's an entire industry pushing forward at speed. At first glance, the Orb looks like something out of a Pixar movie, but its appearance obscures its critical usefulness to society. The Orb might look like a gimmick, but if the "face thing" really does start to break, systems like it could become far more relevant than they first appear. While I was initially skeptical of the need for a futuristic object to verify me as human, after talking to Sada I'm starting to think that it's something we're all going to have to take seriously in the future.
[4]
Sam Altman's "proof of human" company pushes into mainstream services
* But as AI agents proliferate, companies are increasingly looking for ways to verify not just who users are, but whether a real human is behind an online interaction at all.

Driving the news: World upgraded the protocol behind its identity tool, World ID, and is open-sourcing it so any app can integrate it as an authentication layer.
* The company is also launching a standalone World ID app, where users can store credentials and use them to log into other services.

Between the lines: The announcement bundles together a range of previously introduced ideas -- from AI agent verification tools to non-biometric sign-in options -- as World tries to push its technology into more mainstream use.
* World argues that verifying humans is becoming more urgent as AI companies roll out new agents and work towards AGI -- making it harder to distinguish AI from real people.
* "When anything can be fake, you don't know who and what to trust," Tiago Sada, chief product officer at Tools for Humanity, which develops World, told Axios.

How it works: World ID is designed to function more like a CAPTCHA replacement than a traditional identity system, Sada said.
* The protocol has three tiers for how users can validate their identities: taking a selfie, submitting an official government-issued ID, and going in person to an "orb" to scan your iris.
* Each company that uses World ID to verify someone's "humanness" decides which level of verification they require.

Zoom in: World is now leaning on partnerships to drive adoption.
* Zoom plans to integrate World ID to help verify participants on video calls and guard against deepfake impersonation.
* DocuSign is testing World ID as a way to confirm that a real human -- not a bot or compromised account -- is behind a digital signature.
* Okta and Vercel are working with World on tools to verify that a real human approved certain actions taken by AI systems.
* Tinder is expanding a previous pilot in Japan to the U.S., allowing users to verify that a real person is behind a profile.
* VanEck is testing an in-office "orb" for employee verification.
* World is also launching a "Concert Kit" tool designed to help artists reserve tickets for verified humans and cut down on bot-driven ticket scalping.

By the numbers: About 17.9 million people have signed up for World ID globally, according to the company.
* The Wall Street Journal reported last month that roughly 1.1 million of those users are in North America.

Yes, but: Analysts have called the program "problematic on many levels" due to security and governance concerns.

What to watch: World will soon expand the number of "orbs" available in San Francisco, New York City and Los Angeles so most people in those cities are within about 5-10 minutes of one, Sada said.
* World also plans to bring its "orb-on-demand" service to San Francisco after piloting it in Argentina last year, Sada added.
[5]
Zoom will now check if you are a human or an AI imposter during video meetings
Biometric badges, iris scans, and AI bouncers: welcome to the future of your Monday morning standup. Zoom video calls just got a new kind of awkward: the platform will now ask you whether you're human. It has partnered with World, Sam Altman's iris-scanning identity company (previously known as Worldcoin), to add real-time human verification inside meetings. The feature, launched on April 17, 2026, is part of the World ID 4.0 rollout. It lets hosts confirm that every face on the call belongs to a real person, not an AI-generated imposter.

How does the "verified human" badge actually work?

For those wondering how World's Deep Face technology works, it involves a three-step process. It cross-references a signed image from a user's original Orb registration, a live face scan from the device, and the frame of the video that's visible to the other participants in the meeting. Only when the three samples match does a "Verified Human" badge appear next to the user's name. To me, it feels weird and ironic that I'd need to prove I'm a human just to be seen as one in a Zoom meeting. Hosts can also make Deep Face verification mandatory, preventing unverified participants from joining entirely. Mid-call, on-the-spot checks are also possible. So, whether you think your colleague is looking a bit funny, or you simply want to annoy someone, you can demand a check in real time.

Why is this even necessary?

Simple: deepfake fraud is no longer something that you hear about from a friend's friend or read about in weekend blogs. In early 2024, engineering firm Arup lost $25 million after an employee in Hong Kong authorized wire transfers during a video call where everyone except the victim turned out to be a deepfake. Something similar happened to a multinational firm in Singapore in 2025. Moreover, financial losses from deepfake-enabled fraud exceeded $200 million in the first quarter of last year alone.

The threat is no longer hypothetical; it's something that a growing number of people and enterprises are facing. The direction is clear: biometric proof of personhood is fast becoming a workplace norm.
[6]
Sam Altman's World Teams With Zoom, Tinder to Better Verify Humans in the AI Age - Decrypt
World -- formerly Worldcoin -- unveiled a major World ID upgrade on Friday, introducing account-based architecture for "proof of human" verification alongside new integrations with Tinder, Zoom, and Docusign. World, which was co-founded by OpenAI CEO Sam Altman, introduced the World ID app as a dedicated experience for managing and using proof-of-human verification across the internet. The standalone application represents a shift from the company's previous wallet-integrated approach to identity verification.

The company is arguably best known for its iris-scanning Orb device, which scans and helps verify humans for use across an array of online applications. World incentivized use of the Orb with its Worldcoin (WLD) crypto token -- which has fallen about 10% on the day to a recent price of $0.286 -- but has expanded its proof-of-human suite to include other forms of verification.

The company on Friday also launched Concert Kit, a tool powered by World ID that enables artists to reserve tickets for verified humans. The platform aims to combat bot-driven ticket scalping by requiring human verification for event access. Grammy-winning musician Anderson .Paak appeared at Friday's event to help reveal the technology.

World and Vercel are teaming up to bring human-in-the-loop verification to developers building on Vercel's new open source Workflow SDK. Okta plans to build Human Principal, a product allowing API builders to verify whether a human stands behind an agent. Match Group, Tinder's parent company, is expanding its existing World ID partnership to serve U.S. users, while World announced business-centric agreements with Zoom and Docusign. Zoom, the video meeting app, will integrate World's deepfake detection technology to try and spot fakes, with fund manager VanEck among the firms currently trialing the tech. Meanwhile, Docusign will offer World ID support to ensure that whoever is supposed to digitally sign a document is actually the person they claim to be.
The launch comes as Worldcoin's network has reached 18 million verified humans across 160 countries. The timing reflects growing urgency around human verification as an increasing share of internet traffic comes from AI chatbots, agents, and bots. "Proof of human and verified human identity vaulted to a critical priority for social networks and banking and financial systems as AI and agentic-AI capabilities experienced an exponential step forward in the past few months," said Tom Lee, Chairman of Ethereum treasury firm BitMine Immersion Technologies and board member of Worldcoin treasury firm Eightco, in a statement. (Disclosure: Lee is an investor in Dastan, the parent company of an editorially independent Decrypt.)
[7]
Zoom Partners With Sam Altman's World To Tackle AI Imposters In Meetings - Visa (NYSE:V), Zoom Communicat
World was developed in 2023 by Tools for Humanity, a technology company co-founded in 2019 by Altman and Alex Blania.

How It Works

The feature uses World's World ID Deep Face technology, which employs a three-step process to confirm a participant's authenticity. This includes cross-referencing a signed image from the user's registration, a real-time face scan from the user's device, and a live video frame visible to other meeting participants. A "Verified Human" badge will appear on a participant's tile only when all three verifications match. According to Zoom, the Deep Face Waiting Room requires participants to verify they are real humans before joining. Participants can also request mid-call verification from others. Trevor Traina, chief business officer at Tools for Humanity, said, "As AI continues to blur the line between real and synthetic, establishing trust online becomes essential."

Expanding the Trust Ecosystem

The rise of AI-generated deepfakes has been a growing concern, particularly in the realm of financial fraud. In early 2024, an employee at engineering firm Arup was tricked into sending $25 million to scammers during a video call that appeared to involve the company's CFO and other trusted colleagues.
Zoom has partnered with Sam Altman's World to integrate human verification technology into video meetings. The new feature uses World's Deep Face technology to confirm participants are real people, not AI-generated imposters. The move responds to escalating deepfake fraud that cost businesses over $200 million in Q1 2025 alone, including a $25 million loss at engineering firm Arup.
Zoom has announced a partnership with World, Sam Altman's biometric identity company, to integrate human verification directly into video meetings [1]. The feature, launched on April 17, 2026, addresses a threat that has moved from theoretical to financially devastating: AI-generated deepfakes impersonating real people during business calls [5]. Using World ID and World's Deep Face technology, meeting participants can now display a "Verified Human" badge next to their name, confirming they are real people rather than imposters [2].

Source: The Next Web

The integration represents a significant shift in how identity is verified in digital interactions. Hosts can enable a Deep Face waiting room that requires all participants to verify their identity before joining, and participants can request mid-call that someone verify themselves on the spot [1]. This proof-of-human system is Zoom's defensive move to maintain trust as competitors add AI features across the board.
Source: TechCrunch
The threat of deepfake fraud has materialized into concrete financial losses that justify the friction of biometric verification. In early 2024, engineering firm Arup lost $25 million after an employee in Hong Kong authorized wire transfers during what appeared to be a routine video call with the company's CFO and several colleagues [1]. Every person on that call except the victim turned out to be an AI-generated deepfake. A similar attack hit a multinational firm in Singapore in 2025 [2].

Across the industry, financial losses from deepfake-enabled fraud exceeded $200 million in just the first quarter of 2025, according to estimates [1]. The average loss per corporate incident now tops $500,000, according to security industry reports [2]. While deepfake video-call fraud may not be something most people encounter personally, it represents a serious risk for businesses that regularly conduct high-value transactions over video.

World's Deep Face takes a three-pronged approach to verifying that a participant is a real person [1]. It cross-references a signed image captured during the user's original registration through World's Orb device (a spherical biometric scanner that photographs iris patterns) with a real-time face scan from the user's phone or computer and a live video frame visible to other meeting participants [2]. Verification only succeeds when all three inputs match. The process runs locally on the participant's device, and World says no personal data leaves the phone.
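At its core, the gating described above reduces to an all-or-nothing check: the badge appears only if every biometric comparison succeeds. The minimal Python sketch below illustrates that logic; the type names and the `faces_match` placeholder are illustrative assumptions, not World's actual API, and a real implementation would use a biometric matching model rather than a byte comparison.

```python
from dataclasses import dataclass

@dataclass
class VerificationInputs:
    orb_registration_image: bytes  # signed image captured at Orb enrollment
    live_device_scan: bytes        # real-time face scan from the user's device
    visible_video_frame: bytes     # the video frame other participants can see

def faces_match(a: bytes, b: bytes) -> bool:
    """Stand-in for a biometric face-matching model (illustration only)."""
    return a == b

def verified_human(inputs: VerificationInputs) -> bool:
    # The "Verified Human" badge appears only when every comparison
    # succeeds; a single mismatch means no verification.
    return (faces_match(inputs.orb_registration_image, inputs.live_device_scan)
            and faces_match(inputs.live_device_scan, inputs.visible_video_frame))
```

The conjunction is the point: spoofing the on-screen frame alone is not enough, because it must also agree with both the live device scan and the signed enrollment record.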
Source: Axios
This architectural approach differs fundamentally from the deepfake detection tools already available on Zoom's marketplace, such as products from Pindrop, Reality Defender, and Resemble AI, which analyze video frames for telltale signs of AI manipulation [2]. Both Zoom and World noted that because video generation models are improving rapidly, those frame-by-frame detection methods are becoming increasingly unreliable. Deep Face sidesteps the detection problem entirely by verifying the person's identity against a biometric record rather than trying to determine whether the pixels on screen were generated by software.

The trade-off for this enhanced security is that Deep Face requires participants to have a World ID, which means they must have visited one of World's physical Orb devices to have their iris scans recorded [2]. The network currently has around 18 million verified users across 160 countries and roughly 1,500 active Orbs [2]. About 17.9 million people have signed up for World ID globally, with roughly 1.1 million of those users in North America [4]. That represents a small fraction of Zoom's user base, which limits the feature's immediate utility.

For World, the Zoom integration is a distribution win: the company, which rebranded from Worldcoin in 2024, has struggled to move beyond crypto-adjacent early adopters [2]. Tools for Humanity, the company behind the Orb, is rolling out new ways to use its proof-of-human system across multiple platforms [3]. Tinder is expanding human verification globally to prevent catfishing, DocuSign is testing World ID to confirm a real human is behind digital signatures, and Okta and Vercel are working with World on tools to verify that a real human approved certain actions taken by AI agents [4].
Tiago Sada, Chief Product Officer for Tools for Humanity, warns that traditional facial recognition systems may soon become obsolete. "Over time the AI is going to get so powerful that really, the face thing is probably going to break," Sada explained [3]. While Face ID on devices like iPhones works well for authentication, it may not be sufficient for verification as AI capabilities advance. "If you try hard enough, you're going to be able to break that with AI or a mask, or something like that," he added.

World argues that verifying humans is becoming more urgent as AI companies roll out new agents and work towards AGI, making it harder to distinguish AI from real people [4]. "When anything can be fake, you don't know who and what to trust," Sada told Axios. The protocol upgrade behind World ID positions it to function more like a CAPTCHA replacement than a traditional identity system, with three tiers for how users can validate their identities: taking a selfie, submitting an official government-issued ID, and going in person to an Orb for an iris scan [4].
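The tiered scheme, in which each integrating app chooses the minimum level of verification it will accept, maps naturally onto an ordered enum. The sketch below is an assumption about how such a policy check might look, not World's actual SDK; the names are hypothetical.

```python
from enum import IntEnum

class VerificationTier(IntEnum):
    SELFIE = 1         # lightest check: a selfie
    GOVERNMENT_ID = 2  # stronger: an official government-issued ID
    ORB_IRIS = 3       # strongest: an in-person iris scan at an Orb

def meets_requirement(user_tier: VerificationTier,
                      required: VerificationTier) -> bool:
    # A higher tier satisfies any lower requirement: an Orb-verified user
    # passes a selfie-level check, but not the other way around.
    return user_tier >= required
```

Under this model, a dating app might require only `SELFIE`, while a call authorizing a wire transfer could demand `ORB_IRIS`, and the same World ID would satisfy both.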
For Zoom, the partnership is defensive. The company's revenue reached $4.67 billion in fiscal 2025, growing at a modest 3%, and its strategic challenge is to remain the default platform for business communication [2]. Adding human verification positions Zoom as the platform that enterprises trust for sensitive conversations. In a market where a single deepfake call can cost $25 million, that trust has measurable commercial value. "This integration is part of Zoom's open ecosystem approach, giving customers more ways to build trust into their workflows based on what matters most for their use case," Zoom spokesperson Travis Isaman said [1].
World plans to expand the number of Orbs available in San Francisco, New York City, and Los Angeles so that most people in those cities are within about 5-10 minutes of one [4]. The company also plans to bring its "orb-on-demand" service to San Francisco after piloting it in Argentina. Despite these expansion efforts, the system faces ongoing regulatory action in Spain, Germany, the Philippines, and several other countries [2]. As bots and AI agents proliferate, the direction is clear: biometric proof of personhood is becoming a workplace norm, transforming how we establish trust in digital interactions.