Curated by THEOUTPOST
On Wed, 9 Apr, 12:02 AM UTC
4 Sources
[1]
Fake job seekers are flooding U.S. companies that are hiring for remote positions, tech CEOs say
An image provided by Pindrop Security shows a fake job candidate the company dubbed "Ivan X," a scammer using deepfake AI technology to mask his face, according to Pindrop CEO Vijay Balasubramaniyan.

When voice authentication startup Pindrop Security posted a recent job opening, one candidate stood out from hundreds of others. The applicant, a Russian coder named Ivan, seemed to have all the right qualifications for the senior engineering role. When he was interviewed over video last month, however, Pindrop's recruiter noticed that Ivan's facial expressions were slightly out of sync with his words.

That's because the candidate, whom the firm has since dubbed "Ivan X," was a scammer using deepfake software and other generative AI tools in a bid to get hired by the tech company, said Pindrop CEO and co-founder Vijay Balasubramaniyan. "Gen AI has blurred the line between what it is to be human and what it means to be machine," Balasubramaniyan said. "What we're seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job."

Companies have long fought off attacks from hackers hoping to exploit vulnerabilities in their software, employees or vendors. Now another threat has emerged: job candidates who aren't who they say they are, wielding AI tools to fabricate photo IDs, generate employment histories and provide answers during interviews. The rise of AI-generated profiles means that by 2028, 1 in 4 job candidates globally will be fake, according to research and advisory firm Gartner.

The risk to a company from bringing on a fake job seeker can vary, depending on the person's intentions. Once hired, the impostor can install malware to demand ransom from a company, or steal its customer data, trade secrets or funds, according to Balasubramaniyan. In many cases, the deceitful employees are simply collecting a salary that they wouldn't otherwise be able to earn, he said.
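The out-of-sync facial expressions that tipped off Pindrop's recruiter are, in principle, a measurable signal. As a rough illustration only (not Pindrop's actual method), the sketch below estimates the lag between an audio-energy track and a lip-motion track sampled per video frame; both input signals and the three-frame tolerance are assumptions for the example.

```python
# Toy illustration (not Pindrop's method): estimate the offset between a
# candidate's audio energy and lip-motion signal via cross-correlation.
# Both signals are assumed to be sampled once per video frame.

def best_lag(audio_energy, lip_motion, max_lag=10):
    """Return the frame offset at which the two signals correlate best."""
    def corr_at(lag):
        pairs = [
            (audio_energy[i], lip_motion[i + lag])
            for i in range(len(audio_energy))
            if 0 <= i + lag < len(lip_motion)
        ]
        if not pairs:
            return float("-inf")
        return sum(a * m for a, m in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr_at)

def out_of_sync(audio_energy, lip_motion, tolerance_frames=3):
    """Flag a clip whose audio and lip motion are offset by more than
    `tolerance_frames` frames in either direction."""
    return abs(best_lag(audio_energy, lip_motion)) > tolerance_frames
```

A real detector would extract these signals with audio and face-landmark models; the point here is only that "slightly out of sync" can be turned into a thresholded measurement rather than a recruiter's hunch.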
[2]
Fake job seekers are flooding U.S. companies that are hiring for remote positions, tech CEOs say
Cybersecurity and cryptocurrency firms have seen a recent surge in fake job seekers, industry experts told CNBC.
As the companies are often hiring for remote roles, they present valuable targets for bad actors, these people said. Ben Sesser, the CEO of BrightHire, said he first heard of the issue a year ago and that the number of fraudulent job candidates has "ramped up massively" this year. His company helps more than 300 corporate clients in finance, tech and health care assess prospective employees in video interviews.

"Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved," Sesser said. "It's become a weak point that folks are trying to expose."

But the issue isn't confined to the tech industry. More than 300 U.S. firms inadvertently hired impostors with ties to North Korea for IT work, including a major national television network, a defense manufacturer, an automaker and other Fortune 500 companies, the Justice Department alleged in May. The workers used stolen American identities to apply for remote jobs and deployed remote networks and other techniques to mask their true locations, the DOJ said. They ultimately sent millions of dollars in wages to North Korea to help fund the nation's weapons program, the Justice Department alleged.

That case, involving a ring of alleged enablers including an American citizen, exposed a small part of what U.S. authorities have said is a sprawling overseas network of thousands of IT workers with North Korean ties. The DOJ has since filed more cases involving North Korean IT workers.

Fake job seekers aren't letting up, if the experience of Lili Infante, founder and chief executive of CAT Labs, is any indication. Her Florida-based startup sits at the intersection of cybersecurity and cryptocurrency, making it especially alluring to bad actors. "Every time we list a job posting, we get 100 North Korean spies applying to it," Infante said.
"When you look at their resumes, they look amazing; they use all the keywords for what we're looking for." Infante said her firm leans on an identity-verification company to weed out fake candidates, part of an emerging sector that includes firms such as iDenfy, Jumio and Socure.

The fake-employee industry has broadened beyond North Koreans in recent years to include criminal groups located in Russia, China, Malaysia and South Korea, according to Roger Grimes, a veteran computer security consultant. Ironically, some of these fraudulent workers would be considered top performers at most companies, he said. "Sometimes they'll do the role poorly, and then sometimes they perform it so well that I've actually had a few people tell me they were sorry they had to let them go," Grimes said.

His employer, the cybersecurity firm KnowBe4, said in October that it inadvertently hired a North Korean software engineer. The worker used AI to alter a stock photo, combined with a valid but stolen U.S. identity, and got through background checks, including four video interviews, the firm said. He was only discovered after the company found suspicious activity coming from his account.

Despite the DOJ case and a few other publicized incidents, hiring managers at most companies are generally unaware of the risks of fake job candidates, according to BrightHire's Sesser. "They're responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them," he said. "Folks think they're not experiencing it, but I think it's probably more likely that they're just not realizing that it's going on." As the quality of deepfake technology improves, the issue will be harder to avoid, Sesser said.

As for "Ivan X," Pindrop's Balasubramaniyan said the startup used a new video authentication program it created to confirm he was a deepfake fraud.
While Ivan claimed to be located in western Ukraine, his IP address indicated he was actually thousands of miles to the east, possibly in a Russian military facility near the North Korean border, the company said.

Pindrop, backed by Andreessen Horowitz and Citi Ventures, was founded more than a decade ago to detect fraud in voice interactions, but may soon pivot to video authentication. Clients include some of the biggest U.S. banks, insurers and health companies. "We are no longer able to trust our eyes and ears," Balasubramaniyan said. "Without technology, you're worse off than a monkey with a random coin toss."
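The location check that exposed Ivan X, a claimed location contradicted by an IP address, can be sketched as a simple distance test. This is an illustration, not Pindrop's implementation: in practice the coordinates would come from a GeoIP lookup (e.g., a MaxMind-style database), which is left out here, and the 500 km threshold is an arbitrary assumption.

```python
import math

# Illustrative check: flag a candidate whose IP geolocates far from the
# place they claim to be. Coordinates are (latitude, longitude) pairs;
# in a real system the IP-derived pair would come from a GeoIP database.

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers (haversine formula)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_mismatch(claimed, ip_derived, threshold_km=500.0):
    """True if the IP-derived location is implausibly far from the claimed one."""
    return km_between(*claimed, *ip_derived) > threshold_km
```

For a candidate claiming western Ukraine whose IP resolves to the Russian far east, the distance runs to thousands of kilometers and the check fires; the threshold is a tuning choice, since VPNs and travel produce legitimate smaller mismatches.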
[3]
How Fake AI Job Applications Pose Risks for Employers
The network cites a story from Pindrop Security, an Atlanta, Georgia-based voice authentication startup. The company's team grew suspicious of one particular applicant because, during his video interview for a senior engineering role, there was a mismatch between his facial expressions and the words he spoke. It turns out the applicant, dubbed "Ivan X," was using deepfake generative AI tools to cover his identity in an effort to get hired.

CEO and co-founder Vijay Balasubramaniyan said "Gen AI has blurred the line between what it is to be human and what it means to be machine," and added that he's seeing people create fake identities, hide behind fake voices and even use face-swapping tech. Research firm Gartner, CNBC reported, predicts that by 2028, 1 in every 4 job applicants will be fake.

The phenomenon isn't limited to cybersecurity jobs like those offered by Pindrop, either. In May the Justice Department alleged that over 300 U.S. companies had accidentally hired impostors to work remote IT-related jobs. The workers were actually tied to North Korea, ultimately sending millions of dollars in wages back home, which the department alleged would be used to help fund the authoritarian nation's weapons program. More recent reports say this deepfake North Korean scheme has actually expanded, targeting companies across the globe.
[4]
Fraudsters Use Generative AI Tools to Secure Remote Jobs | PYMNTS.com
Generative AI tools are reportedly powering a new threat to companies: job seekers who aren't who they say they are. Using artificial intelligence (AI) tools to create false profiles, photo IDs, employment histories and even deepfake videos for interviews, these fraudsters aim to secure remote jobs, CNBC reported Tuesday (April 8).

In these jobs, they can then steal the company's data, trade secrets or funds; install malware and then demand a ransom; or, in some cases, collect a salary that they can give to the North Korean government, according to the report. These scams often target cybersecurity and cryptocurrency firms but are also common across industries, the report said. By 2028, 1 in 4 job candidates will be fake, the report said, citing research and advisory firm Gartner.

Firms that have encountered fake job seekers have deployed solutions to prevent it from happening again. These include using identity-verification companies to vet candidates and video authentication programs to spot deepfake videos, per the report.

Remote hiring, onboarding and training are some of the toughest tests faced by employers, according to the 2021 PYMNTS Intelligence and Jumio collaboration, "Digital Identity Tracker®." The report found that during the pandemic, digital identity verification solutions emerged as valuable tools for employers to remotely hire and onboard new workers and to replace cumbersome manual processes.

Businesses are harnessing AI to bolster security measures and combat increasingly sophisticated cyber threats, PYMNTS reported in May. For example, Proofpoint addresses the increasing risks associated with business email compromise (BEC) and malicious URLs with a predelivery defense system that protects against social engineering tactics and malicious links.
"Organizations need a simple, unified and effective way to catch every threat, every time, every way a user may encounter it, using every form of detection," Darren Lee, executive vice president and general manager of People Protection Group, Proofpoint, said at the time in a press release.

Businesses are a prime target for bad actors and scammers, PYMNTS reported in April 2024. The FBI Internet Crime Report said that in 2023, BEC attacks in the U.S. resulted in $2.9 billion in adjusted losses, while malware attacks accounted for $59.6 million.
Tech companies report a surge in AI-generated fake job applicants, posing significant risks to employers and highlighting the need for enhanced verification processes in remote hiring.
In a startling development, tech CEOs and cybersecurity experts are sounding the alarm on a new threat to companies: AI-powered fake job seekers. These fraudsters are leveraging advanced generative AI tools to create convincing fake identities, complete with fabricated photo IDs, employment histories, and even deepfake video interviews [1][2].

Vijay Balasubramaniyan, CEO of Pindrop Security, highlighted a recent incident where a Russian coder named "Ivan X" applied for a senior engineering role using deepfake technology. The deception was only uncovered when recruiters noticed slight inconsistencies in the candidate's facial expressions during the video interview [1].

The issue is not isolated to a few cases. According to research firm Gartner, by 2028, an estimated one in four job candidates globally will be fake [1][2]. This trend poses significant risks to companies, especially those hiring for remote positions in cybersecurity, cryptocurrency, and IT sectors [3].
The potential consequences of hiring these impostors are severe. Once employed, they may install malware to demand ransom, steal customer data, trade secrets or funds, or simply collect a salary they could not otherwise earn [1][4].
In a more alarming development, the U.S. Justice Department alleged that over 300 American firms inadvertently hired impostors with ties to North Korea for IT work. These workers, using stolen American identities, sent millions of dollars in wages back to North Korea, potentially funding the nation's weapons program [2][3].
Companies are scrambling to adapt to this new threat. Some firms are turning to identity-verification companies and deploying advanced video authentication programs to spot deepfakes [4]. For instance, Pindrop Security used its own video authentication program to confirm that "Ivan X" was indeed a fraud [1].
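The countermeasures described here, identity verification, video authentication and location consistency, are typically layered rather than used alone. The sketch below is purely illustrative (the check names and signals are hypothetical, not any vendor's actual API): any single failed check escalates the candidate to a human reviewer instead of auto-rejecting.

```python
# Hypothetical layered screening pass. Each check is an independent
# boolean signal produced upstream (document verification, video
# liveness, IP/location consistency); a missing signal counts as failed.

SCREENING_CHECKS = ("id_document_valid", "video_liveness_passed", "location_consistent")

def screening_verdict(signals):
    """Return ('clear', []) if every check passed, otherwise
    ('escalate', failed_checks) so a human reviewer can decide."""
    failed = [name for name in SCREENING_CHECKS if not signals.get(name, False)]
    return ("clear", []) if not failed else ("escalate", failed)
```

Escalating rather than rejecting matters because each signal has false positives on its own, e.g., a legitimate candidate on a VPN would fail the location check.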
Ben Sesser, CEO of BrightHire, notes that the number of fraudulent job candidates has "ramped up massively" this year. His company assists over 300 corporate clients in finance, tech, and healthcare with assessing prospective employees in video interviews [2].

This trend is forcing a reevaluation of remote hiring practices. As Sesser points out, "Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved" [2].

The challenge extends beyond just identifying fake applicants. In some cases, these fraudulent workers perform their roles exceptionally well, making detection even more difficult. Roger Grimes, a computer security consultant, noted instances where companies were reluctant to let go of high-performing employees who turned out to be impostors [2].

As deepfake technology continues to improve, the issue is likely to become more prevalent and harder to detect. Many hiring managers remain unaware of the risks, highlighting the need for increased awareness and improved security measures in the hiring process [2][4].
The rise of AI-powered fake job seekers represents a significant challenge at the intersection of technology, cybersecurity, and human resources. As companies increasingly rely on remote work and digital hiring processes, the need for robust verification systems and heightened vigilance has never been more critical.
© 2025 TheOutpost.AI All rights reserved